LabSim Chapter 9
9.1.1 Incident Response Process
Click one of the buttons to take you to that part of the video.
Incident Response 00:00-00:22
Incident response is a systematic approach to handling and managing the aftermath of a security breach or cyberattack, also known as an IT incident, computer incident, or security incident. The goal is to handle the situation in a way that limits damage and reduces recovery time and costs.
Incident Response Lifecycle 00:22-00:45
The incident response lifecycle provides a structured approach to addressing and managing the aftermath of a security breach or cyberattack to limit damage and reduce recovery time and costs. The incident response lifecycle includes seven steps: preparation, detection, analysis, containment, eradication, recovery, and lessons learned.
Preparation 00:45-01:17
Part of the preparation step is to ensure that systems are resilient to attack. This includes hardening systems, writing policies and procedures, and setting up confidential lines of communication.
The preparation step also includes creating incident response resources and procedures. An Incident Response Plan is like a fire drill for cyber threats. Just as a fire drill outlines steps to evacuate a building safely, an Incident Response Plan contains instructions on how to react when a cyber-attack occurs.
Detection 01:17-01:37
The detection step discovers indicators of threat actor activity. Indicators that an incident may have occurred could be generated from an automated intrusion system or other monitoring and alerting systems. Incidents can also be detected using threat hunting methods or by reports made by employees or customers.
Containment 01:37-02:08
The containment phase is where immediate action is taken to prevent further damage or compromise of the system. This could involve disconnecting affected systems or devices from the network to prevent the spread of the breach. It's the equivalent of stopping the bleeding in a medical emergency, a quick and temporary fix to halt the immediate threat. Containment strategies can vary based on the severity and nature of the incident. After containment, an in-depth investigation leads to the next phase—eradication.
Recovery 02:08-02:49
Next is the recovery phase. During this phase, systems and networks are returned to their normal function. All systems affected by the cyberattack are cleaned, restored, and put back into operation. Recovery may involve reinstalling system components, changing passwords, and
patching software. System administrators should carefully monitor systems during recovery for any signs of abnormal activity, as this could indicate that not all threat elements have been successfully eradicated. Regular operations can resume once the systems are deemed secure and functioning normally. The recovery phase isn't considered complete until all systems are back in operation and all data has been recovered.
Lessons Learned 02:49-03:23
The last phase in the incident response lifecycle is the "lessons learned" stage. During this phase, the incident response team conducts a post-incident review. This review aims to identify what went well during the response, what could've been done better, and what improvements can be made to the incident response process. It provides an opportunity to learn from the incident and improve future response efforts. The lessons learned may include changes to policies, procedures, or infrastructure. They may lead to further security and awareness training for employees.
Summary 03:23-03:39
That's it for this lesson. In this lesson, we've discussed incident response. We looked at the seven steps of the incident response lifecycle: preparation, detection, analysis, containment, eradication, recovery, and lessons learned.
9.1.2 Incident Response Process Facts
This lesson covers the following topics:
Security incident
Incident response process
Security Incident
A security incident is an event or series of events resulting from a security policy violation. The incident may or may not adversely affect an organization's ability to conduct business. It is crucial to organizations that security incidents are recognized and dealt with appropriately. The following table describes types of security incidents.
Type
Description
Employee errors
Unintentional actions by an employee that cause damage or leave network systems vulnerable to attack.
Unauthorized act by an employee
Intentional actions by an employee to cause harm to a company's network or data. This is also known as an insider threat.
External intrusion attempts
Intentional actions by a threat actor not employed by or associated with an organization to exploit attack vectors. The threat actor's intent is to harm an organization or profit from access to an organization's resources.
Virus and harmful code attacks
Tools used by threat actors to disrupt company business, compromise data, or hurt the company's reputation.
Unethical gathering of competitive information
This is also known as corporate espionage. The goal is to obtain proprietary information to gain a competitive advantage or steal clients.
Incident Response Process
A cybersecurity incident refers to either a successful or attempted violation of the security properties of an asset, compromising its confidentiality, integrity, or availability. Incident response (IR) policy sets the resources, processes, and guidelines for dealing with cybersecurity incidents. Management of each incident should follow a process lifecycle. CompTIA's incident response lifecycle is a seven-step process:
Preparation — makes the system resilient to attack in the first place. This includes hardening systems, writing policies and procedures, and setting up confidential lines of communication. It also implies creating incident response resources and procedures.
Detection — discovers indicators of threat actor activity. Indicators that an incident may have occurred might be generated from an automated intrusion system. Alternatively, incidents might be manually detected through threat hunting operations or be reported by employees, customers, or law enforcement.
Analysis — determines whether an incident has occurred and performs triage to assess how severe it might be from the data reported as indicators.
Containment — limits the scope and magnitude of the incident. Incident response aims to secure data while limiting the immediate impact on customers and business partners. It is also necessary to notify stakeholders and identify other reporting requirements.
Eradication — removes the cause and restores the affected system to a secure state by applying secure configuration settings and installing patches once the incident is contained.
Recovery — reintegrates the system into the business process it supports with the cause of the incident eradicated. This recovery phase may involve restoring data from backup and security testing. Systems must be monitored closely to detect and prevent any reoccurrence of the attack. The response process may have to iterate through multiple phases of identification, containment, eradication, and recovery to effect a complete resolution.
Lessons learned — analyzes the incident and responses to identify whether procedures or systems could be improved. It is imperative to document the incident. Outputs from this phase feed back into a new preparation phase in the cycle.
Incident response likely requires coordinated action and authorization from several departments or managers, which adds further complexity. The IR process is focused on cybersecurity incidents. There are also significant incidents that pose an existential threat to company-wide operations. These major incidents are handled by disaster recovery processes. However, a cybersecurity incident might lead to a major incident being declared.
9.1.3 Isolate and Contain
Click one of the buttons to take you to that part of the video.
Isolate, Containment and Segmentation 00:00-00:10
Today, we'll be discussing isolation, containment, and segmentation within network security. Let's get started.
Isolation 00:10-00:49
Isolation limits the ability of a compromised asset or application to do more harm to the network or its assets. This can be accomplished in a few different ways. One way is to practice process isolation. This ensures that if a process is compromised, only the resources used by that process are at risk. This practice applies to operating systems as well as RAM. In other words, it prevents any process that is limited by access bounds from accessing the resources of another process. This is a trait of a stable operating system. Isolation is considered a preventative security measure since it's implemented before an event is detected.
Containment 00:49-01:53
Containment is the first step after an event has been detected and identified. This action can take a few forms. An IT admin may disconnect a machine from the network by simply unplugging the Ethernet cable or disabling the NIC. If this network is connected to other networks, this connection may be terminated. The decision to disconnect must be weighed against the amount of data being compromised and the potential loss of forensic evidence. No matter what, the goal of containment is to limit the damage potential of malicious activity.
Containment requires action. Once an IT security analyst detects and identifies a malicious event, they must act. In this scenario, the analyst is monitoring a physical server that must be manually disconnected from the network. This means the on-site IT Admin must jump into action as quickly as possible. Time is of the essence since this event threatens the physical server and also the servers in the branch office. This is because the two networks are connected via a VPN. Containment requires that the damage be limited—even if it means taking a server down.
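On a Linux host, for example, disconnecting the machine can also be done from the command line. This is only a minimal sketch; the interface name eth0 and the presence of NetworkManager are assumptions that depend on the system:
# take the interface offline immediately (assumes the NIC is named eth0)
sudo ip link set eth0 down
# or, on systems managed by NetworkManager
sudo nmcli device disconnect eth0
Either way, record when and how the system was isolated so the forensic timeline stays intact.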
Segmentation 01:53-02:41
Segmentation is a strategic network design. The concept is simple: keep sections of a network separated so that malicious actors can't pivot within a network. Segmentation can be accomplished through VLANs, software-defined networks, switches, subnetting, or even physical segmentation.
But simply being on a different subnet is not enough. Rules must be implemented to control what kind of communications can occur between assets on the network. Many times, a network admin will
create a DMZ. This is a virtual area where assets are kept separate from internal network assets. A network with a DMZ may have a single firewall or two firewalls depending on how secure this segment needs to be. No matter the topology, access between the DMZ and the internal network is secure and controlled.
Summary 02:41-02:58
That's it for this lesson. We discussed isolation and how it's used to protect a network. Next, we talked about containment, which is the first action taken once an event has been detected. We ended by discussing network segmentation and how it can prevent unauthorized access.
9.1.4 Isolate and Contain Facts
This lesson covers the following topics:
Isolation, containment, and segmentation
Security Orchestration, Automation, and Response (SOAR)
Incident plans
Isolation, Containment, and Segmentation
Data, whether good or malicious, must be handled correctly. You can use isolation and containment for malicious or suspect data. You can use segmentation as a strategic network architecture tool to prevent outside data from accessing internal network appliances.
Strategy
Description
Isolation
Isolation limits the ability of a compromised process or application to do more harm to the network or its assets. One way to protect the network is process isolation. This ensures that if a process is compromised, only the resources used by that process are at risk.
Containment
Containment is the first step after an event has been detected and identified. This action can take a few forms. You can disconnect a machine from the network by unplugging the Ethernet cable or disabling the NIC. If a network is connected to other networks, you can terminate those connections.
Segmentation
Segmentation is a strategic network design. The concept is simple: separate the network sections so malicious actors cannot pivot within a network. You can segment using VLANs, software-defined networks, switches, subnetting, or physical segmentation. Being on a different subnet is not enough. You must implement rules to control the kind of communications that occur between assets on the network.
You can also create a demilitarized zone (DMZ). It is a virtual area where you separate assets from internal network assets. Depending on how secure the segment needs to be, a network with a DMZ may have a single firewall or two firewalls. No matter the topology, access between the DMZ and the internal network is access-controlled.
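As a small illustration of controlling communication between segments, a Linux host running firewalld could place its DMZ-facing interface in a restricted zone. This is a sketch under assumptions: the interface name eth1 is hypothetical, and the allowed service depends on what the host actually serves.
# assign the DMZ-facing NIC (assumed name eth1) to firewalld's built-in dmz zone,
# which rejects inbound traffic to this host except explicitly allowed services
sudo firewall-cmd --permanent --zone=dmz --change-interface=eth1
# explicitly allow HTTP from that segment if this host serves web traffic
sudo firewall-cmd --permanent --zone=dmz --add-service=http
sudo firewall-cmd --reload
Dedicated firewalls or router ACLs between segments follow the same idea: deny by default and allow only the traffic each segment needs.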
Security Orchestration, Automation, and Response (SOAR)
SOAR is a platform to compile security data generated by different security endpoints. This collected information is then sent to a security analyst for further action. SOAR frees an analyst from constantly receiving security alerts as they are generated. Analysts can use parameters to automate solutions for security incidents that meet specific criteria. SOAR:
Gathers alert data and places it in a specified location.
Facilitates application data integration.
Facilitates focused analysis.
Creates a single security case.
Allows for multiple playbooks and playbook step automation.
Incident Plans
As part of the incident response process, you can use playbooks and runbooks together to achieve a more effective response. Parts of the response can be automated, and tasks can be automatically assigned to analysts to complete. These two plans can also help you meet and comply with regulatory frameworks, such as GDPR or NIST, if necessary.
Plan Type
Description
Runbooks
Runbooks are a condition-based series of protocols you can use to establish automated processes for security incident response. Assessment, investigation, and mitigation are accelerated using a runbook. Even though processes are automated, human analysis is still used in some cases.
Playbooks
A playbook is a checklist-style document specifying how to respond to a threat or incident. The steps are listed in the order to be performed. A playbook ensures a consistent approach to security issues.
9.2.1 Security Information and Event Management
Click one of the buttons to take you to that part of the video.
SIEM 00:00-00:17
In this video, we'll discuss Security Information and Event Management, or SIEM, which is a tool used to compile and examine multiple data points gathered from across a network. We'll also explore log management.
Vulnerability Scan Output 00:17-00:43
Monitoring a network requires experience and solid tools. One common network security tool is a scanner that can identify vulnerabilities and recommend remediation steps. The scan delivers output
information to the IT administrators via the SIEM dashboard. The interval between scans is set by the IT department. This tool also scans servers, firewalls, switches, software programs, and even security cameras and wireless access points.
SIEM Dashboards 00:43-01:04
There are many versions of SIEMs. Each one has different features and benefits, but all SIEMs have
several features in common. One of these is the dashboard. These are generally customizable information screens that show real-time security and network information. This allows the IT security team to monitor events as they occur on the network.
Sensors play a vital role in monitoring and securing a network.
Sensors 01:04-01:19
These sensors are set up at critical endpoints, services, and other vulnerable locations. They're programmed to send customized alerts to the SIEM if certain parameters are reached or exceeded.
Before sensors are deployed, the IT security team sets their sensitivity.
Sensitivity 01:19-01:37
The benefit to variable sensitivity settings is the ability to customize the data that's sent to the SIEM. Not every company has the same needs, and that's what makes sensor-sensitivity customization so beneficial.
Trends are patterns of activity discovered and reported to the SIEM.
Trends 01:37-02:03
This is how baselines are established. These trends help security analysts decide if reported activity is normal or outside of the baseline. Trends that don't fit previously recorded ones can be
investigated. As the IT security team investigates and documents these trends, it becomes easier for
them to quickly spot an anomaly that may signal a security concern.
Alerts are the SIEM's way of letting the IT team know that a pre-established parameter has been contravened.
Alerts 02:03-02:28
The alert is intended to get the attention of the IT person who's monitoring the network. A best practice in this area is 24-hour monitoring. This means that weekends, holidays, and early hours are all filled. Hackers don't keep normal hours, and network equipment can break at the most inconvenient times!
Event correlation is a critical part of using a SIEM solution.
Correlation 02:28-02:54
The software gathers data from log files, system applications, network appliances, and other endpoints in order to analyze it. This work is tedious, and people are inefficient at it. That's why the event-correlation feature is valuable. Not only does it gather data, but it analyzes and compares known malicious behavior against the aggregate data so that events aren't missed.
That's it for this lesson.
Summary 02:54-03:31
In this lesson, we learned about SIEMs and the features that make them a critical network component. These include customizable sensors and their placement. Trends help IT teams to establish baselines or norms for their network. Alerts provide critical information to the network monitor so that appropriate action can be taken. Event correlation automates a very laborious process and analyzes and compares aggregate data to known security behaviors and events, ensuring that nothing gets by undetected. All this is delivered to the IT team via a customizable dashboard for easy examination.
9.2.2 Log Management
Click one of the buttons to take you to that part of the video.
Log Management 00:00-00:12
In this video, we'll explore log categories, specific logs you should know, and open-source tools that are used to aid IT security teams.
Network Logs 00:12-00:29
Every network generates dozens and dozens of logs. Network logs tell us what's coming into and leaving our network. Every network appliance and almost every application produces logs which can
be used for a variety of reasons. But more often than not, we use logs for network security.
System Logs 00:29-00:51
System logs are produced by an operating system. These logs contain all the information that pertains to that OS like updates, errors, failures, and other system occurrences. This includes information for client computers and servers, which gives IT admins a way to investigate events on individual machines that may interconnect with other machines on the network.
Application Logs 00:51-01:12
Most applications produce some type of event logging. These logs show application access, crashes, updates, and any other relevant information, which can be valuable for root-cause analysis. The application may be crashing or not performing correctly, and this could be tied to suspicious activity that may indicate malicious intent.
Security Logs 01:12-01:34
There are several logs that would fall under the security category. There are application-security logs, event-security logs, and security logs for specialty applications like IDS/IPS, endpoints, firewalls, routers, and switches. Also, logs for security cameras and wireless or physical access points are included under the security log umbrella.
Web Server Logs 01:34-02:03
There's no doubt that web server logs are one of the most tedious of all logs to parse. But web servers can be prime targets for hackers, so it's important to know who's interacting with your server and what they're attempting to do. Most web engines like IIS, Tomcat, WebSphere, and NGINX have some level of server logging. These logs can tell you exactly when users log onto your site and
what their location is. They also give you some information on attempted attacks.
DNS 02:03-02:43
Domain Name System, or DNS, has been around for a long time. When DNS was designed, network
security wasn't a priority. Over time, malicious actors started using DNS-targeted attacks. These attacks have the potential to be disruptive and quite expensive. With DNS logging, you can track updates and choose to approve or deny them. DNS also produces query logs that detail which requests are being handled by which instance. Rate limiting is another valuable tool that limits response rate. Analyze these logs to see when rate limiting was used and for what purpose. Client IP, record requests, flags, and other metadata can be included in these logs.
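As a simple illustration, DNS query logs can be summarized from the command line. The log path and the field that holds the queried name are assumptions here; they vary by DNS server and logging format, so adjust them to match your own logs.
# count the most frequently queried names in a BIND-style query log (path and field number are assumptions)
grep ' query: ' /var/log/named/query.log | awk '{print $6}' | sort | uniq -c | sort -rn | head
A sudden spike in lookups for a single unfamiliar domain is the kind of pattern this quick check can surface.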
Dump Files 02:43-03:19
Dump files are created when an application, OS, or other computer function stops abruptly. The information that's stored in memory at the time is dropped into a file for later analysis. These files help IT admins perform root-cause analysis because they provide the state that the application was in when it crashed, error codes, and other clues as to what happened prior to the application failure. They can also give clues as to the crash's origin. This could be something as commonplace as a bad driver or hardware component, or it may prove, unfortunately, to be the result of a malicious act.
Authentication Logs 03:19-03:41
Authentication logs are vital to a network's security. Authentication servers may be Active Directory-
based or OpenLDAP depending on your network structure. It's critical to know who may be poking around your network, so token requests, authentication failures, or failed logins on expired accounts are all stored on these authentication logs for you to view.
VoIP 03:41-04:01
Voice over Internet Protocol, or VoIP, has become a common network application. With a high implementation rate comes attention from malicious actors. As with any network application, there are vulnerabilities that can be leveraged, so in order to defend it there needs to be a way to access information about what's happening at any given time.
SIP Traffic 04:01-04:22
Session Initiation Protocol, or SIP, is the standard in VoIP calls. The key to tracking attacks against a VoIP system is understanding SIP and being able to parse its logs. These logs contain key information about where a call was initiated and what the communication's intent was. These facts help the IT security team create a stronger SIP security posture.
Syslog/Rsyslog/Syslog-ng 04:22-05:25
Syslog is short for System Logging Protocol. This protocol sends system logs and event messages to a server designated by the system administrator. It collects logs from various appliances and sends them to the syslog server where they can be reviewed and analyzed.
Rsyslog is an open-source tool created for use in Linux networks. It stands for rocket-fast system for log processing. It gets its name because of its ability to send a million messages per second to a local server. This tool's benefit is its diversity. It's capable of multi-threading and supports multiple transport and security protocols, like TCP and TLS. It also allows for output-format customization.
Syslog-ng is a robust log-aggregating software for multiple platforms, including Windows. This tool increases the quality of the log data that's sent to your SIEM. It also facilitates lightning-fast log searches by using full-text queries and collects logs without the installation of server agents.
Journalctl 05:25-06:03
Journalctl is a Linux tool that gathers the logs produced by systemd, a system that's the basis for many Linux components. This command is used in Bash to parse logs that've been collected by systemd. The results are presented in the syslog format and are ordered oldest to newest by default. This can be changed by using the -r flag. Each line shows the date, server hostname, process name, and any messages. If you're more comfortable using a command line tool, this is for you. Because it's a CLI tool, there are many key commands at your disposal to get you your logs quickly.
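A few common invocations illustrate the idea; the unit name sshd is just an example and may differ on your system.
# newest entries first, as mentioned above
journalctl -r
# only messages from one service
journalctl -u sshd
# only errors and worse from the last hour, following new entries as they arrive
journalctl -p err --since "1 hour ago" -f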
Nxlog 06:03-06:23
Nxlog is an open-source log-collector application. It uses log-collector agents to gather and send log data to a log server, which can itself be set up using nxlog. This application is available for both Windows and Linux. The Community edition supports multiple SIEM applications and works with Windows Event Viewer and syslog.
Summary 06:23-06:46
That's it for this lesson. In this lesson, we learned about different log categories like network logs, security logs, and application logs. We also learned about specific logs and how you can use them for better security. Finally, we learned about open-source tools that assist you in collecting and organizing log data and metadata. Understanding logs is truly the key to better security.
9.2.3 SIEM and Log Management Facts
This lesson covers the following topics:
Security information and event management
Security Information and Event Management
A security information and event management (SIEM) system combines security information management (SIM) and security event management (SEM) functions into one security management system.
Security information and event management tools compile and examine multiple data points gathered across a network. The following table describes SIEM components.
Component
Description
Vulnerability scan output
Monitoring a network requires experience and solid tools. One standard network security tool is a scanner that can identify vulnerabilities and recommend remediation steps. This tool scans servers, firewalls, switches, software programs, security cameras, and wireless access points. The scan delivers the output to IT admins via the SIEM dashboard. The interval between scans is set by the IT department.
SIEM dashboards
The dashboard is a common component of all SIEM systems. The dashboard consists of customizable screens showing real-time security and network information. This real-time information allows the IT security team to monitor and respond to events on the network effectively.
Sensors
Sensors are a vital part of monitoring and securing a network. Sensors are set up at critical endpoints, services, and other vulnerable locations. These sensors are programmed to send customized alerts to the
SIEM if specific parameters are not within the acceptable range.
Sensitivity
The IT security team sets the sensitivity level when the sensors are deployed. The benefit of variable sensitivity settings is the ability to customize the data sent to the SIEM. Not every organization will have the same needs in network monitoring.
Trends
Trends are patterns of activity discovered and reported to the SIEM. This is how baselines are
established. Trends help security analysts decide whether reported activity is normal or outside the baseline. The security group can investigate trends that do not fit previously recorded information. As the IT security team investigates and documents these trends, it becomes easier for the team to quickly spot a trend that may signal a security event.
Alerts
Alerts are the SIEM’s way of informing the IT team that a pre-established parameter is not within the acceptable range. The alert is intended to get the attention of the IT person or persons monitoring the network. A best practice in this area is 24-hour monitoring.
Correlation
Event correlation is a critical SIEM component. The software gathers data from log files, system applications, network appliances, etc., and analyzes it. This work is tedious; people are inefficient at it. That’s why the event correlation feature is valuable. Not only does it gather the data, but it analyzes and compares known malicious behavior against the aggregated data, increasing the chances of the discovery
of security events.
9.2.4 Monitoring Data and Metadata
Click one of the buttons to take you to that part of the video.
Monitoring Data and Metadata 00:00-00:20
In this lesson, we'll cover the use of bandwidth monitors and metadata from emails, mobile devices, web traffic, and files. We'll also talk about using NetFlow, sFlow, and IPFIX. Let's get started.
Bandwidth Monitors 00:20-01:44
Bandwidth monitors are a type of application that help network admins understand bandwidth usage. The first order of business with these applications is to establish a baseline. The longer a monitor runs, the more data points are created. When a substantial number of data points are created, normal usage becomes apparent. Normal bandwidth usage is relative and varies by hours in the day, days of the week, and even weeks within the year. A baseline provides something concrete to compare against current usage or even suspect bandwidth usage.
There are many bandwidth monitoring applications available. There are cloud-based apps, on-
premise apps, open source, and paid. Each one looks and functions differently, but each has the same goal, which is to allow easy access to bandwidth monitoring. The key is to learn how to establish a monitoring schedule and how to use the dashboard.
Here, we have some screenshots of different pages within a dashboard. As you can see, this graphic shows bandwidth usage by the hour and, below it, usage per day aggregated over time. This is the baseline that suspect usage is compared against. The second graphic shows the last hour's usage and the usage for a user-specified time frame. This allows for deeper examination of suspect bandwidth usage.
Email Metadata 01:44-02:18
Email is a great tool that almost everyone uses to communicate. It's also the avenue for the majority of malicious network breaches. Fortunately, email provides metadata so it can be traced. All
emails come with a header that contains information about both the sender and recipient. Parts of the headers can be spoofed to give investigators false information. The good news is that there are security devices that put X-headers throughout an email's headers. These provide the originating email account and IP address, not the spoofed one.
Mobile Metadata 02:18-02:52
Communications sent via mobile devices are common today. They come from tablets, laptops, smart phones, smart watches, and any other portable device that connects to the internet. These devices send emails and text messages and use apps to allow data and photo sharing. All this data produces metadata that can be used to identify people, places, times, and even deleted data. Pictures can be time stamped and geolocation stamped. Much of this metadata also reveals the origination of the data and the sender.
Web Metadata 02:52-03:35
Websites produce many types of metadata. In fact, the metadata on a user's machine versus the server can be very different. The data on the opposite side of the transmission can help fill in gaps and corroborate findings on the opposing machine. Metadata includes IP addresses, user requests, user downloads, time spent on the site, and even attempts to gain unauthorized access. Web metadata also includes cookies, browser history, and even cached pages. Many times,
malicious actors will attempt to obfuscate their real metadata. But the good news is that there are ways of finding the real metadata, especially for trained forensic investigators.
File Metadata 03:35-04:25
Files produce many types of metadata. The first kind is File Creation (date/time). The File Creation data is the first time the file was written to the storage media it's currently on. This means that the file
can be created in a different place and then moved or copied to a new location.
Last written refers to the last time a file was saved for any reason. It could prove that the file was copied from a different location or that the file was altered. Last access is any time a file is touched. When combined with file creation metadata on a user's machine, it can establish the probability that the file was copied from one machine to another. It also introduces the idea that a third device is involved in the data copy process. Once the third device is found, it makes obtaining answers much easier.
Netflow/sFlow 04:25-05:01
Network admins are always looking for a way to look at what's happening inside their network. Netflow is a session sampling protocol. This protocol works at Layers 2 through 4. It can examine each data flow that comes through or be set to sample sessions at certain intervals. If you want to sample packets in a broader layer range, then sFlow is your answer. This protocol works on Layers 2 through 7. Unlike Netflow, sFlow examines packets and can only be used in sampling mode. This is stateless packet sampling that provides information efficiently.
Ipfix 05:01-05:44
IPFIX integrates data that normally goes to syslog or SNMP directly into the IPFIX packet, eliminating the additional services collecting data from each network device. IPFIX has provisions for variable-length fields, meaning no ID number restrictions. It came about because of the need for a standardized protocol for Internet Protocol flows. This data comes from routers, servers, and other network appliances that are mediation systems. The data is formatted and sent to an exporter and then on to a collector.
IPFIX, like NetFlow, looks at flow and the number of packets being sent and received during a given session.
Summary 05:44-06:05
That's it for this lesson. In this lesson, we talked about the different ways metadata is produced. We covered some of the pitfalls of metadata as well as some of the ways to overcome them. And we discussed creating a baseline using network bandwidth monitors. These monitors can alert us to abnormal activity on our network.
9.2.5 Saving Captured Files with Wireshark
Click one of the buttons to take you to that part of the video.
Save Captured Files with Wireshark 00:00-00:46
In this demo, we'll show you how to capture network packets. There are times when you might want to capture packets so you can analyze them later. Captured packets can be used by an analyst to profile an application's network traffic or to examine a protocol in more detail.
The two most popular tools to capture packets are Wireshark and TCPDump. Both Wireshark and TCPDump can be used with a variety of operating systems but for this demo we will use Security Onion.
Security Onion is usually set up with a monitor port that captures all packets that it sees. The packets are typically used with tools like Zeek and Snort. This interface can also be used by an analyst for ad-hoc captures.
Capture Packets with Wireshark 00:46-02:37
There are two capture tools in Security Onion for ad-hoc captures. First, we will look at Wireshark. Wireshark is a graphical tool that allows packet capture, but is also an analysis tool. Because Wireshark requires root permissions to capture the packets, we must run Wireshark with elevated permissions.
To do this, go to Applications > Utilities > Terminal. At the command prompt, type 'sudo dpkg-reconfigure wireshark-common'. You are asked if you want to allow non-superusers to be able to capture packets. Choose Yes. Next, run 'sudo usermod -a -G wireshark administrator'. This gives the administrative user rights to run Wireshark by adding it to the wireshark group.
If your account on Security Onion has a different name, use that instead of administrator in the command. Once done, go to the power icon. Then, go down to the other power icon. Choose to restart.
Your preview ends here
Eager to read complete document? Join bartleby learn and gain access to the full version
- Access to all documents
- Unlimited textbook solutions
- 24/7 expert homework help
Now the system has rebooted. After logging back into Security Onion, we can start to use Wireshark. We open it by going to Applications > Internet > Wireshark.
To set up a capture, select the interface on Security Onion that is set as the monitor port. In this instance, it is interface enp0s8. After selecting the interface, you can restart the capture by clicking the Shark Fin in the top left of the menu bar. By default, the captured packets will scroll by on the screen.
When we are done capturing, we can press the red stop button. At this point, we can analyze the captured packets as we would any other capture that has been given to us. We can then save that file as a PCAP file for later analysis. We're not going to save it here, but you can see we have the option.
Capture Packets with tcpdump 02:37-03:41
Another way that we can capture packets with Security Onion is to use the command-line tool tcpdump. Let's open a new Terminal session and enter the command 'sudo tcpdump -D' to list the possible interfaces that we can capture on. Note that enp0s8 is number 10 on the list.
To capture, run the command 'sudo tcpdump -i 10 -w testout.pcap'. This will capture the packets on enp0s8 and write them to a file called testout.pcap. Press CTRL-C to stop the capture.
Now let's print the file to the screen using 'tcpdump -r testout.pcap'. Here you can see the output of the file.
We can open this PCAP file in a tool like Wireshark or NetworkMiner. Let's open it in Wireshark. We type 'wireshark testout.pcap' and press Enter. After a second or two the PCAP file is loaded into Wireshark and we can use it to further analyze the packet capture.
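Putting the tcpdump portion together, the workflow from this demo looks roughly like this; the interface name enp0s8 and the output file name come from the demo and will differ on other systems.
sudo tcpdump -D                          # list capture interfaces; enp0s8 was number 10 here
sudo tcpdump -i enp0s8 -w testout.pcap   # capture to a file (the demo used -i 10, the list number); press Ctrl+C to stop
tcpdump -r testout.pcap                  # print the saved capture to the screen
wireshark testout.pcap                   # or load it into Wireshark for deeper analysis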
Summary 03:41-03:49
That's it for this demo. In this demo we used Wireshark and TCPdump to capture packets and saved
them to a PCAP file.
9.2.6 Use Elasticsearch Logstash Kibana
Click one of the buttons to take you to that part of the video.
Elasticsearch Logstash Kibana (ELK) 00:00-00:44
In this demo, we use the Elasticsearch Logstash Kibana (ELK) stack to store and search security logs created by other tools in Security Onion such as Zeek. We'll use Kibana to review sample logs.
Security Onion is a free and open source Linux distribution for threat hunting, enterprise security monitoring, and log management. It includes Elasticsearch, Logstash, Kibana, Snort, Suricata, Zeek (formerly known as Bro), Wazuh, Sguil, Squert, CyberChef, NetworkMiner, and many other security tools.
ELK Startup 00:44-01:21
First, we click the Kibana link on the Security Onion desktop. By default, Security Onion uses a self-signed TLS certificate. We tell Chromium to allow us to proceed by clicking Advanced and then choosing to proceed. On a production system, it would be advisable to install a valid TLS certificate and use the fully qualified name instead of localhost.
The username and password for Security Onion are set during the initial setup. In this case, we set up a user in the Security Operations Center (SOC). We'll login as that user.
Network Intrusion Detection System (NIDS) logs 01:21-03:15
After logging in, Kibana defaults to the dashboard page. This could be used as a dashboard for a SOC or as the starting point for a threat hunter. In this case, we want to look at the Network Intrusion Detection System (NIDS) logs, so we click NIDS to drill down.
Here we notice a classification of Attempted Administrator Privilege Gain that we want to look at further. To do that, we hover on that line and click the magnifying glass with the plus sign. This adds it as a filter. To keep the filter as we pivot to other parts of Kibana, we click Actions and pin the filter.
Next, we click Discover to see the logs that match the filter. Notice how the filter has stayed with us because we pinned it. Now we can see each log entry that matched our filter. Notice that the timestamp and other information about the alert is listed.
You can view more information for an event by clicking the arrow to the left of the time stamp. From the description, we can also search for more information about the signature listed. We do this by highlighting it, right-clicking, and choosing to search for it using Google.
Some signatures are based on Common Vulnerabilities and Exposures (CVEs). CVEs describe potential security issues with certain software or hardware. They provide a common language to evaluate the risk posed. By typing CVE into the search field, you can focus the results on signatures that are tied to a CVE. Researching the CVE can help you determine whether the attempt against the asset could be successful.
Zeek 03:15-04:30
Let's look at one more example. But first, let's clear our pinned criteria by going to Actions and clicking Remove. We also need to remove the search term CVE. Once this is done, we'll go back to the dashboard.
Under Zeek Hunting, click HTTP to list alerts relating only to the HTTP protocol. Now we can look at the events based on the HTTP status messages and the methods used. Using the same technique with the magnifying glass as before, let's drill into the Forbidden messages.
Again, we want to pin the filter so that it stays with us as we move through Kibana. Notice that this time we are using a different way to pin the filter. This method allows you to pin or unpin individual filters instead of all filters.
In the details of one of the events, we can see the Uniform Resource Identifier (URI) that the attacker tried to access. We can also see the user agent of the attacker. In this case, our attacker didn't mask the fact that a common tool for finding website vulnerabilities called Nikto was being used.
Summary 04:30-04:40
That's it for this demo. In this demo we went over a few of the ways the power of Elasticsearch can be harnessed to help the security professional.
9.2.7 Use NetworkMiner
Click one of the buttons to take you to that part of the video.
Use NetworkMiner 00:00-00:16
NetworkMiner is a Network Forensic Analysis Tool. It is able to take a PCAP file and analyze it for clues about the hosts and protocols on the network at the time of the capture.
Load PCAP File 00:16-00:39
To open NetworkMiner in Security Onion, go to Applications > Other > NetworkMiner. Once it opens, go to File > Open and then choose the PCAP file that you want to analyze. NetworkMiner will then automatically analyze the file and present the information in a series of tabs.
Viewing Hosts 00:39-01:48
The first tab is the hosts tab. This tab will list all of the unique hosts by IP addresses that were found in the PCAP file. Each host can be opened by clicking on the plus sign next to it, revealing the information about the host that NetworkMiner knows. For example, if I open 172.28.24.3, I will get its IP, MAC address, information about the number of packets it sent and received, and any details about the host. In this case, if I click on the Host Details, I will see the User-Agent string of the browser the host used.
In the next tab, you will find the files that were transferred between hosts during the packet capture. Right-clicking on a file will allow you to open the file or calculate the file's hash. For this capture, I created a fake virus using the EICAR test string. This string, when placed in a file, will be detected as a virus by most vendors. I can choose to open the folder where the virus.exe is stored and then open it using gedit to show the EICAR string.
Compare Hashes 01:48-02:37
I can also use the file hash feature to compare the file to known bad file hashes using tools like VirusTotal. First, right-click on the file and choose Calculate MD5/SHA1/SHA256 hash. Next, highlight the hash you want to compare and press CTRL+C to copy it to the clipboard. In a web browser, open VirusTotal.com and click on the Search link. Then paste the hash into the search box and press Enter. This will then compare the hash against known malicious files. In this case, it will be an EICAR test file that most antivirus manufacturers will mark as a virus.
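If you prefer the command line, the same hashes can be computed outside NetworkMiner; the file path below is an assumption based on wherever NetworkMiner extracted the file.
# compute hashes of the extracted file so they can be searched on VirusTotal (path is assumed)
sha256sum virus.exe
md5sum virus.exe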
Additional Features 02:37-03:26
Another tab of interest is the credentials tab. Here any clear text credentials like telnet, FTP, or HTTP will be shown. In this capture, there was an FTP session that we have the credentials for.
The sessions tab shows the unique conversations between hosts.
Any DNS queries and responses that are captured will be decoded in the DNS tab.
The parameters tab shows many potential key-value pairs that may have important information.
Finally, NetworkMiner provides the ability to search a PCAP file for certain keywords. You can do this by clicking on Keywords. Next add a keyword and follow the instruction to reload the case file. The display will now show all of the packets that include the keyword.
Summary 03:26-03:35
That's it for this demo. In this demo we used NetworkMiner to view the contents of a captured PCAP file.
9.2.8 Configuring Remote Logging on Linux
Click one of the buttons to take you to that part of the video.
Configuring Remote Logging 00:00-01:30
There are many aspects of the syslog daemon that you can modify to customize how your log files are configured. You could have warning messages going into one file, error messages going into another, or separate them based on program. But one of the cool things I think the syslog daemon can do is log to a remote host. This basically allows us to write our logs not only to our local system but also to a log host somewhere else. This is a great benefit for the administrator. You can set up a log host in your network somewhere and all the logs from all the systems you support go into that log host. If somebody's having a problem, instead of having to SSH into their system to look at their log files, you have one central location to view them. This can be very helpful in keeping history and tracking your log files. Syslog servers are also very helpful when there has been
an attack on your network.
That's what we're going to do here. We're going to configure the syslog daemon to log to a remote host. I have two different Linux systems running. I have a RedHat system running here that will serve as my log host and I have a CentOS system running here that will serve as my log client. The log messages from the CentOS system are going to be saved on our syslog server which
happens to be our Redhat system. Keep in mind that different Linux distributions may be slightly different. But the same concepts apply.
Configure Log Host 01:30-03:52
Let's configure the log host first. The first thing we need to do is check to see if the rsyslog daemon is running. Type 'systemctl status rsyslog'. As you can see, everything appears to be in order. We need to change to our root user account, so I'll do a 'su - root' command and enter the password. We're going to edit the configuration file with the 'vi /etc/rsyslog.conf' command. As we scroll down, we're going to uncomment some modules. These are the imudp module and the imtcp module. These modules enable the syslog daemon to listen for incoming syslog messages.
Down at the bottom in the rules area we have a template we're going to use for our remote logging. This template isn't here by default so it has to be added. This allows the remote logs to be
placed in their own folder by host name and program name. If you don't set up a template, all syslog messages from remote hosts will go into /var/log/messages on the syslog server. We're going to uncomment this by removing the pound symbols. Let's save it by typing ':wq!'.
For these changes to take effect, we must restart the rsyslog daemon by typing ‘systemctl restart rsyslog'. Since there will be incoming traffic, we need to modify the firewall to accept incoming messages on port 514.
To do so, type 'firewall-cmd --permanent --add-port=514/udp'. When you press Enter, you see that it was a success. Now we do the same thing for the TCP protocol: press the up arrow and replace udp with tcp. The changes won't be active until we reload the firewall with the 'firewall-cmd --reload' command.
I'm going to do a netstat to make sure the syslog daemon is listening on port 514.
Type 'netstat -tulnp | grep "rsyslo"' and press Enter. This shows us that rsyslogd has port 514 open for UDP and TCP.
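Pulled together, the server-side pieces from this section look roughly like the sketch below. The legacy rsyslog directive syntax is shown; newer rsyslog releases use the module(load=...) style instead, and the template line is a commonly used pattern rather than something guaranteed to be in your file, so match whatever your /etc/rsyslog.conf already contains.
# /etc/rsyslog.conf on the log host -- listeners to uncomment
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514
# template that files remote messages by host name and program name
$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs
# then apply the changes and open the firewall
# systemctl restart rsyslog
# firewall-cmd --permanent --add-port=514/udp
# firewall-cmd --permanent --add-port=514/tcp
# firewall-cmd --reload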
Configure Log Client 03:52-05:06
We're going to venture over to our CentOS client server that's going to be sending syslog messages to our RedHat server. First, we check to make sure the rsyslog daemon is running by typing "systemctl status rsyslog". We're good to go.
Just like our syslog server, we're going to modify the configuration file for rsyslog. Let's type 'sudo vi /etc/rsyslog.conf'. Enter the password and press Enter. We're going to scroll all the way to the bottom of this configuration file.
Just so that I know what this is, I'm going to put a comment in there which is "#syslog server" and type ‘*.* @@192.168.0.55:514'. The *.* is a wildcard that sends all syslog messages to the syslog server. The IP address listed is the IP of the syslog server and 514 is the port specified to use. We save this with ‘:wq!' and Enter.
Typing 'sudo systemctl restart rsyslog' allows the rsyslog daemon to grab the new settings we just edited.
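The client-side addition amounts to one line at the bottom of /etc/rsyslog.conf; the IP address and port are the ones used in this demo.
#syslog server
*.* @@192.168.0.55:514   # @@ forwards over TCP; a single @ would send over UDP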
Test Syslog Messages 05:06-06:01
Now this is the fun part. We will actually get to see a message go over to our syslog server from our CentOS client server. To write a test message, we type ‘logger this is a test message' and press Enter. Let's use ‘sudo tail -f /var/log/messages' to see the message locally. Now that we see the local message, let's go over to our syslog server with ‘cd /var/log/centos-server1/'. I'm going to do a quick ‘ll' to list what log files have already been written. Since we did a logger command under the TestOut user, we're going to tail that log file to look for our newly created test message with ‘tail -f testout.log'. As you can see, it's the exact same message that was on our CentOS client server.
Summary 06:01-06:27
That's it for this demo. In this demonstration, we talked about how to configure a log host with Linux on a network. We first looked at the steps you need to complete in order to configure the system that's going to function as the log host to receive logging messages from another system. Then we
looked at the log client and configured it to send its log messages not only to its own log files, but also to send a copy to our log host.
9.2.9 Logging Events on pfSense
Click one of the buttons to take you to that part of the video.
Logging Events on pfSense 00:00-00:31
In this demo, we're going to spend a few minutes viewing log files on our pfSense security appliance. A log should act as a red flag that something is happening—potentially something bad. By reviewing your logs on a regular basis, you'll get an idea of the normal traffic on your device. In reality, no one likes to spend hours a day viewing log files. Typically, you would want to configure a syslog server so that all your logs from all devices go to one place to be consolidated for easier analysis.
System Logs 00:31-01:24
To view logs on pfSense, we first need to go to Status and then down to System logs. Once we're in System logs, we see the General tab. Like I said, viewing logs isn't the most exciting thing to do, but it's necessary. Under the General tab we can see that we have a time stamp, a process, a PID or process ID, and a message about the log. Let's move on to Gateways.
Under Gateways, you can see that I only have one Gateway on this test system. Looks like my system is grabbing an IP for the WAN network interface. You might be thinking that if it's grabbing an IP from a WAN, shouldn't this be a public IP? That answer is yes, but I do have a test network set up and that network is connected to my regular LAN.
We also have our Routing logs here. Next to that I have my DNS Resolver logs. Finally, we have our
Wireless logs. I don't have any Wi-Fi currently configured with this device, so there are no logs for our Wi-Fi.
Firewall Logs 01:24-03:06
I'll move over to my Firewall logs. This is generally where you might look for malicious attacks directed toward your network. You'll see information down here with more details. As I scroll down, you can see the different source IPs that triggered the log. This one here, 172.16.1.100, is from my DMZ trying to get out to the WAN.
Dynamic View shows us a bit less detail. Down here I have some WAN traffic on port 5353. As you first get a system set up, you might want to familiarize yourself with the different ports that your firewall is logging. Some ports might be perfectly normal while others might not be. I wasn't familiar with port 5353, but a quick web search told me it's for multicast DNS and is safe. So now I know what it is.
Summary View gives us a bunch of graphs that can be helpful to get a quick visual of things. Here I have my different interfaces. I have three in this device—one for my LAN, one for the DMZ, and one connected for the WAN.
I have my protocols and it looks like most of my traffic is UDP.
Down here a little further, I can see the source IPs of what's been triggering the logs. All the 192.168
addresses are from the WAN interface and the 172.16 address is my DMZ.
I have the destination IPs in this next graph. I have my source ports next. Finally, I have destination ports. Here you can see UDP/53 listed. That is my DNS traffic.
So, it's good to get familiar with what your typical traffic looks like. This is called creating a baseline. This isn't covered in this demo, but it should be one of the first things you do when setting up new devices.
DHCP Logs 03:06-03:31
My next set of logs are DHCP logs. Here you can see who's getting IPs from the DHCP server. Not only can you see who's getting IPs, but you can see the DHCP acknowledgement, the DHCP requests, and down here you can see that DHCP renewed an IP for one of the clients. All of this is useful information if you're troubleshooting DHCP or need to see what devices are connecting to your network.
Captive Portal Authorization 03:31-03:52
I had this device set up as a captive portal at one time so that we could see all the events that are related to it. As a quick review, a captive portal is a web page you're taken to, such as in a hotel or other public place, before you're given access to the internet. You typically must agree to the terms and conditions before being allowed to proceed. You might have to enter a password as well.
Other Logs 03:52-04:12
The next four tabs—IPSec, PPP, VPN, and Load Balancer—have no log files because I don't have any of those services running on this device. But I do have OpenVPN configured. Down here you can see all of those log files. I do have some NTP logs, or Network Time Protocol logs, here.
Log Settings 04:12-05:40
The last thing I want to look at is log settings. If you noticed as we were viewing logs, there were only fifty shown. Here is where we can change that if need be. As I scroll down, you can see other settings that you can configure. You can even reset your logs.
Here at the bottom is where you can configure pfSense to send these logs to a syslog server. I'll check this box to enable it. When I do, I'm presented with some more settings specific to remote logging.
I can be specific about which interface I want to log. I can log all of them, or just one. I could just log my WAN events if that's all I'm concerned about.
I can choose to only have IPv4 logs or only have IPv6 logs sent. Here I would put the IP and port of the remote syslog server that's configured to receive those logs. I don't have a server set up for this demo, but if I did, this is where I would tell pfSense to send the logs to. The format would be something like 10.10.10.100, and it would be port 514. Port 514 is the port that pfSense uses by default to send logs to the syslog server.
Now I would need to tell pfSense what I want to send. I don't like information overload. I only want to send the logs that I actually need. So I might only want firewall events, DNS events, DHCP events, and VPN. In my case, that's OpenVPN. I would then click on Save and, if my syslog server is configured, it'll start to receive log files from my pfSense security appliance.
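To picture the receiving end, here's a minimal sketch of how a Linux syslog server could be set up to accept these messages with rsyslog. The drop-in file name, the listening port, and the firewall's source address (192.168.1.1) are assumptions for illustration, not values taken from this demo.

# /etc/rsyslog.d/10-pfsense.conf (hypothetical drop-in file on the syslog server)
module(load="imudp")                  # load the UDP listener
input(type="imudp" port="514")        # listen on the default syslog port
# write messages arriving from the firewall to their own file, keyed on its source IP
if $fromhost-ip == '192.168.1.1' then /var/log/pfsense.log
& stop

After restarting rsyslog (sudo systemctl restart rsyslog), the pfSense entries should start appearing in /var/log/pfsense.log.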
Summary 05:40-05:57
That's it for this demo. In this demo, we examined the logs on our pfSense security appliance. We viewed many of the different logs that can be collected. We then looked at settings and some things that we can configure. We ended by explaining how to send our logs to a syslog server.
9.2.10 Monitoring Data and Metadata Facts
This lesson covers the following topics:
Log data
Metadata
Data analyzers
Log Data
Log data is a critical resource for investigating security incidents. As well as the log format, you must also consider the range of sources for log files and know how to determine the type of log file that best supports any given investigation scenario.
Event data is generated by processes running on network appliances and general computing hosts. The process typically writes its event data to a specific log file or database. Each event comprises message data and metadata:
Event message data is the specific notification or alert the process raises, such as "Login failure" or "Firewall rule dropped traffic."
Event metadata is the source and time of the event. The source might include a host or network address, a process name, and categorization/priority fields.
Accurate logging requires synchronization of each host to the same date, time value, and format. Ideally, each host should also be configured to use the same time zone or a "neutral" zone, such as Coordinated Universal Time (UTC).
Windows hosts and applications can use Event Viewer format logging. Each event has a header reporting the source, level, user, timestamp, category, keywords, and hostname.
Syslog provides an open format, protocol, and server software for logging event messages. It is used by a vast range of host types. For example, syslog messages can be generated by switches, routers, firewalls, and UNIX or Linux servers and workstations.
A syslog message comprises a PRI code, a header, and a message part:
The PRI code is calculated from the facility and severity level (a worked example follows this list).
The header contains a timestamp, hostname, app name, process ID, and message ID fields.
The message part contains a tag showing the source process plus content. The format of the content is application-dependent. It might use space- or comma-delimited fields or
name/value pairs.
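To make the PRI code concrete, here's a small worked example. The PRI value is the facility number multiplied by 8 plus the severity level, and the logger utility (standard on Linux) lets you generate a test message; the message text here is made up.

# facility 4 (auth) and severity 6 (informational): PRI = 4 * 8 + 6 = 38,
# so the message on the wire begins with <38>
logger -p auth.info "Login failure for user dana"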
Metadata
File metadata is stored as attributes. The file system tracks when a file was created, accessed, and modified. A file might be assigned a security attribute, marking it as read-
only or a hidden or system file. The ACL attached to a file showing its permissions represents another attribute. Finally, the file may have extended attributes recording an author, copyright information, or tags for indexing/searching.
When files are uploaded to social media sites, their metadata can reveal more information than the uploader intended. Metadata such as the current location and time is added to media like photos and videos.
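As a quick illustration, these are common ways to inspect file metadata from a Linux shell. The file names are placeholders, and exiftool is a third-party package that may need to be installed first.

stat report.docx        # timestamps (created/accessed/modified), size, and permissions
lsattr report.docx      # extended attribute flags on ext4 file systems
exiftool photo.jpg      # embedded EXIF data, such as capture time and GPS coordinates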
Metadata is produced by almost all network activity. Server requests, applications, and email are examples of where metadata can be found. In the context of bandwidth monitors, metadata is used to investigate security-related concerns or incidents. The following table describes three types of metadata.
Type
Description
Email metadata
Email provides metadata that is used to trace email. All emails come with a header containing information about the sender and recipient. Parts of the headers can be spoofed, giving investigators false information. However, there are security devices that put X-headers throughout an email's header. These provide the originating email account and IP address, not the spoofed one.
Mobile metadata
Tablets, laptops, smartphones, smartwatches, and any other device that connects to the internet and can be moved around produce mobile metadata. These devices send emails and text messages and use apps. All of these activities produce metadata that can be used to identify people, places, times, and even deleted data. Pictures can be timestamped and geolocation-stamped. Much of this metadata also reveals the origination of the data and the sender.
Web metadata
Websites produce many types of metadata. The metadata on a user's machine versus the server can differ greatly. The data on both sides of the transmission can help fill in gaps and corroborate findings. Metadata includes IP addresses, user requests, downloads, time spent on the site, and attempts to gain unauthorized access. Web metadata includes cookies, browser history, and cached pages. Many times, malicious actors will
attempt to obfuscate their metadata. However, there are ways of finding the actual metadata, especially for trained forensic investigators.
Data Analyzers
Network admins should always look for a way to examine what is happening inside the network. There are several tools to help sift through the tremendous amounts of data generated by network activity. The following table describes some of these tools; a brief flow-export sketch follows the table.
Tool
Description
NetFlow
NetFlow is a feature on Cisco routers. It works at layers 2 – 4. It can examine each data flow that comes through the network or be set to sample sessions at specific intervals.
sFlow
sFlow is a packet sampling technology that works on layers 2 – 7. Unlike NetFlow, sFlow can only be used in sampling mode. This is a stateless packet sampling that provides information on various layers and does it quickly
and efficiently.
IPFIX
IPFIX directly integrates data that usually goes to syslog or SNMP. This eliminates additional services collecting data from each network device. IPFIX has provisions for variable-length fields, meaning there are no ID number restrictions. IPFIX addresses the need for a standardized protocol for Internet Protocol (IP) flows. This data comes from routers, servers, and other network appliances acting as mediation systems. The data is formatted, sent to an exporter, and then sent to a collector. IPFIX, like NetFlow, looks at flows and the number of packets sent and received during a given session.
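To give a feel for how flow records move from an exporter to a collector (as described in the table above), here is a hypothetical sketch using open-source Linux tools: softflowd as the exporter and nfcapd/nfdump as the collector. The interface name, collector address, port, and directory are all assumptions; on a Cisco router, the exporter side would be configured in IOS instead.

softflowd -i eth0 -n 10.10.10.100:2055        # export flow records seen on eth0 to the collector
nfcapd -D -l /var/netflow -p 2055             # on the collector: write received flow records to /var/netflow
nfdump -R /var/netflow -s srcip/bytes -n 10   # report the top ten source IPs by bytes transferred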
9.3.1 Forensic Documentation and Evidence
Click one of the buttons to take you to that part of the video.
Forensic Documentation and Evidence 00:00-00:29
Digital forensic analysis is the art of revealing important information from computer systems and networks. Every piece of evidence can hold key information. From deleted files to timestamps, user activity, and unauthorized traffic, gathering and examining evidence from computer systems and networks can reveal critical insights. But how do we ensure the validity and integrity of this evidence?
Documentation 00:29-01:24
Documentation plays a crucial role in collecting, preserving, and presenting valid digital proofs. Failure to maintain a thorough record can jeopardize the integrity of the evidence. It's important to understand that there are various processes and tools used to acquire different types of digital evidence from computer hosts and networks. By following strict protocols, these processes not only demonstrate how evidence is acquired but also establish that it's an accurate representation
of the system at the time of the event.
Just like DNA or fingerprints, digital evidence is latent, meaning it can't be seen with the naked eye. It requires specialized machines and processes to interpret. Therefore, formal steps must be taken to ensure the admissibility of this evidence in court. Proper documentation is crucial in showing how the evidence was collected and analyzed without any tampering or bias.
Due Process 01:24-01:50
This is where due process comes into play. Just as the law requires fairness in criminal convictions, forensic investigations must adhere to procedural safeguards to ensure fairness. Anyone involved in the investigation, from technicians to managers, must be aware of these processes to avoid compromising the integrity of the investigation. Defense lawyers will always look for any doubts or mistakes in the evidence or collection process.
Legal Hold 01:50-02:19
When information becomes central to a court case, it's essential to preserve it. This concept, known as legal hold, entails suspending routine deletion of electronic or paper records and logs. Regulatory
requirements, industry standards, or litigation notices from law enforcement or lawyers can trigger legal holds, leading to the seizure of computer systems as evidence. When information is subject to legal hold, computer systems may be taken as evidence, disrupting the network.
Chain of Custody 02:19-03:11
To maintain the integrity of the evidence, a chain of custody is established. Host devices and media from the crime scene are carefully labeled, bagged, and sealed using tamper-evident bags. Anti-
static shielding is used to protect electronic media from damage. Each piece of evidence is documented on a chain of custody form, recording who collected it, who handled it afterward, what measures were taken for backup and analysis, and where it was stored. This ensures the evidence remains intact and untampered with. Finally, the collected evidence is stored in a secure facility. This
not only includes access control but also environmental control to protect the electronic systems from damage.
By following these key elements—proper documentation, due process, legal hold, and chain of custody—digital forensic analysts can ensure the validity and credibility of the evidence they gather.
Summary 03:11-03:27
That's it for this lesson. In this lesson, we talked about forensic documentation and evidence. We reviewed the importance of documentation and due process. We discussed legal holds and the importance of the chain of custody for ensuring the integrity of evidence.
9.3.2 Forensic Acquisition of Data
Click one of the buttons to take you to that part of the video.
Forensic Acquisition of Data 00:00-00:15
A forensic investigator gathers evidence from many sources. In this lesson, we will discuss software and hardware sources of potential evidence. We will also discuss the order in which the evidence needs to be gathered.
Order of Volatility 00:15-00:59
The process of capturing volatile data follows the order of volatility. We need to preserve the most volatile data first and work our way down to data that is more persistent. If an attack is underway,
your computer forensics response team is probably going to capture volatile data before trying to gather data from the hard disk or turning the system off.
The order of volatility is: RAM, Swap Files/PageFiles, Hard Drive, Remote Logs, Archived Data. These items must be investigated in this order so that potential evidence is not lost.
Each one of these items has its own challenges and potential dangers when acquiring forensic data. Let's look at tools and approaches to acquire and retain data from these items.
Disk 00:59-01:34
In a computer forensics investigation, the data on the hard disk drive is a key piece of evidence. A lot
of the things that we do on a computer system are saved in some way on the hard disk drive, including virtual memory. A wealth of data is there.
In addition, information attached to deleted files may still be on the disk. Therefore, the hard drive itself is a goldmine of evidence for a prosecutor. Caution must be exercised when making a copy of the hard drive. A regular file-for-file copy is not good enough. It needs to be a sector-by-sector copy that includes data formerly deleted but still accessible.
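To illustrate the difference between a file-for-file copy and a sector-by-sector copy, here is a minimal sketch. It assumes the suspect drive appears as /dev/sdb behind a write blocker and that the placeholder path /evidence has enough free space.

cp -a /mnt/suspect/. /evidence/files/    # copies only allocated files -- not forensically sound
sudo dd if=/dev/sdb of=/evidence/suspect.img bs=4M conv=noerror,sync status=progress
                                         # reads every sector, including slack and unallocated space

A forensic imager such as dc3dd, shown later in this chapter, adds on-the-fly hashing and logging on top of this basic sector copy.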
Random Access Memory 01:34-02:00
RAM is the most volatile of all computer data storage. RAM is cleared when a computer is shut down. Once gone, it cannot be recovered. Data in RAM can be copied as long as the system is running, but this should be done only by someone with proper training. The data stored in RAM can hold valuable information. Many times, malware like worms, viruses, and trojan horses is created to be memory-resident only. This makes such malware difficult to catch.
Swap/Page File 02:00-02:27
Swap files, or page files, are a virtual extension of RAM. An OS is designed so that if you are running low on RAM it can place not-in-use information into the swap or page file on the hard drive to be used later. An admin can determine how much space to allocate to page files. For the forensic investigator, this is another potential source of evidence. The page file data does not automatically delete at shutdown unless you change the default settings.
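If you want to see where virtual memory lives on a Linux host, a couple of quick checks are shown below; on Windows, the page file is typically C:\pagefile.sys. The commands are standard, though the output will vary by system.

swapon --show      # active swap devices or swap files and their sizes
cat /proc/swaps    # the same information read straight from the kernel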
OS 02:27-02:57
The OS is a virtual treasure trove of evidence. File systems like NTFS and ext4 are the roadmap to where data is stored. On Windows, there is also the registry, a database that stores information about all the applications installed on a computer. Registry keys are folder-like objects that store application-specific information for each application recorded in the registry. The recycle bin is where deleted files are placed. Also, the print spooler can be examined to see the print history from a computer.
Device 02:57-03:25
In the last 30 years, technology devices have changed dramatically. With the increase in digital devices, there is an equal increase in devices that may require forensic examination. These devices contain all the evidence a forensic investigator needs. Devices such as smartphones, tablets, laptops, and smartwatches all share common elements, namely RAM, a CPU, logs, and storage space. A trained investigator must be familiar with all platforms.
Firmware 03:25-03:51
Firmware is the basic level of software that controls some hardware. Firmware is stored in read-only,
persistent memory. This non-volatile memory does not self-delete and in many cases is not updated. Firmware has become vulnerable to different attacks like rootkits and even steganography. The hard drive has firmware that can be manipulated by malicious actors. Extracting
this data is difficult and requires specialized training and expensive tools.
Snapshot 03:51-04:13
A snapshot can be a useful tool in capturing the exact state of running systems. Snapshots capture volatile data in an ever-changing environment. This tool is not a substitute for the sector-by-sector copy of a disk. The snapshots can be saved for future examination. Snapshots are valuable only if taken of a running system that is currently experiencing an attack.
Cache 04:13-04:38
A cache is stored data that is used to improve the speed of the computing process. There are different caches in a computer system. Address Resolution Protocol, or ARP, maps IP addresses to MAC addresses. This protocol has a built-in caching component that saves CPU time and network bandwidth by referring to the cache. Internet browser history is also cached and can provide valuable evidence regarding search history.
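Because the ARP cache ages out quickly, it's one of the first things worth recording during a live response. A minimal sketch is below; the output file name is arbitrary.

ip neigh show | tee arp-cache.txt    # current IP-to-MAC mappings (modern Linux)
arp -a >> arp-cache.txt              # legacy net-tools syntax for the same cache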
Network 04:38-04:59
Network forensic data plays an important role in an investigation. Network appliances that are important in an investigation include firewalls, routers, switches, domain controllers, DHCP servers, application servers, and web proxy servers. There are also applications on the network, including intrusion detection and intrusion prevention systems.
Artifacts 04:59-05:19
In computer forensics an artifact is an object that contains forensic value. Forensic value is evidence
found on hardware or in areas like registry hives that show indicators of compromise. These indicators are a digital sign of a breach. It is important to preserve these artifacts and understand their value to a forensic investigation.
Summary 05:19-05:39
That's it for this lesson. In this lesson, we learned the procedure used to prioritize evidence gathering
called the order of volatility. We discussed different areas within a network that contain potential evidence such as cache, network appliance, OS, disks and RAM. We also covered artifacts and how they are defined and identified.
9.3.3 Forensic Tools
Click one of the buttons to take you to that part of the video.
Forensic Tools 00:00-00:12
In this lesson, we're going to discuss some of the forensic tools on the market. Some of the tools are
Linux-specific, and others are Windows-centric.
Forensics Tools 00:12-00:31
Investigating and gathering evidence for court action is part of handling computer security. These computer forensic investigations require specialized tools to gather evidence without making changes to the devices or their stored data. These tools include imaging software, data extraction tools, and advanced search capabilities.
Data Gathering Tools 00:31-01:30
Let's take a look at three of these tools and how they work. We're going to talk about dd, Memdump, and WinHex.
The first tool, dd, is an extraction tool. It's one of the oldest forensic tools still in use. It's used in Linux and Unix systems to create bit-by-bit copies of a physical hard drive without mounting the drive beforehand. It's run from the command line with the dd command, and it's easy to employ. A raw image is created for forensic analysis, and the user has multiple file extension choices.
Memdump is another data gathering tool. It captures the data held in a device's volatile RAM. Normally, a second tool is needed to search through all the data in a memdump file.
WinHex is a powerful disk and universal hexadecimal editor. It's used in digital forensics for data recovery. WinHex is Windows-compatible and can be used with a whole bunch of options, making it a very versatile forensic tool.
Imaging and Investigation 01:30-01:58
Now let's move on to imaging tools. Imaging tools must create exact copies of data without making a
single change—not even one bit. FTK Imager is a powerful tool that allows the investigator to acquire, preview, and copy data thoroughly and forensically. Autopsy is exactly what it sounds like—
a forensic tool that investigators use to thoroughly examine extracted data, pictures, and empty disk space in order to determine exactly what happened.
Summary 01:58-02:22
And that's it for this lesson. We talked about forensics tools and how they fall into a few categories: data gathering, imaging, and extraction. We discussed what forensic tools must be able to do: create an exact image of a disk with perfect integrity and then search large amounts of data for specific file extensions, hidden files, and deleted data to support root cause analysis.
9.3.4 Create a Forensic Drive Image with FTK
Click one of the buttons to take you to that part of the video.
Create a Forensic Drive Image with FTK 00:00-00:26
In this demonstration, we're going to discuss creating a forensic disk image with the Forensic Toolkit. It's better known as the FTK Imager. In our scenario, we need to examine a hard drive from a malicious employee's workstation. Before you try this for real, make sure that your organization considers you qualified to do this and that you have the backing of your legal department.
Use a Write Blocker 00:26-02:24
The first thing we need to do is create an image of the employee's hard drive. Like I said, we're going to use the FTK Imager to do this. Remember, when we're conducting a forensic investigation, we don't want to modify the evidence in any way, shape, or form. If this situation were to be litigated for some reason—say, the employee gets fired and then sues the company—then the opposing counsel could claim that when we conducted our forensic investigation, we planted the evidence on the hard drive. That's why it's crucial to make sure that the hard drive isn't changed in any way during the forensics investigation.
We're going to create an image of that hard drive, and we're going to do all of our testing, examining, and investigative work on the imaged copy of the hard drive, not on the actual hard drive. Once we've created that image, we'll use a second tool to analyze the contents of the hard drive to see if we can find anything that's questionable.
The first thing we need to do is get an image of the hard drive. To do this, we have to connect the hard drive to this forensic workstation. That's problematic because as soon as you connect a hard drive to a Windows workstation, Windows immediately starts writing little bits of data to the drive—but we don't want to modify the drive in any way.
So we can't just directly connect the drive to a SATA connector on this workstation. Instead, we need
to implement a write blocker, also known as a forensic disk controller. Its job is to block writing to the
hard drive. It allows us to connect the employee's hard drive to the write blocker, and then we connect the write blocker to our machine, usually with a USB cable. This prevents any write operations coming from the operating system on the forensic computer from going through to the device that we're analyzing. So remember, when you're conducting a forensic investigation on a hard drive and you're going create an image, always use a write blocker to prevent any type of write operation from occurring on that drive. I've already set this system up with a write blocker, and it's ready to go.
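On a Linux forensic workstation, you can add a software safeguard on top of the hardware write blocker by flagging the evidence device read-only in the kernel before touching it. This supplements, but does not replace, the write blocker; /dev/sdb is an assumed device name.

sudo blockdev --setro /dev/sdb    # mark the block device read-only
sudo blockdev --getro /dev/sdb    # prints 1 when the read-only flag is set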
Create a Forensic Image 02:24-05:53
Let's go ahead and use the FTK Imager to create an image that we can analyze. By the way, FTK Imager is a free tool. A lot of the forensic software is very, very expensive, but you don't have to spend a lot of money to be able to conduct a good forensic investigation. There are a lot of free, open source, and legitimate tools for forensic investigations.
Within FTK Imager, let's click on Add Evidence Item. Then we have to specify what we're going to add. We're adding a physical drive because I've connected it to a write blocker on this particular system. Click Next.
We have to specify which drive we want to create the image from. You'll notice, when I click the dropdown list, it picked all of the hard drives on this system. You need to be very careful that you don't choose the wrong one. For example, if I were to choose Physical Drive 0, that's my local workstation hard disk drive. We'd be creating an image of my local system, not the actual drive that's
being used for evidence. Physical Drive 1 is the drive that I've connected to the write blocker. That's what we want to choose. I'll click Finish. We'll come over here, to the Physical Drive 1, and we'll right-click on it. We want to Export Disk Image.
Under Image Destination, we'll click Add. Then we can specify which type of destination image we want to create. Raw (dd) is the default selection because it's probably one of the most widely used imaging formats for forensics, so we'll just leave it set the way it is. Click Next.
We need to document the evidence. Remember, whenever you're conducting a forensic investigation, you need to document everything. In fact, you should take a picture of the entire setup that you're using to analyze this hard disk drive. You should take a picture of the drive, take a picture of how it's connected to the write blocker, and take another picture of how the write blocker is connected to the computer. You might even want to take a video of the entire process.
Under Evidence Item Information, you'll want to assign a case number. Let's do '1 2 3 4' and evidence number '5 6 7 8'. Then we'll give it a unique description. We'll enter 'HD from Mary Worley.' I'll put 'Dana Fellows' down as the examiner. Click Next.
We have to specify where we want to store the image file on my computer. I have a folder for my forensic images. Let's create a new folder, '1 2 3 4 Mary Worley'. I'll put in the same thing here for Image Filename, '1 2 3 4 Mary Worley'. If we want to, we can fragment the image--that is, break the image file into multiple pieces. This is a small hard drive. I'm going to set that to zero, which basically means we'll have one image file for the entire hard disk. Click Finish.
We want to make sure that Verify the images after they are created is checked. It's also a good idea to create a directory listing of all the files and the image after they're created, so check that box too. It can be useful for when you're searching for information. Click Start. At this point, the imaging process has started. It'll take a little bit of time to complete, especially if you're going to be working with a big hard disk drive. I'm going to pause the recording now and come back when it's done.
Okay. As you can see, it's almost done verifying the image. In just a few seconds, the process will be
complete.
MD5 Hash and SHA1 Hash 05:53-06:09
Notice that it's created two different hashes, an MD5 Hash and a SHA1 Hash. For both of these, the hashes match. The MD5 Hash matched, and the SHA1 Hash matched as well. That's good. That's exactly what we want to see. I'll go ahead and hit Close.
Verify Image File 06:09-07:01
Before we end this demo, let's verify that the image file has been created. I'll open File Explorer and navigate to the folder where we saved the image. Here are the various pieces of information
that were pulled from the hard drive to create it. We have the image file itself. There's a CSV document that contains a listing of all the filenames and directory names. Last on the list, we have a document that provides a nice summary of the image creation process. It gives us the case number, our evidence number, the description, the examiner, all the information we filled out earlier, and the information about the hard drive itself from Mary Worley's computer. At this point, our image file is created. The next step of the process is to use another tool, such as Autopsy, to examine the image.
Summary 07:01-07:23
That's all for this demo. We used FTK Imager to capture a forensic image of a hard drive. We discussed the importance of using a write blocker to keep data from getting tampered with and the importance of examining a copy of the disk, not the original disk itself. Then we created the image and verified that it was saved to the folder we created.
9.3.5 Create a Forensic Drive Image with Guymager
Click one of the buttons to take you to that part of the video.
Create a Forensic Drive Image with Guymager 00:00-00:35
There are several ways to capture a disk image as part of a forensic investigation. In this demo, we're going to do this with a program called Guymager. Guymager has a graphical user interface, making it a bit easier to use than a command line tool. We're on the Guymager home page, and you can read more about it there. One thing I want to point out on the website is that Guymager does come on several live CDs and security operating systems. We're going to use Kali Linux for our demo, so let's close the browser and get started.
Write Blocker and Linux Version 00:35-01:11
I want to mention a couple of things before we get started. Normally, you'll want to have a write blocker between the disk you're imaging and the forensic workstation that you're working from. In a virtual environment, such as my test system here, I can't really do that. The write blocker would keep
data from being written to the disk we're wanting to image, which is very important.
The other thing I want to point out is that Kali has a live CD version that has a lot of forensic tools that my copy doesn't have. Keep that in mind if you're setting up your own lab and be sure to investigate the right copy of Linux that will work best for you.
Launch Guymager 01:11-02:11
I said that Guymager is a GUI tool, but it does need to be run as a sudo user. There's a shortcut for it under the Application Launcher, so if I go up here, down to Forensics, and then over to Forensic Imaging Tools, I see Guymager. When it launches, it warns me that it needs to be started with root rights in order to perform acquisitions. That's no fun, but we can get around it. I'll launch it from the terminal. So let's click No, I don't want to continue here.
I'll go up here and launch a terminal window. After it loads, I'll type 'sudo guymager' and press Enter. It prompts me for a password to continue as an admin, so I'll type that in here. Just a reminder, when you type passwords in Linux, the cursor doesn't move, and you think that you're not really typing. This is normal; it's a security feature. Your keyboard isn't broken. I press Enter, Guymager is launched, and now I can acquire images.
Acquire an Image 02:11-03:57
I'll make this full screen to take advantage of all the space. Right away, I notice I have three disks, or
partitions. This first one is my hard drive for my Kali machine. I know that because it's a 20-gig disk, and that's what I used. This second one is what I'm after. I know this is it because it's a 2-gig disk, which I plugged in. To acquire an image, I simply right-click and choose Acquire Image from the menu.
Now we have a few choices for the file format. This one is called Expert Witness Format. When we select it, we have the option to fill out all this additional data that would be needed if this was part of a legal investigation. We're not going to pick that one. We're going to pick the Linux dd raw image. This is the format I'm going to use when it's time to examine the image content. Over here, we can split the file into smaller pieces. I have a smaller drive, so I could uncheck that box, but I'll just leave it as-is.
Now I need to put the images somewhere. I have a temp folder for these images, and I'll navigate to /home/dana/temp and click Choose to select that directory. I need to supply a filename, so I'll just enter 'Image1'. The program puts in the extension, so we don't have to do that. Below here, I'm going to make sure the Calculate MD5 box is checked so that we have the hash value of the image. We're also going to verify the image after acquisition. Click Start to get the process going.
We have a very small drive, so this shouldn't take long. In fact, if you look at the progress, you can see we're moving along very nicely. It looks like it's finished, and the indicator light is green. We're done with this part, so let's verify that it acquired the images and saved. I'll minimize Guymager for now.
Verify Image Creation 03:57-04:28
Now let's go up to our Application Launcher and open up File Manager. Remember, we saved the image in a temp directory under /home/dana, and here it is. I'll slide the mouse over and double-
click on temp to open it and view the contents. I can see three files. It does look like Guymager broke my file into two, since I checked the box to split files over 2 gigs. The third file is a log file. Let's
open up that log and take a look.
Log File 04:28-05:18
My log file has some info about Guymager itself--the version, timestamp, and so on. I'll scroll down here a little ways. Now I see some information about disk size and other data. I want to go down a little farther, to here. This is more relevant information about the acquisition of the disk. I have the device name, /dev/sda, and the size, format, and so on. Down here, I can see the MD5 hash calculation.
Finally, here, we can see when the image was captured, how long it took, and the speed. At the very
bottom, we can see the three files that were created when we ran the program. At this point, our disk
is captured. The next step is to examine the contents with another forensic software tool designed to
do so.
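If you'd like to double-check the acquisition outside of Guymager, you can hash the split segments together and compare the result with the MD5 value recorded in the log. The segment naming (Image1.000, Image1.001, and so on) is an assumption based on how split raw images are usually numbered, so adjust the pattern to match the files you actually see.

cat /home/dana/temp/Image1.0* | md5sum    # should match the MD5 hash shown in the Guymager log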
Summary 05:18-05:24
That's it for this demo. In this demo, we used Guymager to capture a drive.
9.3.6 Create a Forensic Drive Image with DC3DD
Click one of the buttons to take you to that part of the video.
Create Forensic Disk Image with DC3DD 00:00-01:32
In this demonstration, we're going to create a forensic drive image. It's very important that you understand you can't use standard file copying utilities to create a forensic drive image. You can't use, say, Windows Explorer or File Explorer. Those utilities copy files that have an entry, a record, in the allocation table of the partition where they reside, so they only copy data that's associated with a file or folder in the file system. When you're conducting a forensic investigation, you need all the data on that hard drive, especially the data that's not associated with a particular file or folder in the file system but is still on the hard disk drive. Basically, we're looking for stuff that's been deleted—stuff that someone may be trying to hide. We need to use a drive imaging utility to do this. There are a variety of utilities that you can use. Some cost a lot of money; some cost practically nothing. We're going to use the latter option in this demonstration today.
In this demo, we'll use dc3dd to obtain a raw image of a hard drive. dc3dd was developed at the Department of Defense's Cyber Crime Center, and it's basically an enhanced version of the open source dd command with added features for computer forensics. One of the main characteristics of dc3dd is that it offers the possibility of hashing on the fly with multiple algorithms (MD5, SHA-1, SHA-256, or SHA-512). You'll want to use a write blocker between your machine and the disk you're obtaining an image from.
Linux Storage Devices 01:32-02:20
Before we start looking at how to create the drive image, you need to understand how Linux storage devices are addressed by the Linux system. It's kind of difficult to understand when you're new to Linux, but all storage devices on Linux systems are addressed using a device file located in the /dev directory. If a process needs to write information to a hard disk drive, it writes it to a specific file in the /dev directory, which then redirects the IO data to the appropriate hardware device, such as a hard disk drive.
What we need to do is figure out what the device name is for the drive that we want to image. There are a variety of command line utilities you can use to do this. One way to identify the drive is to run fdisk with the -l parameter, as in sudo fdisk -l.
Use fdisk to Identify Disks 02:20-03:32
I think we have enough information to get started. We could go to a terminal and start dc3dd, but I'm going to go up to the Application Launcher and start it from there. I'll come down to Forensics and then select Forensic Imaging Tools. I see dc3dd, so let's click on it and launch it.
The only reason I wanted to start it this way is so that you can see we have a nice manual here to learn more about how to use it. I'm not actually going to go over this information right here, but I will explain things when we type some of the commands in a minute. I'm going to type 'clear' to get a clean screen to work with.
I like to see what disks I have to work with by typing 'sudo fdisk' with the '-l' parameter. Press Enter. Now put in the sudo password. The first disk we'll look at is this one, /dev/sdb. It's a 20-gig disk, and I know that this is the one my Kali Linux is installed on. But this isn't the disk I want to image.
If I go up a little, I see a second disk, /dev/sda. This one is 2 gigs. This is the one I want to image. Make sure you know which disk you're working with when you do this. I'll come down here and type 'clear' to clear the screen.
Using the dc3dd Command to Create a Forensic Drive Image 03:32-05:22
Now we need to type in the command to create the image, tell it where to find the disk, where to store the copy, which hash to use, and name the log file. I'll go ahead and type that in and then come back in a second and explain the command.
Okay, I have the command typed in. The first part is 'sudo dc3dd'. This just tells Linux to run our command as root, or basically like an admin in Windows. The next part, 'if=/dev/sda', is my input file. That's what the letters I-F stand for, input file. After that, we need to specify where our image will go. O-F stands for output file, so we've typed in 'of=/home/dana/temp/image2.img'. That's the path where I'll save my image file, and I named it 'image2.img'. Next, we'll use 'hash=sha256' for our hash type. We could use MD5 or one of the other types we listed in the intro of this demo, but this will work fine. The last thing we have is 'log=/home/dana/temp/image2.log'. As you can guess, this is just a log file, which is very important to have along with the disk image. It's going to be located in the same directory as our disk image. We'll look at it after we create it.
This all looks really good. I'll press Enter to start the disk imaging. Down here, we can see the copy progress. Keep in mind that this is a very small disk without a lot of data, so it's going quickly. It looks
like it's done. Down here, I can see when it completed. Up here is the location of the image file. I'll go
up and close out of our terminal. Now let's go look in our temp file, confirm that it copied over, and look at our log file.
Verify Results 05:22-05:50
Next, we'll go up to Application Launcher and over to File Manager. I'll go over to my temp folder and
double-click on it to open it. Here are my two files, my image file and my log file. Let's open that log now. It shows us things like the time the image started, the size, where the file was located, our hash
value, the output filename, and the time it completed. Our next step would be to use another forensic
tool to examine the image itself.
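Before handing the image to an analysis tool, a couple of quick, read-only checks can confirm you captured what you expected. These use standard Linux utilities and the same paths as the demo.

file /home/dana/temp/image2.img         # identifies what the raw image contains (boot sector, file system, etc.)
fdisk -l /home/dana/temp/image2.img     # lists any partition table found inside the image
sha256sum /home/dana/temp/image2.img    # should match the hash recorded in image2.log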
Summary 05:50-06:00
And that's it for this demo. We used the command line utility dc3dd to capture a forensic image from a disk.
9.3.7 Examine a Forensic Drive Image with Autopsy
Click one of the buttons to take you to that part of the video.
Examine a Forensic Drive Image with Autopsy 00:04-00:34
After a computer forensic investigator captures an image of a drive, they need to examine it. Autopsy is a popular examination tool. It's digital forensic software that's free, open source, and said to have most of the features that you'd find in commercial digital forensics tools. Some of the features include hash matching, registry analysis, web analytics, and the ability to do a keyword search. For our demo, we're going to use Autopsy to analyze a disk that has been previously captured and saved to my hard drive.
Add Data Source 00:34-02:43
Double-click on it to start it. Sometimes it takes a minute or so for it to fully launch. We want to create a new case. Click on that option. We'll give it a case name, '1 2 3 4 Mary Worley'. Let's specify the directory where we're going to store the information we're about to create. Let's put it in the same directory as our image file. We'll keep all the data together. I'll actually create a folder for it and name it 'Autopsy'. I'll select the folder we just created and then click Next. Enter in a case number, '1 2 3 4 Mary Worley'. I'll put the examiner's name in here. I'll hit Finish.
This next part is going to take a minute or two while it creates the case and gathers all the files. I'll pause the demo while this is running.
Okay, that took several minutes, and now I'm presented with the Add Data Source page. We have to
specify what we want to analyze. We're going to analyze a disk image or VM file. Click Next. We need to specify what image file we want to look at. It's in a folder called Forensic Images under 1 2 3
4 Mary Worley. There's the image file that we created a minute ago. Hit Open. We need to set our time zone. I'm in Mountain Time, so I'll scroll down until I find that and select it. Click Next.
We have to specify exactly what we want to look for. This tool uses what's called Ingest Modules. An
ingest module is basically just a piece of software that looks for a particular type of information in the
image file. You can come over here and specify what you want to look for. For example, you could go under Keyword Search and specify what type of information you want that particular module to look for. We want to look for all of this information here, so I'll check all the boxes and click Next. Now let's go ahead and click Finish.
The process of analyzing the image file has started. It can take quite some time to complete, depending on the size of disk and amount of data. In my test environment, I don't have a very large disk or very much data, so it should happen pretty quickly. But I'll go ahead and pause the
demo while it runs.
Identify Suspect Content 02:43-05:43
At this point, the image file has been thoroughly examined by the Autopsy tool, and the results are displayed here on the left, organized by the type of data or the view of the data that you want to use. It's very useful. For example, it sorts the data by the file type extension.
Let's click on Images. There are 43 of them. Let's see what we have. If I click this one, WACC classroom.jpg, I can see the image down here, in a preview pane. It looks like a picture of a computer lab or classroom.
See this one up here, with the red X? This is a file that was deleted on the hard drive. Of course, as you probably know, when you delete something, you don't actually delete it at all--you just delete the pointer, or reference to it. The file won't go away until the drive is formatted or overwritten. Even then, files are sometimes recoverable. Down in the preview, this looks like a screen shot of something.
I'll click on Videos. Over in the listing, I have four videos. One's been deleted. When I click on Audio, my list is the same as it was for videos, so Autopsy must recognize that there are audio files as part of the videos. Under Documents, I have some Microsoft Office documents. I could export those out and look at the contents in the supported programs if I wanted to. I have a few PDF documents as well, along with some plain text files. Once again, a few of these have been deleted, as you can see by the red X. I have no executable files to look at, so we'll skip those. Autopsy does give us a list of just the deleted files, all grouped together. That is handy if I was focused just on what Mary may have deleted and might be trying to hide.
I have a few more categories. We have Recycle Bin and something called User Content Suspected. I'll make my way down to the email addresses. Out of concern for confidentiality, I'm not going to open this up because I'm not sure exactly what's on this disk. Just to clarify, when we see things like email addresses, IP addresses, URLs, and so on, it does not mean email accounts configured on the system, but any email address that shows up on the disk in a document, spreadsheet, etc.
Under IP addresses, I see 0.1.2.3. This might be a simple false positive because the format is somewhat like an IP address. Under phone numbers, I have some phony-looking phone numbers. This looks like something from TV, since they have a 555 prefix. I have a bunch of URLs listed. This is handy for seeing where the person has been spending their time on the web.
I have Hashset Hits, but there's nothing there to see. Right under this, I actually have Email Messages, but there aren't any to look at on this disk. I have Credit Cards down here. Once again, I'm not going to click on those and open them because I'm not sure what I might find. And now we're at the bottom of the list.
Additional Features 05:43-06:03
Now, there are entire courses and degree programs on how to use this product and the whole legal process that goes along with it. This demo is a very brief overview of what the software is capable of doing. We didn't even look at things such as geolocation, timeline, report generation, or other features—there's a lot more to learn!
Summary 06:03-06:10
That's it for this demonstration. In this demo, we used Autopsy to examine a disk image.
9.3.8 Forensic Data Integrity and Preservation
Click one of the buttons to take you to that part of the video.
Forensic Data Integrity and Preservation 00:00-00:35
Data integrity and preservation are fundamental aspects of digital forensics, ensuring the reliability and usefulness of the data. They serve to maintain and protect the original state of digital evidence from the moment of collection to its presentation in court. Any intentional or unintentional alteration can compromise the accuracy of the evidence, thereby undermining the entire forensic investigation. This means that evidence acquisition, imaging, and storage all need to be done using well-defined processes.
Order of Volatility 00:35-01:39
Evidence should be captured in the order of volatility, from more volatile to less volatile. The ISOC best practice guide to evidence collection and archiving recommends the following order: First, CPU registers and cache memory—including cache on disk controllers, graphics cards, etc. Next, the contents of nonpersistent system memory, or RAM, including the routing table, ARP cache, process table, and kernel statistics. Then, data stored on persistent mass storage devices like HDDs, SSDs, and flash memory devices, followed by partition and file system blocks, slack space, and free space.
Next would be system memory caches, such as swap space, virtual memory, and hibernation files. And finally, user, application, and OS files and directories. Other sources could include remote logging and monitoring data, physical configuration and network topology, and archival media or printed documents.
Acquisition Integrity 01:39-02:58
Data acquisition is complicated by the fact that it's more difficult to capture evidence from a digital crime scene than it is from a physical one. For example, some evidence could be lost if a computer system is powered off; on the other hand, other evidence may be unobtainable until after the system has been powered off. Because of this, it's important to know the three states for persistent storage acquisition.
Live acquisition means copying the data while the host is still running. This may capture more evidence or more data for analysis and reduce the impact on overall services. Still, the data on the actual disks will have changed, so this method may not produce legally acceptable evidence. It may also alert the threat actor and allow time for them to perform anti-forensics.
With static acquisition, the host is shut down normally. If malware is a concern, this method runs the risk that the malware will detect the shutdown process and perform anti-forensics to try to remove traces of itself.
If this is the case, static acquisition by pulling the plug instead of powering down may be a solution. This means disconnecting the power at the wall socket (not the hardware power-off button). This is most likely to preserve the storage devices in a forensically clean state, but there's the risk of corrupting data.
Proof of Integrity 02:58-03:45
Once the target disk has been safely attached to the forensics workstation, data acquisition proceeds as follows:
A cryptographic hash of the source disk media is made using either the MD5 or SHA hashing function. A bit-by-bit copy of the source media is made using an imaging utility. A second hash is then made of the image, which should match the original hash of the media. A copy is made of the reference image, validated again by the checksum. All analysis is performed on the copy.
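As a minimal sketch of that flow on a Linux workstation (the device and file names are assumptions, and the source drive is presumed to be behind a write blocker):

sudo sha256sum /dev/sdb | tee source.sha256                                      # 1. hash the source media
sudo dd if=/dev/sdb of=reference.img bs=4M conv=noerror,sync status=progress     # 2. bit-by-bit image
sha256sum reference.img                                                          # 3. must match the source hash
cp reference.img working-copy.img                                                # 4. copy of the reference image
sha256sum working-copy.img                                                       # 5. validate the copy; analyze only this file

In practice, a tool like dc3dd collapses the first three steps by hashing on the fly, which is why it was used in the earlier demo.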
This proof of integrity ensures non-repudiation. If the provenance of the evidence is certain, the threat actor identified by analysis of the evidence can't deny their actions. The hashes prove that no modification has been made to the image.
Summary 03:45-04:17
That's it for this lesson. In this lesson, we discussed data integrity and preservation. We reviewed the order of volatility, which requires that evidence be captured from more volatile to less volatile. We also talked about ensuring acquisition integrity by leveraging the different states of persistent storage acquisition: live acquisition, static acquisition (powered down), and static acquisition (unplugged). Lastly, we discussed how proof of evidence integrity helps to ensure non-repudiation.
9.3.9 Forensic Investigation Facts
Digital forensic analysis involves examining evidence gathered from computer systems and networks to uncover relevant information, such as deleted files, timestamps, user activity, and unauthorized traffic. There are many processes and tools for acquiring different kinds of digital evidence from computer hosts and networks. These processes must demonstrate exactly how the evidence was acquired and that it is a true copy of the system state at the time of the event.
This lesson covers the following topics:
Due process and legal hold
Chain of custody
Reporting
Data sources
System memory acquisition
Data image acquisition
Due Process and Legal Hold
Digital forensics is the practice of collecting evidence from computer systems to a standard that will be accepted in a court of law. Forensics investigations are most likely to be launched to prosecute crimes arising from insider threats, notably fraud or misuse of equipment. Prosecuting external threat sources is often difficult, as the threat actor may well be in a different country or have taken effective steps to disguise their location and identity. Such prosecutions are normally initiated by law enforcement agencies,
where the threat is directed against military or governmental agencies or is linked to organized crime.
Due process is a term used in US and UK common law to require that people only be convicted of crimes following the fair application of the laws of the land. More generally, due process can be understood to mean having a set of procedural safeguards to ensure fairness. This principle is central to forensic investigation. If a forensic investigation is launched (or if one is a possibility), it is important that technicians and managers are aware of the processes that the investigation will use. It is vital that they can assist the investigator and that they do not do anything to compromise the investigation. In a trial, defense counsel will try to exploit any uncertainty or mistake regarding the integrity of evidence or the process of collecting it.
Legal hold refers to the fact that information that may be relevant to a court case must be preserved. Information subject to legal hold might be defined by regulators or industry best practice, or there may be a litigation notice from law enforcement or lawyers pursuing a civil action. This means that computer systems may be taken as evidence, with all the obvious disruption to a network that entails. A company subject to legal hold will usually have to suspend any routine deletion/destruction of electronic or paper records and logs.
Chain of Custody
The host devices and media taken from the crime scene should be labeled, bagged, and sealed, using tamper-evident bags. It is also appropriate to ensure that the bags have antistatic shielding to reduce the possibility that data will be damaged or corrupted on the electronic media by electrostatic discharge (ESD). Each piece of evidence should be documented by a chain of custody form. The chain of custody documentation records where, when, and who collected the evidence, who subsequently handled it, and where it was stored. This establishes integrity and proper handling of evidence. When security breaches go to trial, the chain of custody protects an organization against accusations that evidence has either been tampered with or is different than it was when it was collected. Every person in the chain who handles evidence must log the methods and tools they used.
The evidence should be stored in a secure facility; this not only means access control, but also environmental control, so that the electronic systems are not damaged by condensation, ESD, fire, and other hazards.
Reporting
Digital forensics reporting summarizes the significant contents of the digital data and the
conclusions from the investigator's analysis. It is important to note that strong ethical principles must guide forensics analysis:
Analysis must be performed without bias. Conclusions and opinions should be formed only from the direct evidence under analysis.
Analysis methods must be repeatable by third parties with access to the same evidence.
Ideally, the evidence must not be changed or manipulated. If a device used as evidence must be manipulated to facilitate analysis (disabling the lock feature of a mobile phone or preventing a remote wipe, for example), the reasons for doing so must be sound and the process of doing so must be recorded.
Defense counsel may try to use any deviation of good ethical and professional behavior to have the forensics investigator's findings dismissed.
A forensic examination of a device that contains electronically stored information (ESI) entails a search of the whole drive, including both allocated and unallocated sectors, for instance. E-discovery is a means of filtering the relevant evidence produced from all the data gathered by a forensic examination and storing it in a database in a format such that it can be used as evidence in a trial. E-discovery software tools have been produced to assist this process. Some of the functions of e-discovery suites are as follows:
Identify and de-duplicate files and metadata —many files on a computer system are "standard" installed files or copies of the same file. E-discovery filters these types of files, reducing the volume of data that must be analyzed. (A minimal hashing sketch follows this list.)
Search —allow investigators to locate files of interest to the case. As well as keyword search, software might support semantic search. Semantic search matches keywords if they correspond to a particular context.
Tags —apply standardized keywords or labels to files and metadata to help organize the evidence. Tags might be used to indicate relevancy to the case or part of the case or to show confidentiality, for instance.
Security —at all points, evidence must be shown to have been stored, transmitted, and analyzed without tampering.
Disclosure —an important part of trial procedure is that the same evidence be made available to both plaintiff and defendant. E-discovery can fulfill this requirement. Recent court cases have required parties to a court case to provide searchable ESI rather than paper records.
Data Sources
The following table outlines several data sources for forensic investigations along with their descriptions.
Source
Description
Dashboards
An event dashboard provides a console to work from for day-to-day incident response. It provides a summary of information drawn from the underlying data sources to support some work tasks. Separate dashboards can be created to suit many different purposes. An incident handler's dashboard will contain uncategorized events that have been assigned to their account, plus visualizations (graphs and tables) showing key status metrics. A manager's dashboard would show overall status indicators, such as number of unclassified events for all event handlers.
Log data
Log data is a critical resource for investigating security incidents. As well as the log format, you must also consider the range of sources for log files, and know how to determine what type of log file will best support any given investigation scenario.
Host operating system logs
An operating system (OS) keeps a variety of logs to record events as users and software interact with the system. Different log files represent different aspects of system functionality. These files are intended to hold events of the same general nature. Some files hold events from different process sources; others are utilized by a single source only.
Linux logs
Linux logging can be implemented differently for each distribution. Some distributions use syslog to direct messages relating to a particular subsystem to a flat text file. Other distributions use journald as a unified logging system with a binary, rather than plaintext, file format. Journald messages are read using the journalctl command, but journald can also be configured to forward some messages to text files via syslog.
Windows logs
The three main Windows event log files are the following:
Application—events generated by application processes, such as when there is a crash, or when an app is installed or removed.
Security—audit events, such as a failed login or access to a file being denied.
System—events generated by the operating system's kernel processes and services, such as when a service or driver cannot start, when a service's startup type is changed, or when the computer shuts down.
Application logs
An application log file is simply one that is managed by an application rather than the OS. The application may use Event Viewer or syslog to write event data using a standard format, or it might write log files to its own application directories in whatever format the developer has selected.
In Windows Event Viewer, there is a specific application log, which can be written to by any authenticated account. There are also separate custom application and service logs, which are managed by specific processes. The app developer chooses which log to use, or whether to implement a logging system without using Event Viewer. Check the product documentation to find out where events for a particular software app are logged.
Endpoint logs
An endpoint log is likely to refer to events monitored by security software running on the host, rather than by the OS itself. This can include host-based firewalls and intrusion detection, vulnerability scanners, and antivirus/antimalware protection suites. Suites that integrate these functions into a single product are often referred to as an endpoint protection platform (EPP), endpoint detection and response (EDR), or extended detection and response (XDR). These security tools can be directly integrated with a SIEM using
agent-based software.
Network logs
Network logs are generated by appliances such as routers, firewalls, switches, and access points. Log files will record the operation and status of the appliance itself—the system log for the appliance—plus traffic and access logs recording network behavior.
IPS/IDS logs
An IPS/IDS log records an event when a traffic pattern is matched to a rule. As this can generate a very high volume of events, it might be appropriate to log only high sensitivity/impact rules. As with firewall logging, a
single packet might trigger multiple rules.
File
File metadata is stored as attributes. The file system tracks when a file was created, accessed, and modified. A file might be assigned a security attribute, such as marking it as read-only or as a hidden or system file. The ACL attached to a file showing its permissions represents another type of attribute. Finally,
the file may have extended attributes recording an author, copyright information, or tags for indexing/searching.
Web
When a client requests a resource from a web server, the server returns the resource plus headers setting or describing its properties. Also, the client can include headers in its request. One key use of headers is to transmit authorization information, in the form of cookies. Headers describing the type of data returned (text or binary, for instance) can also be of interest. The contents of headers can be inspected using the standard tools built into web browsers. Header information may also be logged by a web server.
Email
An email's Internet header contains address information for the recipient and sender, plus details of the servers handling transmission of the message between them. When an email is created, the mail user agent (MUA) creates an initial header and forwards the message to a mail delivery agent (MDA). The MDA
should perform checks that the sender is authorized to issue messages from the domain. Assuming the email isn't being delivered locally at the same domain, the MDA adds or amends its own header and then transmits the message to a message transfer agent (MTA). The MTA routes the message to the recipient, with the message passing via one or more additional MTAs, such as SMTP servers operated by ISPs or mail security gateways. Each MTA adds information to the header.
Headers aren't exposed to the user by most email applications. You can view and copy headers from a mail client via a message properties/options/source command. MTAs can add a lot of information in each received header, such as the results of spam checking. If you use a plaintext editor to view the header, it can be difficult to identify where each part begins and ends. Fortunately, there are plenty of tools available to parse headers and display them in a more structured format. One example is the Message Analyzer tool, available as part of the Microsoft Remote Connectivity Analyzer (testconnectivity.microsoft.com/tests/o365). This will lay out the hops that the message took more clearly and break out the headers added by each MTA.
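As a rough illustration of what header parsing involves (the Message Analyzer tool above is the more practical option for real messages), Python's standard email module can split a raw header block into individual fields. The sample headers below are invented for the example.

from email import message_from_string

# Invented, minimal raw message headers for illustration only.
raw_headers = """Received: from mta2.example.net (mta2.example.net [203.0.113.9])
 by mx.example.com with ESMTP; Tue, 02 Apr 2024 10:15:03 +0000
Received: from mail.example.org (mail.example.org [198.51.100.7])
 by mta2.example.net with ESMTP; Tue, 02 Apr 2024 10:14:58 +0000
From: sender@example.org
To: recipient@example.com
Subject: Quarterly report

"""

msg = message_from_string(raw_headers)

# Received headers are prepended by each MTA, so reversing them shows the
# path the message took from origin toward the destination.
for hop in reversed(msg.get_all("Received", [])):
    print(hop)

print(msg["From"], "->", msg["To"], ":", msg["Subject"])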
System Memory Acquisition
System memory is volatile data held in Random Access Memory (RAM) modules. Volatile means that the data is lost when power is removed. A system memory dump creates an image file that can be analyzed to identify the processes that are running, the contents of temporary file systems, registry data, network connections, cryptographic keys, and more. It can also be a means of accessing data that is encrypted when stored on a mass storage device.
A specialist hardware or software tool can capture the contents of memory while the host is running. Unfortunately, this type of tool needs to be preinstalled as it requires a kernel mode driver to dump any data of interest. Various commercial tools are available to perform system memory acquisition on Windows.
Data Image Acquisition
Disk image acquisition refers to acquiring data from nonvolatile storage. Nonvolatile storage includes hard disk drives (HDDs), solid state drives (SSDs), firmware, other types of flash memory (USB thumb drives and memory cards), and optical media (CD, DVD, and Blu-ray). This can also be referred to as device acquisition, meaning the SSD
storage in a smartphone or media player. Disk acquisition will also capture the OS installation if the boot volume is included.
Given sufficient time at the scene, an investigator might decide to perform both a live and static acquisition. Whichever method is used, it is imperative to document the steps taken and supply a timeline and video-recorded evidence of actions taken to acquire the
evidence.
It is vital that the evidence collected at the crime scene conforms to a valid timeline. Digital information is susceptible to tampering, so access to the evidence must be tightly
controlled. Video recording the whole process of evidence acquisition establishes the provenance of the evidence as deriving directly from the crime scene.
To obtain a forensically sound image from nonvolatile storage, the capture tool must not
alter data or metadata (properties) on the source disk or file system. Data acquisition would normally proceed by attaching the target device to a forensics workstation or field
capture device equipped with a write blocker. A write blocker prevents any data on the disk or volume from being changed by filtering write commands at the driver and OS level.
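To demonstrate later that an acquired image has not changed, investigators typically record a cryptographic hash of the image at capture time and verify it again before analysis. The following is a minimal Python sketch of that idea, not a substitute for a validated forensic tool; the file path is hypothetical.

import hashlib

def hash_image(path, algorithm="sha256", chunk_size=1 << 20):
    """Hash a (potentially very large) image file in chunks."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

# Hypothetical usage: record the value at acquisition time, re-check before analysis.
# print(hash_image("evidence/disk01.img"))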
Click one of the buttons to take you to that part of the video.
Investigating Security Incidents 00:00-05:36
When it comes to investigating security incidents, there are various guidelines to think about. First of all, one of the big ones is chain of custody. Whenever you have any sort of evidence, or data, you have to be able to tell where it came from and whose hands it's been in. You know, where it's been. Because in a court of law, or even in a less formal situation than that, there'll be all sorts of questions
as to, okay, you have evidence, has it been tampered with? Has it been accidentally, you know, altered in some sort of way?
So the idea is that as you get evidence, you have to find some sort of way to bag and tag it. And, literally, there are times when there's a server that they know was the scene of some form of crime, child pornography or something like that. And they literally bag and tag that thing so that they know nobody has gotten into it. Also, whenever you take a disk image or a network packet capture, or a memory dump, there are ways of marking that so that you know it has certain timestamps. So there are guidelines for taking this information, so that you have a proper chain of custody. But let
me give you a couple of examples of some of the procedures that happen, or some of the things that you can look at in the case of an attack.
In this case, I've got a packet capture of the EternalBlue attack. This is basically an attack against Microsoft-oriented systems; it happens on port 445. So I've got a simple packet capture. This could be seen as evidence of a crime. And in fact, these packets that are in red: Wireshark's not an intrusion detection device, but it knows a strange packet when it sees it. This is actually the beginning of that attack, and you can even do an analysis of some of those statistics and do an I/O graph, and you can see where there's normal traffic and then there was a spike here. That's where the attacking system attacked the victim.
Once you have information like this, you can certainly save that packet capture. You can save it like I have. But in that case, you then would want to do something to make sure it has not been altered itself. You could take an MD5 sum or a SHA-512 hash of that file. Those hashes are extremely sensitive to the contents of the file, so even if one little bit or byte has been changed, you know that the resulting hash would change, and then you know that that information has been altered in some way, and it's no longer valid. But, nevertheless, you find ways to keep this information as secure as possible.
There are additional sources that you'll look at, not just network traffic. Let's say that there has been some sort of attack, and you have actually taken an image, either through the dd command, which is a Linux command that duplicates a disk (the "disk duplicator"), or you can grab an image in any sort of way. And what I've done is I've grabbed an image of an old Dell Latitude notebook computer that was the target of a couple of attacks. And there's also some suspicious behavior on the part of the user. What I've done is I've loaded it into an application called Autopsy, which is built on The Sleuth Kit. I simply added a particular data source, I went to a disk image file and I went
through here and was able to add this particular image here. The reason why I'm not showing you that process is it took quite some time for this application to go through and read that very large disk. But now that I've mounted that disk, I can go in and do basically a forensic analysis of what happened on this particular volume.
So you can see I'm taking a look at a Windows system here. You can just tell by the program files, the recycler. I can now go in and I can determine what has happened here. What has been deleted, 'cause some of these files, whether they be images or sound files or dynamic libraries, they can help me sleuth my way to determine what type
of attack has happened. Or what this person using this disk has been doing. You know, are there particular videos that could be incriminating in any way? Or are there basically
users that have been added that should not have been added?
But once again, when you take an autopsy, as it were, of a disk, you have to make sure that you know its provenance. You have to know where this information came from. So there are various tools that you can use: Wireshark at the network level, this Autopsy application, the Forensic Toolkit, EnCase, and all of their competitors. All of these types of tools can help you conduct forensic analysis of what has happened. And in many cases you don't have the time to do that kind of forensic analysis. You simply need to get that system back up and running, restore from
backup or another image, what have you. But when you need to do that kind of forensic
analysis, there are certain procedures that you do have to follow.
9.4.1 Redundancy
Click one of the buttons to take you to that part of the video.
Redundancy 00:00-00:29
Security architecture resilience refers to the design and implementation of systems and networks in a way that allows them to withstand and recover quickly from disruptions or attacks. This includes redundancy, fail-safe mechanisms, and robust incident response plans. By building resilience into the security architecture, cybersecurity teams ensure that even if a breach occurs, the impact is minimized, and normal operations can be restored quickly.
Redundancy Strategies 00:29-00:46
Redundancy strategies are essential to disaster recovery and business continuity planning. These strategies include continuity of operations planning, which involves developing processes and procedures to ensure critical business functions can continue during and after a disruption. Let's
look at some of these strategies.
High Availability 00:46-01:32
High availability (HA) clustering uses redundant systems that can automatically take over operations in case of failure, minimizing downtime. High availability (HA) is crucial in IT infrastructure, ensuring systems remain operational and accessible with minimal downtime. It involves designing and implementing hardware components, servers, networking, data centers, and physical locations for fault tolerance and redundancy. In a high-availability setup, redundant hardware components, such as power supplies, hard drives, and network interfaces, reduce the risk of failure by allowing the system to continue functioning if one component fails. Servers are often deployed in clusters or paired configurations, which allows automatic failover from a primary server to a secondary server in case of an issue.
Power Redundancy 01:32-02:04
Power redundancy ensures critical infrastructure, such as data centers, has backup power sources to continue operations during an outage. All types of computer systems require a stable power supply to operate. Electrical events, such as voltage spikes or surges, can crash computers and network appliances, while loss of power from under-voltage events or power failures will cause equipment to fail. Power management means deploying systems to ensure that equipment is protected against these events and that network operations remain uninterrupted or recover quickly.
Vendor Diversity 02:04-03:15
Vendor diversity is essential for several reasons, offering benefits not only in terms of cybersecurity but also in business resilience, innovation, and competition. Relying on a single vendor for all software and hardware solutions can create a single point of failure. The entire infrastructure may be
at risk if a vulnerability is discovered in that vendor's products. Vendor diversity mitigates the risk
associated with vendor lock-in. It ensures that an organization's operations aren't solely reliant on one vendor's products or services. Diverse vendors bring different perspectives, ideas, and technologies.
Vendor diversity promotes healthy competition in the market, which can lead to better pricing, improved product features, and higher-quality customer support. Different vendors offer unique solutions that cater to specific needs, and having a diverse vendor ecosystem allows organizations to choose the best fit for their requirements. Vendor diversity helps spread the risk associated with
potential product or service failures, security breaches, and other issues. In some industries, regulations or standards may require organizations to maintain vendor diversity to ensure compliance and reduce the risk of supply chain disruptions or security breaches.
Defense-In-Depth 03:15-04:13
Defense-in-depth is a comprehensive cybersecurity strategy that emphasizes the implementation of multiple layers of protection to safeguard an organization's information and infrastructure. This approach is based on the principle that no single security measure can completely protect against all
threats. By deploying a variety of defenses at different levels, organizations can create a more resilient security posture that can withstand a wide range of attacks. For example, a defense-in-
depth strategy might include perimeter security measures such as firewalls and intrusion detection systems to protect against external threats. Organizations can implement segmentation, secure access controls, and traffic monitoring at the network level to prevent unauthorized access and contain potential breaches. Endpoint security solutions, such as antivirus software and device hardening, help protect individual devices. At the same time, regular patch management ensures that software vulnerabilities are addressed promptly.
Redundancy Testing 04:13-05:21
Regular testing, including tabletop exercises, failover tests, and simulations, is essential to identify vulnerabilities, evaluate response plans, and improve redundancy measures. By incorporating redundancy strategies, organizations can reduce risks, minimize downtime, and ensure the continuity of their critical business functions.
Testing high availability, load balancing, and failover technologies is critical. It assesses the ability to remain operational during heavy workloads, component failures, or scheduled maintenance. Load testing incorporates specialized software tools to validate a system's performance under expected or
peak loads and identify bottlenecks or scalability issues.
Failover testing focuses on validating failover processes to ensure a seamless transition between primary and secondary infrastructure. Testing monitoring systems validates effective detection of and response to failures and performance issues. Robust testing practices allow organizations to ensure that high availability, load balancing, and failover technologies effectively fulfill their purpose to minimize unexpected outages and maximize performance.
Summary 05:21-05:37
That's it for this lesson. In this lesson, we discussed various redundancy strategies, including high availability, power redundancy, vendor diversity, and defense in depth. We reviewed the importance of testing high availability, load balancing, and failover technologies.
9.4.2 Redundancy Facts
This lesson covers the following topics:
Implement secure network designs
Manage redundant power options
Implement Secure Network Designs
The following table describes secure network designs.
Secure Network Designs
Description
Load balancing
A process that distributes processing among multiple nodes.
Active/active
Two load balancers working in tandem to distribute network traffic.
Active/passive
Two load balancers with one actively working and the second in listening mode to take over if the first one becomes unavailable.
Power scheduling
Power scheduling configures active power redundancy so that backup power is supplied to the network when a primary power facility goes down, preventing total loss of power during catastrophic events.
Virtual IP (VIP)
An IP address that is not assigned to an endpoint. VIP is used for load balancing. It typically uses NAT
IP address assignment.
Geographic dispersal
The use of multiple physical locations to store data, mitigating downtime caused by an outage or disaster at any single location.
Multipath
A fault-tolerance technique that provides multiple physical paths between a CPU and a mass-storage appliance.
Manage Redundant Power Options
Redundant power options are vital. A network without power is useless. Common power
options found in datacenters include:
Uninterruptible power supply (UPS). A UPS is a stand-alone bank of batteries that allows for the graceful shutdown of network appliances when power goes out.
Generator. A generator is a large-scale device that provides power for an extended period of time, normally between 24 and 48 hours.
Dual supply. A dual power supply is common in network appliances like servers and firewalls. It allows the device to keep running if one supply fails and supports hot-swapping.
Managed power distribution unit (PDU). A managed power distribution unit is a rack-
mounted unit that distributes power on a large scale such as a data center.
9.4.3 Hardware Clustering
Click one of the buttons to take you to that part of the video.
Clustering 00:00-00:27
I'll spend a few minutes talking about clustering. Clustering can be an effective way to implement a disaster-recovery plan as well as a way to improve your productivity. A cluster is a group of interconnected servers, also known as nodes, that appear to be a single system to the operating environment. Although clustering can be done on virtual machines, I'll focus this lesson on the unique characteristics of clustering on physical computers.
Clustering Benefits 00:27-01:10
Hardware clustering provides several benefits. For example, when you use clustering, the throughput and response time are dramatically improved. Since the cluster's nodes appear to be one
system, if one node fails, the others in the cluster still provide the services you need and redistribute the workload among the remaining servers. To provide additional performance, more nodes can be added. Theoretically, there's no limit to the number of nodes you can have in your cluster. But you must have software that supports clustering. This software could be built into the operating system itself, such as with Windows Server 2019. In other cases, you may have to purchase a program to help you set up and manage your clusters.
With that introduction, I'll show you how clustering works.
Clustering Implementation 01:10-02:09
In a typical clustering implementation, there are at least two nodes. They can be connected in multiple ways in order to act seamlessly with each other. First, since the nodes provide services to the workstations that reside on the production network, the clustered nodes are directly connected to
the production network. For example, a user at this workstation can access the services on the clustered nodes through the production network.
To increase performance, clustered nodes often have a second network card, allowing them to also be connected to each other through a dedicated network. As you can see here, this network is isolated from the production network. This means that the clustered nodes connected to this dedicated network don't have to compete for bandwidth with the production traffic. When you work with high-availability clusters, this network is also referred to as a heartbeat network. I'll talk more about that in a bit. But, keep in mind that although this is the ideal setup, you don't have to use this second dedicated network. Instead, you could choose to do your clustering over the production network.
Common Storage 02:09-02:49
Depending on the cluster type you choose and the cluster's purpose, the nodes in your cluster could also share a common storage that's accessible through a storage area network, or SAN. When
used, a SAN is often connected to the cluster using Fibre Channel connections, which are fiber optic. These connections allow the devices to communicate very quickly with the shared storage. In this setup, the shared storage appears to the operating system as if it were storage installed within the server itself. But in reality, both servers share the same disk storage. This has important
implications. It means that whenever Server A writes information to the SAN, it's immediately available to Server B and vice versa.
High-Availability Clusters 02:49-04:50
When planning your cluster, keep in mind that there are a few different types of clusters you can implement.
One commonly used cluster is called a high-availability cluster, or HA cluster. This type of cluster is also known as a failover cluster. The idea behind this specific cluster type is to eliminate downtime when a computer system in the cluster fails. Although the most common size of an HA cluster is two nodes, an HA cluster can have many more.
A high-availability cluster typically uses what's known as an active/passive configuration. With this type of configuration, the active server, or primary server, provides the services to the production network while the passive server, or standby server, waits in the background. If the primary server fails, the passive server becomes the new active server and provides the services to the production network. In this type of configuration, the passive node must be a fully redundant instance of the active node and use the same shared storage. This way, any node in the cluster has access to the same data.
To monitor when the passive server should take over, HA clusters make sure that the other servers in the cluster are alive by sending what are called heartbeats over a dedicated heartbeat network. For example, Server A, our primary server, continually lets Server B know that it's up and running by sending these heartbeats. If Server A fails, Server B no longer hears a heartbeat coming from Server A and assumes that Server A has gone down. It immediately takes over and becomes the new active server. This is possible because both servers use the same shared storage.
Depending on how close together your heartbeat intervals are set, it may take only a few seconds for a passive server to start providing the services that the active server was providing. As such, the user usually notices little to no downtime. But, keep in mind that whatever was in RAM on the failed server is lost. If the server that failed is fixed and brought back online, it becomes the passive server and listens for heartbeats from the current active server.
Load-Balancing Clusters 04:50-06:14
Another type of cluster you can use is called a load-balancing cluster, which works differently from a high-availability cluster. In a load-balancing cluster, all nodes are always active participants. This is known as an active/active configuration. In this type of cluster, all computers share in the processing workload. In a way, you can think of a load-balancing cluster as a type of supercomputer system. In other words, the processing tasks are distributed among all the nodes within the cluster. Companies that provide web server access to a large clientele typically implement load-balancing clusters to assign the many different queries to different nodes. This optimizes the responses to these requests.
Let's look at an example.
First, notice that instead of using a SAN to provide a common disk storage, each computer has its own disk storage. This isn't a requirement of a load-balancing cluster, but since a SAN can be expensive, some companies might not choose to use them. Still, using a SAN is probably the fastest
and most effective way to implement a cluster when feasible.
In some cases, load-balancing clusters might also have a separate device known as a load balancer, which is used to determine which cluster node gets the current request. Load balancing uses an algorithm to determine which server in the cluster should service the request.
Similar to the example, some implementations use a round-robin approach where Server A gets the first request, Server B gets the second, and Server A gets the third.
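As a toy illustration of the round-robin idea only (not how any particular load balancer is implemented), the selection logic amounts to cycling through the server pool; the server names are made up.

import itertools

# Hypothetical two-node pool, matching the Server A / Server B example
servers = ["Server A", "Server B"]
next_server = itertools.cycle(servers)

def assign(request_id):
    """Return the node that should service this request."""
    return next(next_server)

for request_id in range(1, 4):
    print(f"Request {request_id} -> {assign(request_id)}")
# Request 1 -> Server A, Request 2 -> Server B, Request 3 -> Server A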
Cluster Linking 06:14-06:46
The systems in a load-balancing cluster can be loosely linked or tightly linked. The tighter the link, the more they act as one computer system. In a loosely linked cluster, each system operates autonomously, but also in conjunction with the other systems at the same time. In a tightly linked system, the systems function as one system called a supercomputing cluster. These systems pool their CPU and storage resources, and they might even pool their memory together so that various processing tasks are distributed between the cluster members.
Hardware Compatibility 06:46-07:29
A key thing to remember when you implement clustering is that the more tightly integrated the systems, the more identical the hardware needs to be.
If you're using a loosely linked cluster, you can use hardware that's slightly more disparate. For example, you can use servers from different manufacturers. But, to implement a tightly linked cluster, the systems need to be identical. So, they should be from the same manufacturer, the
same make and model, same processor, same amount of storage, same amount of RAM, and so on.
I don't have time to go into specific clustering implementations on various operating systems. Just be
aware that most of your commonly used network operating systems do have some type of clustering solution available, whether it's built into the product itself or whether it's available from a third party.
Summary 07:29-07:48
That's it for this lesson. In this lesson, we gave you an overview of how clustering works. We talked about the role of a cluster and several clustering implementations. Then we looked at high-
availability and load-balancing cluster types, and we talked about how you create a supercomputing cluster, or tightly linked cluster, from a load-balancing cluster.
9.4.4 Clustering Facts
This lesson covers the following topics:
Virtual IP
Application clustering
Where load balancing distributes traffic between independent processing nodes, clustering allows multiple redundant processing nodes that share data with one another
to accept connections. This provides redundancy. If one of the nodes in the cluster stops working, connections can fail over to a working node. To clients, the cluster appears to be a single server. A load balancer distributes client requests across available server nodes in a farm or pool. It is generally associated with managing web traffic, whereas clusters provide redundancy and high availability for systems such as databases, file servers, etc.
Virtual IP
For example, an organization might want to provision two load balancer appliances so that if one fails, the other can handle client connections. Unlike load balancing with a single appliance, the public IP used to access the service is shared between the two instances
in the cluster. This arrangement is called a virtual IP or shared or floating address. The instances are configured with a private connection, on which each is identified by its "real" IP address. This connection runs a redundancy protocol, such as Common Address Redundancy Protocol (CARP), enabling the active node to "own" the virtual IP and respond to connections. The redundancy protocol also implements a heartbeat mechanism to allow failover to the passive node if the active one should suffer a fault.
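To make the heartbeat idea concrete, here is a deliberately simplified Python sketch of the passive node's logic: it promotes itself if it has not heard from the active node within a timeout. Real redundancy protocols such as CARP handle this, plus ownership of the virtual IP, at the network level; the timeout value and names below are assumptions for illustration.

import time

HEARTBEAT_TIMEOUT = 3.0  # seconds without a heartbeat before failover (assumed value)

class PassiveNode:
    """Simplified passive-node logic; real protocols (e.g., CARP) do this on the wire."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False

    def receive_heartbeat(self):
        # Called each time a heartbeat arrives from the active node
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Called periodically; promote this node if the active one has gone silent
        silent_for = time.monotonic() - self.last_heartbeat
        if not self.active and silent_for > HEARTBEAT_TIMEOUT:
            self.active = True  # in a real cluster: claim the virtual IP and workload
            print(f"No heartbeat for {silent_for:.1f}s -- taking over as active node")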
With active/passive clustering, if one node is active, the other is passive. The most significant advantage of active/passive configurations is that performance is not adversely affected during failover. However, there are higher hardware and operating system costs because of the unused capacity.
An active/active cluster means that both nodes are processing connections concurrently.
This allows the administrator to use the maximum capacity from the available hardware while all nodes are functional. In the event of a failover, the workload of the failed node is
immediately and transparently shifted onto the remaining node. At this time, the workload on the remaining nodes is higher, and performance is degraded.
Application Clustering
Clustering is also very commonly used to provision fault-tolerant application services. If an application server suffers a fault in the middle of a session, the session state data will be lost. Application clustering allows servers in the cluster to communicate session information to one another. For example, if a user logs in on one instance, the next session can start on another, and the new server can access the cookies or other information used to establish the login.
9.5.1 Backup Types
Click one of the buttons to take you to that part of the video.
Backup and Restore 00:00-00:37
In this lesson, we'll discuss backing up data.
Backing up your data is absolutely critical. It must be done consistently and strategically. It's also essential to verify backups to ensure that they actually work and that you can restore your data.
You need to know three different types of backups. The first is the full backup, the second is the incremental backup, and the last is the differential backup. But before we dive into these, let's quickly
talk about a key component used with backups called the archive bit.
Archive Bit 00:37-01:01
The archive bit is a file attribute that is either on or off. It's used by backup systems to determine whether the file has been archived, that is, backed up. The archive bit also indicates whether a file has been
modified since the last backup. This is important when used with certain types of backups.
With that, let's explore the different types of backups.
Full Backup 01:01-01:53
Let's first discuss a full backup. A full backup backs up everything. It doesn't matter whether the archive bit is set or not on any file. However, as it backs up each file, it clears the archive bit to indicate that the file has been backed up.
In this way, you can determine if a file has been modified at a later point in time. If you were to modify a file that has been backed up, the archive bit would be set again, indicating to the backup software that this file has been modified since the last time it was backed up.
If a file is not modified after the full backup, the archive bit is left in a cleared state. This is important because other types of backups, such as incremental, take into consideration if this archive bit has been cleared when deciding whether it should back up a file or not.
Incremental Backup 01:53-05:04
An incremental backup backs up everything since the last full backup or since the last incremental backup. To determine whether or not a file has been modified since the last full or incremental backup, it looks at the archive bit. If the archive bit is turned on, that tells the backup software that the file has been modified since the last full or incremental backup, and needs to be backed up again. Once it's backed up the file, the archive bit is cleared.
Let's run through an example. Suppose you perform a full backup of the entire file system on Monday. As a result, every file in the file system has been backed up, and the archive bit on each file
is cleared. Then, on Tuesday, you perform an incremental backup. Because the incremental backup only backs up files with the archive bit set, it will only back up files that have changed since Monday. Accordingly, the incremental backup you made on Tuesday will be relatively small because
it will only contain one day's worth of changes. After backing up the changed files, the incremental backup will clear the archive bit again on the files that were backed up.
If you perform another incremental backup on Wednesday, it will once again look for any files with enabled archive bits, indicating that they've been modified since either the last full backup on Monday or since the last incremental backup on Tuesday. Therefore, Wednesday's incremental backup also only contains one day's worth of changes. Likewise, if you run another incremental
backup on Thursday, then only the changes that have been made since Wednesday will be backed up. The same is true on Friday.
The advantage of this strategy is that the daily incremental backups finish relatively quickly because we're only backing up one day's worth of changes. The full backup still takes some time to complete because all files are being backed up. For this reason, the full backup is usually executed over the weekend when the system is not heavily used. The incremental backups can occur each weekday. Because they don't take much time to complete, they typically don't interfere with day-to-day work.
Incremental backups have one significant drawback. The incremental backup strategy is the slowest type of backup for restoring data. For example, suppose we ran a full backup on Monday and then ran an incremental on Tuesday, Wednesday, Thursday, and Friday. Then the server crashes on Saturday, and we need to restore all the data back to the server after we've recovered from the crash.
The first thing that you must do is restore the full backup from Monday. Then you must restore every incremental backup in the order they were created. First, you must restore Tuesday's incremental backup, followed by Wednesday's, then Thursday's and Friday's backups. Each backup must be restored in the proper sequence to return the system to where it was before the crash. This can take hours or even days to complete, depending on how much data is involved.
Differential Backup 05:04-07:16
Another option is to use differential backups instead of incremental. A differential backup looks for files that have been modified since the last full backup. It does this by evaluating if the archive bit has been set on the files. If the archive bit is clear, it will not back up the file because it assumes that
it has not been changed since the last full backup. If the archive bit is set, the differential backup assumes the file has been modified and needs to be backed up.
The key difference is that a differential backup does not clear the archive bit after backing up a file. Therefore, it backs up everything modified since the last full backup, but not since the last differential backup. This has advantages and disadvantages.
Suppose you perform a full backup on Monday, just as with the incremental backup strategy. This clears the archive bits on all files. Then we perform a differential backup each day from Tuesday through Friday. Because a differential backup never clears any archive bits, Wednesday's differential backs up everything that has been modified since Monday's full backup, not just since Tuesday's differential. Thursday's differential again captures everything modified since Monday's full backup, and Friday's differential captures everything modified since Monday's full backup, including Tuesday's, Wednesday's, and Thursday's changes.
As you progress through the backup schedule, each differential backup takes progressively longer because it's backing up more data each time. The first differential backup completes in the same amount of time as an incremental would because we're only backing up one day's worth of
data. However, each subsequent differential takes longer.
The advantage of differential backups is the speed of restoring data. With differentials, all we do is restore the full backup followed by the last differential backup. For example, if a server were to crash
on Saturday, we would restore the full backup from Monday and then the last differential backup
from Friday. That's all we do because the last differential backup contains a backup of every single file that has been changed since the last full backup.
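The archive bit behavior described above can be summarized in a short, purely illustrative Python sketch; real backup software reads the file system attribute rather than a dictionary, and the file names are invented:

# Archive flag per file: True means "modified since it was last backed up"
files = {"report.docx": True, "notes.txt": True, "budget.xlsx": True}

def full_backup():
    copied = list(files)                 # copies everything...
    for name in files:
        files[name] = False              # ...and clears every archive bit
    return copied

def incremental_backup():
    copied = [n for n, changed in files.items() if changed]
    for name in copied:
        files[name] = False              # clears the bit on what it copied
    return copied

def differential_backup():
    # copies everything changed since the last FULL backup; bits are left set
    return [n for n, changed in files.items() if changed]

print("Mon full:", full_backup())          # all three files, all bits cleared
files["notes.txt"] = True                   # Tuesday's edit
print("Tue incr:", incremental_backup())    # just notes.txt; its bit is cleared again
files["report.docx"] = True                 # Wednesday's edit
print("Wed incr:", incremental_backup())    # just report.docx

# With differentials instead, the bits would stay set, so Wednesday's differential
# would contain notes.txt AND report.docx (everything changed since Monday's full).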
Mixing Backup Types 07:16-08:05
You should never mix incremental and differential backups in the same backup scheme. Both are used alongside a full backup, but if you combine incremental and differential backups with each other, you're going to have problems because of the way these different types of backups handle the archive bit. As a result, files that should have been backed up will be missed.
Another alternative for backing up data is to create a system image. Imaging programs create a bit-
level mirror of a particular hard disk or partition.
Newer versions of Windows, and all versions of Linux, include this functionality. Typically, a system image is created on a defined schedule. Because a system image backs up the entire hard disk or partition, images are rather slow to create. However, they are very fast to restore.
System Images 08:05-08:51
All you do is restore the image to the same hardware, or even to a new piece of hardware if you have a major hardware malfunction, and the system is back and available again in exactly the state it was when the image was created. This can be very useful in the event of a malware infection. Most of the time it's easier to re-image a machine that's been infected with malware rather than try to get rid of the malware.
There are other types of data to back up. For example, if you're managing a domain controller in a Windows network, then you need to back up not only the files on the server but all your Active Directory information as well. To do this, you can create a special type of backup on the domain controller called a system state backup.
System State Backup on a Domain Controller 08:51-09:24
If you are backing up sensitive information, such as your security logs, then you might want to consider the location of the backup. Instead of backing that data up to a device that can be modified in some way, find a more secure option. For example, it could be problematic if you were to back up sensitive information to a network share or perhaps a USB flash drive because of the potential of tampering. Instead, you should back up confidential information to a write-once type of media, such as a recordable DVD.
Backup Integrity Tests 09:24-09:39
A backup is worthless unless you can successfully restore data from it. You should periodically test restoring from your backups in a lab environment, just to make sure that everything works should a crisis occur.
Summary 09:39-10:12
That's it for this lesson. In this lesson, we talked about backup and restore. We first discussed the importance of backing up your data. We then examined the three backup strategies: full, incremental, and differential. We then mentioned using a system image to back up data. We talked
about backing up the system state on the domain controller. We discussed backing up sensitive information to write-once media. Then we emphasized the importance of verifying backups to make sure that they actually work.
9.5.2 Backup Storage Options
Click one of the buttons to take you to that part of the video.
Backup Storage Options 00:00-00:23
Backups play an essential role in asset protection by ensuring the availability and integrity of an organization's critical data and systems. By creating copies of important information and storing them securely in separate locations, backups are a safety net in case of hardware failure, data corruption, or cyberattacks such as ransomware.
Backup Frequency 00:23-01:09
Many dynamics influence data backup frequency requirements, including data volatility, regulatory requirements, system performance, architecture capabilities, and operational needs. Organizations with highly dynamic data or stringent regulatory mandates may opt for more frequent backups to minimize the risk of data loss and ensure compliance. Conversely, businesses with relatively stable data or less rigorous regulatory oversight might choose less regular backups, balancing data protection, data backup costs, and maintenance overhead. Ultimately, the optimal backup frequency is determined by carefully assessing an organization's regulatory requirements, unique needs, risk tolerance, and resources.
On-Site and Off-Site Backups 01:09-02:25
The need for on-site and off-site backups must be balanced, as they're crucial in securing critical data and ensuring business continuity. On-site backups involve storing data locally—in the same location as the protected systems—on devices such as hard drives or tapes to provide rapid access and recovery in case of data loss, corruption, or system failures. On the other hand, off-site backups involve transferring data to a remote location to ensure protection against natural disasters, theft, and other physical threats to local infrastructure, as well as catastrophic system loss resulting from ransomware infection.
Ransomware poses a significant threat to businesses and organizations by encrypting vital data and demanding a ransom for its release. In many cases, ransomware attacks also target backup infrastructure, hindering recovery efforts and further exacerbating the attack's impact. Perpetrators often employ advanced techniques to infiltrate and compromise both primary and backup systems, rendering them useless when needed. Organizations can implement several strategies to defend against this risk, such as maintaining air-gapped backups physically disconnected from the network, thereby actively preventing ransomware from accessing and encrypting them.
Recovery Validation 02:25-03:43
Critical recovery validation techniques play a vital role in ensuring the effectiveness and reliability of backup strategies. Organizations can identify potential issues and weaknesses in their data recovery
processes by testing backups and making necessary improvements.
One common technique is the full recovery test, which involves restoring an entire system from a backup to a separate environment and verifying that the recovered system is fully functional. This method
helps ensure that all critical components, such as operating systems, applications, and data, can be restored and function as expected.
Another approach is the partial recovery test, where selected files, folders, or databases are restored
to validate the integrity and consistency of specific data subsets. Organizations can perform regular backup audits, checking the backup logs, schedules, and configurations to ensure backups are created and maintained as intended and required.
Furthermore, simulating disaster recovery scenarios, such as hardware failures or ransomware attacks, provides valuable insights into an organization's preparedness and resilience. Recovery validation strategies are essential because backups can be completed with "100% success" but mask issues until the backup set is used for recovery.
Encrypting Backups 03:43-04:39
Encryption of backups is essential for various reasons, primarily data security, privacy, and compliance. By encrypting backups, organizations add an extra layer of protection against unauthorized access or theft, ensuring that sensitive data remains unreadable without the appropriate decryption key. This is particularly crucial for businesses dealing with sensitive customer
data, intellectual property, or trade secrets, as unauthorized access can lead to severe reputational damage, financial loss, or legal consequences.
Copies of sensitive data stored in backup sets are often overlooked, so many industries and jurisdictions have regulations that mandate the protection of sensitive data stored in backups. Encrypting backups helps organizations meet these regulatory requirements and avoid fines, penalties, or legal actions resulting from noncompliance.
Summary 04:39-04:58
That's it for this lesson. In this lesson, we talked about data backups and backup frequency. We discussed the value of maintaining on-site and off-site backups. We talked about identifying potential
issues and weaknesses by performing recovery validation. We also discussed the importance of encrypting backups.
9.5.3 Backup Types and Storage Facts
This lesson covers the following topics:
Enterprise backups
Data deduplication
Backup frequency
Snapshots
Replication and journaling
Enterprise Backups
In an enterprise setting, simple backup techniques often prove insufficient to address large organizations' unique challenges and requirements. Scalability becomes critical when vast amounts of data must be managed efficiently. Simple backup methods may struggle to accommodate growth in data size and complexity.
Performance issues caused by simple backup techniques can disrupt business operations because they slow down applications while running and typically have lengthy recovery times. Additionally, enterprises demand greater granularity and customization to target specific applications, databases, or data subsets, which simple techniques often fail to provide.
Compliance and security requirements necessitate advanced features such as data encryption, access control, and audit trails that simplistic approaches typically lack. Moreover, robust disaster recovery plans and centralized management are essential for an enterprise backup strategy. Simple backup techniques might not support advanced features like off-site replication, automated failover, or streamlined management of the diverse systems and geographic locations that comprise a modern organization's information technology environment.
Critical capabilities for enterprise backup solutions typically include the following features:
Support for various environments (virtual, physical, and cloud)
Data deduplication and compression to optimize storage space
Instant recovery and replication for quick failover
Ransomware protection and encryption for data security
Granular restore options for individual files, folders, or applications
Reporting, monitoring, and alerting tools for effective management
Integration with popular virtualization platforms, cloud providers, and storage systems
Data Deduplication
Data deduplication describes a data compression technique that optimizes storage space by identifying and eliminating redundant data. It works by analyzing data blocks within a dataset and comparing them to find identical blocks. Instead of storing multiple copies of the same data, deduplication stores a single copy. It creates references or pointers to that copy for all other instances. Deduplication can be performed at different levels, such as file-level, block-level, or byte-level. Deduplication significantly minimizes storage requirements and improves data transfer efficiency, particularly in backup and data replication processes, by reducing the amount of duplicate data stored.
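The following Python sketch shows the core idea of block-level deduplication; the block size, storage layout, and file path are arbitrary choices for the example, not a description of any particular product.

import hashlib

BLOCK_SIZE = 4096  # arbitrary block size for the illustration

def deduplicate(path):
    """Split a file into blocks, storing each unique block only once."""
    store = {}      # digest -> block bytes (each unique block stored a single time)
    pointers = []   # ordered digests that reconstruct the original file
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)   # identical blocks are not stored twice
            pointers.append(digest)
    return store, pointers

# Reconstruction check (hypothetical path):
# store, pointers = deduplicate("backup.dat")
# original = b"".join(store[d] for d in pointers)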
Backup Frequency
Many dynamics influence data backup frequency requirements, including data volatility, regulatory requirements, system performance, architecture capabilities, and operational needs. Organizations with highly dynamic data or stringent regulatory mandates may opt for more frequent backups to minimize the risk of data loss and ensure compliance. Conversely, businesses with relatively stable data or less stringent regulatory oversight might choose less frequent backups, balancing data protection, data backup costs, and maintenance overhead. Ultimately, the optimal backup frequency is determined by carefully assessing an organization's regulatory requirements, unique needs, risk tolerance, and resources.
Snapshots
Snapshots play a vital role in data protection and recovery, capturing the state of a system at a specific point in time. Virtual Machine (VM), filesystem, and Storage Area Network (SAN) snapshots are three different types, each targeting a particular level of the storage hierarchy.
VM snapshots, such as those created in VMware vSphere or Microsoft Hyper-V, capture the state of a virtual machine, including its memory, storage, and configuration settings. This allows administrators to roll back the VM to a previous state in case of failures, data corruption, or during software testing.
Filesystem snapshots, like those provided by ZFS or Btrfs, capture the state of a file system at a given moment, enabling users to recover accidentally deleted files or restore previous versions of files in case of data corruption.
SAN snapshots are taken at the block-level storage layer within a storage area network. Examples include snapshots in NetApp or Dell EMC storage systems, which capture the state of the entire storage volume, allowing for rapid recovery of large datasets and applications.
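Whatever the layer, most snapshots rely on a copy-on-write idea: the snapshot costs almost nothing to take, and only blocks that change afterward consume extra space. The following toy, in-memory sketch illustrates that principle only; it is not how VMware, ZFS, or any SAN actually implements snapshots.

class CopyOnWriteVolume:
    """Toy volume supporting point-in-time snapshots via copy-on-write."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)      # live data: block number -> contents
        self.snapshots = []             # each snapshot: {block: original contents}

    def take_snapshot(self):
        self.snapshots.append({})       # nothing is copied until a block changes
        return len(self.snapshots) - 1  # snapshot id

    def write(self, block_no, data):
        # Preserve the original contents in every snapshot that hasn't saved it yet.
        for snap in self.snapshots:
            snap.setdefault(block_no, self.blocks.get(block_no))
        self.blocks[block_no] = data

    def read_snapshot(self, snap_id, block_no):
        # A snapshot returns preserved contents if the block changed, else live data.
        return self.snapshots[snap_id].get(block_no, self.blocks.get(block_no))

vol = CopyOnWriteVolume({0: "boot", 1: "data-v1"})
snap = vol.take_snapshot()
vol.write(1, "data-v2")
print(vol.read_snapshot(snap, 1))  # -> data-v1 (state at snapshot time)
print(vol.blocks[1])               # -> data-v2 (current state)

This is also why long-lived snapshots gradually consume more space: the more the live data diverges from the snapshot, the more original blocks must be preserved.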
Replication and Journaling
Replication and journaling are data protection methods that ensure data availability and integrity by maintaining multiple copies and tracking changes to data. Replication involves creating and maintaining exact copies of data on different storage systems or locations. Organizations can safeguard against data loss due to hardware failures, human errors, or malicious attacks by having redundant copies of the data. In the event of a failure, the replicated data can be utilized to restore the system to its original state.
A practical example of replication is database mirroring, where an organization maintains primary and secondary mirrored databases. Any changes made to the primary database are automatically replicated to the secondary database, ensuring data consistency and availability if the primary database encounters any issues.
On the other hand, journaling records changes to data in a separate, dedicated log known as a journal. Organizations can track and monitor data modifications and revert
to previous states if necessary. Journaling is especially useful for recovering from system crashes: after a full system backup is restored, the journal lets the system replay committed transactions and undo any incomplete ones that might have caused inconsistencies. This provides greater granularity for restores and greatly minimizes data loss. A practical example of journaling is a journaling file system, such as the Journaled File System (JFS) or the New Technology File System (NTFS) with journaling enabled. These file systems record all changes made to files, allowing for data recovery and consistency checks after unexpected system shutdowns or crashes.
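Here's a minimal sketch of the journaling idea, assuming a simple write-ahead log: each change is recorded before it is applied, so after a crash the system can replay committed entries and discard incomplete ones. Real filesystems such as NTFS or JFS keep far richer journal metadata than this.

# Minimal write-ahead journaling sketch: entries are recorded before being
# applied, so committed changes can be replayed after a crash and
# uncommitted ones discarded.
journal = []      # list of {"txn", "key", "value", "committed"}
datastore = {}

def begin_write(txn_id, key, value):
    journal.append({"txn": txn_id, "key": key, "value": value, "committed": False})

def commit(txn_id):
    for entry in journal:
        if entry["txn"] == txn_id:
            entry["committed"] = True
            datastore[entry["key"]] = entry["value"]

def recover(journal):
    """Rebuild state after a crash by replaying only committed entries."""
    recovered = {}
    for entry in journal:
        if entry["committed"]:
            recovered[entry["key"]] = entry["value"]
    return recovered

begin_write(1, "balance", 100)
commit(1)
begin_write(2, "balance", 250)   # crash before commit: entry stays uncommitted
print(recover(journal))          # -> {'balance': 100}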
Remote journaling, SAN replication, and VM replication are advanced data protection methods that maintain data availability and integrity across multiple locations and systems. Remote journaling creates and maintains a journal of data changes at a separate, remote location, allowing for data recovery and ensuring business continuity in case of local failures, natural disasters, or malicious attacks.
SAN replication duplicates data from one SAN to another in real time or near-real time, providing redundancy and protection against hardware failures, human errors, or data corruption. Replication can be synchronous, where each write is acknowledged only after it reaches the secondary array, guaranteeing consistency, or asynchronous, which is more cost-effective and tolerant of distance but allows the replica to lag slightly behind the primary.
Meanwhile, VM replication creates and maintains an up-to-date copy of a virtual machine on a separate host or location, ensuring that a secondary VM can quickly take over the workload in the event of a primary VM failure or corruption. By implementing these methods, organizations can bolster their data protection strategies, safeguarding against various risks and ensuring the availability and integrity of their critical data and systems.
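To illustrate the consistency trade-off between synchronous and asynchronous replication, here's a small sketch using an in-memory primary and replica (both hypothetical): the synchronous path acknowledges a write only after the replica has applied it, while the asynchronous path acknowledges immediately and ships the change later.

class Replica:
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

class Primary:
    def __init__(self, replica, synchronous=True):
        self.data = {}
        self.replica = replica
        self.synchronous = synchronous
        self.pending = []   # changes not yet shipped (async mode)

    def write(self, key, value):
        self.data[key] = value
        if self.synchronous:
            # Acknowledged only after the replica has it: no lag,
            # but every write pays the replication round-trip.
            self.replica.apply(key, value)
        else:
            # Acknowledged immediately; the replica catches up later and may lag.
            self.pending.append((key, value))
        return "acknowledged"

    def flush(self):
        while self.pending:
            self.replica.apply(*self.pending.pop(0))

replica = Replica()
primary = Primary(replica, synchronous=False)
primary.write("order-42", "paid")
print(replica.data)   # -> {} : the replica lags until the next flush
primary.flush()
print(replica.data)   # -> {'order-42': 'paid'}

Synchronous replication is usually limited to short distances, since every write waits on the replica's acknowledgment; asynchronous replication tolerates distance at the cost of a small window of potential data loss.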
9.5.4 Configure Network Attached Storage
Click one of the buttons to take you to that part of the video.
Configure a NAS for Data Backups 00:00-00:35
Network attached storage, or NAS, is a file-level storage device on your network. These devices can be useful for sharing data or even backing it up. Generally, there'll be several drives in the NAS device for redundancy. But if you want to make sure data isn't lost due to drive failure, a cloud sync is suggested.
TrueNAS, formerly known as FreeNAS, is open-source software. You can install this software on a physical server or a virtual machine.
Set Up NAS 00:35-01:44
TrueNAS makes it very easy to set up, as all we've done up to this point is install it, set an IP, and set a root password.
In order to use any disks for storage, we must configure a storage pool. To do so, go to 'Storage' and then 'Pools'. Then click 'ADD' and 'CREATE POOL'. Next, we're going to give it a name and select the two available disks we have. Most likely, a production NAS will have more storage and more available disks than what we're showing. Since there are only two disks, we're going to set up a 'Mirror' rather than a 'Stripe'. Click 'CREATE', 'Confirm', and 'CREATE POOL'.
Now that our pool is set up, we must create a Windows share. Go to 'Sharing' and then 'Windows Shares (SMB)'. In the path drop-down at the top, the 'main' folder will be selected since that was the name set on the storage pool. Nothing else is needed, but you could add a description for documentation if you'd like. Click 'SUBMIT'. A box will appear asking you to enable the SMB service since it wasn't on before.
Configure a User 01:44-02:42
Next, a user needs to be set up so some form of authentication can happen to access this newly created share. We'll set up a basic user under 'Accounts' and then 'Users'. Click 'ADD'. Our name here will be 'backup'. The same name will be used for the username as well. Enter a password and confirm it. As we scroll down, we need to give this user permission to our new share directory. Drop down the folder and select 'main'. All of the home directory permissions are fine for now, so we'll leave them. One thing is for certain - this user doesn't require shell login, so we want to disable it for security purposes. If you click over here and select 'nologin', it'll take care of this. Click 'SUBMIT' when finished. Our configuration is complete on the NAS, so we're going to hop over to our Windows machine to configure the rest.
Map Network Drive 02:42-04:07
In order to have this NAS easily accessible, we're going to map it as a network drive. To do so, go to
'File Explorer'. As we expand this, we're going to go to 'This PC'. Next, the 'Computer' tab at the top will give us the option to map a network drive. Select a drive letter of your choosing. The path in the folder will be the IP address or DNS name for the NAS. We're going to use the IP as we don't have DNS set up for this right now. Type '\\192.168.30.106', as this was the IP of our NAS. Now click 'Browse'. It'll take a second, and you might have to click on the device for the username and password prompt to come up. Enter the username and password we set up for the backup user. The
key thing is to check 'Remember my credentials'; otherwise, this mapping will ask for the password every time your computer reboots. Click 'OK' and then select the 'backup' folder. If you notice, TrueNAS creates a folder with the same name as the user, unless changed. Once we click 'Finish', the share will be mounted as the N: drive. Don't worry about the rest of the files in this directory, as they pertain to the user account on the TrueNAS side. Since we may have multiple computers backing up their data here, we'll just create a folder named 'computer_1'.
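Before pointing backups at the share, it can be worth confirming that the mapped drive is writable by the backup user. The quick check below uses only the Python standard library; the N: drive letter and 'computer_1' folder match this demo and would differ in your environment.

from pathlib import Path

# Assumed to match this demo: the NAS share mapped as N: with a computer_1 folder.
backup_root = Path(r"N:\computer_1")
test_file = backup_root / "write_test.txt"

try:
    backup_root.mkdir(exist_ok=True)          # create the folder if it isn't there yet
    test_file.write_text("backup target is writable")
    print("OK:", test_file.read_text())
    test_file.unlink()                        # clean up the probe file
except OSError as err:
    print("Cannot write to the mapped share:", err)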
Set File History 04:07-05:00
Now we can go set up some backup jobs. You do have the ability to use other backup solutions; however, we're just going to use the built-in File History. When we type 'backup' in the search bar, it brings us to our backup settings. Notice that when we click 'Add a drive', it says no usable drives were found. This doesn't mean it's broken; this menu typically looks for local secondary or removable drives. If we select 'More options' and then 'See advanced settings', we get the option to map File History to a network location. We can then navigate to our mapped drive by going to 'This PC', clicking on the 'N:' drive, and opening the 'computer_1' folder. Click 'OK', and then we can turn on File History.
Summary 05:00-05:11
That's it for this demo. In this demo, we showed you how to configure a NAS, set up an SMB share, and configure a mapped network drive for File History backups.
9.5.5 Implementing File Backups
Click one of the buttons to take you to that part of the video.
Create Backups in Windows 00:00-00:12
In this demonstration, we're going to look at two ways to back up files on a Windows system—using File History and using the legacy Backup and Restore utility.
File History 00:12-01:19
Let's take a look at setting up File History first. File History saves copies of your files so they can be recovered later if needed. Using the Windows search button, I'll look for file history and click to start it. As you can see, it's currently turned off. There's only one button to choose from. Once it's turned on, you can select Run now to start an immediate backup of the listed locations. As you can see, the
H: drive is currently selected as the storage device.
To the left, additional options are available. In addition to the Restore personal files option, which is used for retrieving copies after they've been backed up, there's Select drive. When clicked, the available devices are listed. In this case, only the H: drive is available. There's also an option to add a network location.
Another important setting is Exclude folders. Currently, the Pictures library is excluded from copies being kept. The Add and Remove buttons can be used to change this list. In Advanced settings, we can change how often copies are made and how many versions to keep. The defaults are Every hour and Forever, respectively.
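Conceptually, File History works by keeping timestamped copies of changed files so earlier versions can be retrieved later. The sketch below imitates that idea with a plain copy and a timestamp suffix; it's a simplified stand-in for the concept, not how File History actually stores its data.

import shutil
import time
from pathlib import Path

def keep_version(source: Path, history_dir: Path) -> Path:
    """Copy a file into the history folder with a timestamp suffix,
    mimicking the 'keep saved versions' idea behind File History."""
    history_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y-%m-%d_%H-%M-%S")
    versioned = history_dir / f"{source.stem}_{stamp}{source.suffix}"
    shutil.copy2(source, versioned)           # copy contents and metadata
    return versioned

# Example (paths are illustrative):
# keep_version(Path("C:/Users/demo/Documents/report.docx"), Path("N:/computer_1/history"))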
Backup and Restore (Windows 7) 01:19-02:32
The legacy Backup and Restore tool is designed to hold copies of data or an entire system as a disk image. It's recommended that the device holding the backups be external, or at least on a different storage device than the C: drive. In this case, I have a second device prepared. I'll browse to the Backup and Restore utility found under Control Panel. The tool's full name is Backup and Restore (Windows 7) since this is a legacy tool from the Windows 7 days. Once launched, I'll select Set up backup and then select the appropriate device. The H: drive is the device I intend to perform backups to, so I don't need to make any changes. I could, if necessary, click on the Save on
a network button and browse to a shared location for storage. I'll leave it as the H: drive and click Next.
At this point, the system wants me to choose which files and folders to back up. Since the default settings are good for our purposes in this demo, I'll leave it on Let Windows choose and click Next. On the next screen, we see a review of the options we've selected and a new Change schedule link. Currently, the backup is run on Sunday at 7:00 PM, which is good for our needs. To finish the configuration, we click Save settings and run backup.
Summary 02:32-02:44
That's it for this demonstration on backups. We covered two ways to back up files. First, we looked at backing up using File History, and then we looked at the legacy Windows 7 Backup and Restore utility.
9.5.8 Backup a Domain Controller
Click one of the buttons to take you to that part of the video.
Backing Up a Domain Controller 00:00-00:41
In this demo, we're going to back up our domain controller on a Windows Server. To start, we need to install the Windows Server Backup feature. Go to Manage and click Add Roles and Features.
We'll accept the defaults, clicking Next past everything until we get to Features. Then we'll scroll down to Windows Server Backup, select it, and click Next. On the confirmation window, click Install. The installation may take a few minutes.
Windows Server Backup 00:41-02:31
Once that's installed, we can go to Tools and scroll all the way down to Windows Server Backup. Using Windows Server Backup, you can schedule regular server backups, or you can run a single backup.
In this case, we're just going to choose Backup Once. However, a good disaster recovery program would do regular backups. We'll click Backup Once since we're not scheduling a backup. Since we haven't created any scheduled backups before, we must select Different Options.
Let's go ahead and click Custom, then Next. We'll click Add Items and select the System State.
Bare Metal Recovery will back up all the critical volumes on the computer, including the operating system and the data volumes. If you're doing a bare metal recovery, sometimes known as an all-critical backup, the backup can't be saved on any of the volumes being backed up. The same applies to the system state backup: we can't save it on the C: drive because that's where the system state lives.
The system state includes the registry, the Active Directory database, and the SYSVOL folder, so we need somewhere else to store this backup. I've connected an external drive to this system, so I'm going to click OK.
Click Next. You also have the ability to save backups to a remote shared folder; if you have shared storage on your network, you could save your backup there.
We're going to select a local drive since I have one plugged into this machine: the F: drive, labeled Backup. We'll click Next and then Backup. Once this backup finishes, which will take a while, we'll have a good backup of our Active Directory and SYSVOL.
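The same system state backup can also be started from the command line with the built-in wbadmin tool; the sketch below simply shells out to it from Python. The F: target matches this demo, the command must run from an elevated prompt on the server, and this is an illustration rather than a substitute for a scheduled backup.

import subprocess

# Runs the built-in wbadmin tool to capture the system state (registry,
# Active Directory database, and SYSVOL) to the F: drive used in this demo.
# Requires an elevated prompt on the server; adjust the target as needed.
result = subprocess.run(
    ["wbadmin", "start", "systemstatebackup", "-backupTarget:F:", "-quiet"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)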
If we ever need to restore the domain controller, we can come back to this backup later.
Summary 02:31-02:39
In this demonstration, we installed Windows Server Backup on our Windows Server and then backed up our domain controller.
9.5.9 Restoring Server Data from Backup
Click one of the buttons to take you to that part of the video.
Restore Server Data from Backup 00:00-00:10
In this demonstration, we'll explore the process of recovering user data on a server.
Use Windows Server Backup 00:10-01:56
In our File Explorer window, let's navigate to the Desktop, where we can find the Shares folder that we previously created.
Now, let's delete this folder, making it disappear. However, imagine that later on, we realize we need
it back. To retrieve it, we'll have to restore it from a backup. To initiate this process, let's go to Tools > Windows Server Backup.
In the Messages Box, we can see the available backup options. Let's click on the one we want to use and review its information. Once we're satisfied, click OK. Next, we'll click on "Recover" and then proceed with "Next."
Here, we can verify our selection to ensure it's the correct backup. Since it's the only one available, let's proceed by clicking "Next."
Note that we have several recovery options. We can recover files or folders, and if this were a Hyper-V Server, we could recover an entire virtual machine if needed. It's also possible to recover entire volumes.
Now, let's talk about application recovery. If you have backed up a server with installed applications like Exchange or SharePoint, you can restore the entire application without having to reinstall it from scratch; you only need to recover its data. This can also be useful for transferring an application from one server to another using the backup.
The last option is to recover specific files and folders. Let's click "Next."
So, we'll navigate to the specific folder we want to recover.
After selecting it, click "Next," and then choose to restore it to the original location. Although you have the option to save it elsewhere, we want it back in its original place.
Now, let's create copies of both versions just as a precautionary measure. There are several reasons not to overwrite existing versions.
Recover Files 01:56-02:07
If you're restoring a file that someone claims is missing, and they provide the wrong filename, you could accidentally overwrite a file they intended to keep. This could lead to future problems.
Create Copies 02:07-02:32
It's a wise practice to create copies to avoid data loss or accidental overwrites; otherwise, you might have to go back and recover even more data. Let's proceed by selecting "Next" and then "Recover."
Now, it confirms that the process is complete. Let's close and minimize the windows. You can now see that the folder has been successfully restored to its original location.
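The same 'create copies' precaution can be applied when restoring to a staging folder first: keep the existing file before replacing it so nothing is lost if the wrong file was restored. The helper below is a hypothetical illustration using only the Python standard library; the paths in the commented example are made up.

import filecmp
import shutil
from pathlib import Path

def safe_restore(recovered: Path, original: Path) -> None:
    """Keep the existing file as a .previous copy before replacing it,
    so nothing is lost if the wrong file was restored."""
    if original.exists():
        if filecmp.cmp(recovered, original, shallow=False):
            print("Restored copy is identical to the current file; nothing to do.")
            return
        shutil.copy2(original, original.with_name(original.name + ".previous"))
    shutil.copy2(recovered, original)
    print(f"Restored {original}; any previous version was kept alongside it.")

# Example (paths are illustrative):
# safe_restore(Path(r"D:\staging\Shares\notes.txt"), Path(r"C:\Users\demo\Desktop\Shares\notes.txt"))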
Summary 02:32-02:43
In this demonstration, we walked through the process of restoring data from backup using Windows Server Backup on Windows Server 2022.