Unit 2 Master's Notes
Western Governors University, Course D213 (Mathematics)
Jan 9, 2024
Uploaded by GradStudentMorgan
You should now be familiar with the terms "data" and "data literacy." As you learned in the last module, professionals who are data literate are better equipped to make decisions. However, part of making a decision is being able to identify the problem or problems central to it. You will encounter many opportunities to identify a problem in the workplace, and, as someone who must work with data to make decisions, you will soon learn that data can also help you identify associated problems.
Introduction to Problem Identification
Learning Objectives
The candidate describes a problem in an educational setting that can be addressed through data collection and analysis.
All organizations have problems or issues, and instructional and learning organizations are no different. During the next three modules, you will experience a case study in which you will learn about problem identification, data selection and collection, data analysis, and visual representations of data. You will then learn how to leverage data to draw conclusions that can be used to make sound decisions.
In this module, you will also learn how to select data sources through scholarly research to help identify a research problem. You will explore ways to select relevant, credible sources that can lead to an actionable problem statement.
After engaging with this module, you should be able to do the following:
Explain how data can inform instructional problem identification.
Explain how scholarly research can aid in instructional problem identification.
Write a problem statement that can be researched with data.
Describe data source selection strategies.
Data and Instructional Problem Identification
Learning Objectives
The candidate describes a problem in an educational setting that can be addressed through data collection and analysis.
As an educational professional, you will encounter a host of instructional problems. Those problems can come from students, stakeholders, project team members, or others. In this lesson, you will learn how to address issues that many instructors and learning professionals face. By reviewing data such as assessment scores and surveys, you will be better equipped to identify problems that can be further investigated with research.
Sometimes you will work with others as members of a professional learning community. Professional learning communities (PLCs) support collaboration among school personnel to stimulate student learning. The following case study provides details about such collaboration. You will follow the case study throughout the remainder of this course. PLCs are used both in schools and in private sector organizations to encourage collaborative efforts toward making meaningful instructional decisions.
It is your first day as a seventh-grade math teacher at Northern Oak Middle School (NOMS) in the city of Merrilton, which has a highly affluent population. In the first few minutes of your day, you learn that you are being placed into a professional learning community (PLC) with the principal, school psychologist, special education supervisor, and five other math teachers.
Once you are comfortable in the PLC, you learn that the district math coordinator has reported that math test scores at NOMS have declined dramatically in the past few years. The coordinator suggested that your PLC analyze the data further to find out who has been affected and what you can do about it. So far, you know there is an issue with the seventh-grade math scores at NOMS, but you do not know what the specific problem is. The following describes how the PLC identifies the problem.
The PLC members begin with a brainstorming session on possible causes. Mr. Sousa suspects that the scores have declined because today's students are less focused than students in previous years. Ms. Kim wonders whether the declining scores stem from a specific subgroup. Mr. Thomas asks if the PLC can access the standardized test scores for the last few years to learn more about the decline and when it started. Mr. Sousa adds that a new curriculum was adopted four years ago and wonders whether that could be linked to the problem. Ms. Garcia suggests that a review of teacher evaluations may provide the answer. All the PLC members have good ideas for data sources. What other sources of data can you think of that might help solve the problem?
Question 1
Given the information so far, what could be possible sources of the problem at NOMS?
Instruction and curriculum
Correct! Instruction could be a source of the problem, as the teachers may not be teaching the standards. Curriculum could also be a source of the problem, as it could omit sections relating to certain standards.
Chapter 3: "Examining the Data and Issues"
Read Chapter 3 in Data Driven Decision Making: A Handbook for School Leaders.
In this chapter, you will explore the importance of doing a deeper analysis into data to ensure that your research aligns to the problem you are trying to investigate. Throughout the chapter, you will see examples of the questions that a researcher might ask when beginning an investigation and looking at data to address an instructional problem.
"Data Informed Practice"
Read "Data Informed Practice" at the Association of Independent Schools of New South Wales.
As you read, think about the types of data that are available in schools and other instructional settings. Consider which types of data could help you and your PLC identify the problem at NOMS.
"The Drill Down Process"
Read "The Drill Down Process" at the School Superintendents Association.
After reading the article, think about the advantages of looking at data through multiple dimensions (demographics, perceptions, student learning, and school processes). Could this perspective help the PLC members identify the problem at NOMS?
A strategy for asking the right questions and applying the data in useful ways
by Philip A. Streifer
I thought I had prepared well, having thoroughly reviewed the school district report card published annually by our state education agency. I figured I could anticipate any question my school board might ask about our performance.
All was going well until one board member, who had been a reading specialist prior to retirement, picked up on an interesting combination of facts from among the hundreds of statistics compiled by the state about our schools and students. He observed a disparity between our two elementary schools. One appeared to be spending more time in language arts instruction than the other, and it also performed better on the language arts portion of the state's mastery test.
There it was, right in front of our noses, but we hadn't picked up on it. When asked to comment, all I could say was "We'll get back to you on it."
It was a couple of months more before I reported my findings to the board. The path I followed in figuring out what had occurred was time consuming and not at all apparent. In fact, the "answer" was quite surprising as it turned out that the school spending less time in language arts (and that had attained poorer test results) was actually
"overperforming" given the entry language arts level of the school's students. Had I run with just the surface information displayed on the state report, I would have unfairly accused a school and its faculty of poor performance. The "drill-down" process that I engaged in, however, revealed a very different result.
Valuable Comparisons
The drill-down process starts with a global question or issue, which then is broken down into its component parts for analysis. Once the analysis is completed, all the data are considered from one's "helicopter view" to make a reasoned decision on next steps.
In the case of the state report card, the global question was "Is there a relationship between time allocated to learning and performance?" But the real question posed here by the school board was "Should we force the lower-performing school to add language arts instructional time?"
A good example of how the drill-down process works is one that many school district leaders have experienced in making a major purchase such as a home. On the surface the issue seems straightforward. We want to buy a home, which leads us to ask a series of questions: Where do I want or need to live? What is the cost of homes in one locality versus another? How much can I afford? What are interest rates and are they stable? How much are taxes? How much are the condominium fees?
As these factors are narrowed down, I can begin looking at various properties to make comparisons. Then, once I have selected one or two very good prospects, I can consider making an offer to buy. But how much should I bid given market conditions and potentially rising interest rates? If I bid too low, dragging the purchase process out, I risk losing that amount in rising interest rates over the term of the mortgage. Or I risk losing the home altogether, having then to settle for a higher-priced or same-priced home or a less desirable alternative.
Thus, I need to conduct a cost-benefit risk analysis to continue the process. A final decision or determination requires what I call a helicopter view—an overview and consideration of all the facts that can be gathered and analyzed in a reasonable period of time.
Statistical Differences
In the case of the language arts performance at the two schools, I started by simply verifying all of the information. Next, I wanted to know if the schools were somehow using different instructional techniques. I didn't think so, but it was worth checking. Then I reviewed the experience of the staff—perhaps one school had a young, less-experienced staff. That proved to be a dead end, as the staff was well balanced. Then I decided to disaggregate the data by student mobility rates, gender and special education enrollment. No luck there either.
A next logical step was to see if the mastery test scores between schools, as reported by the state, were statistically
different. The score reported by the state was the percentage of students attaining mastery. But I wanted to know if there was a practical difference in scores between the schools.
To find out, I entered each student's score into a spreadsheet and ran a T-Test—a powerful statistical technique for determining whether two groups' scores are significantly different. The analysis showed no statistical difference between the schools. This was surprising: how could these two schools, one with more students attaining mastery than the other, show no statistical difference in their scores?
The answer, it turns out, is critical and lies in the nature of the measures reported by the state. Percent mastery is a simple "frequency," based on a cut-off score, while the T-Test is a more powerful statistic designed to help one better understand the nature of and the differences in the data.
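The contrast between percent mastery (a simple frequency above a cut-off) and a two-sample t-test can be sketched in Python. The scores below are invented, and scipy's `ttest_ind` stands in for the author's spreadsheet T-Test:

```python
# Invented scores for two hypothetical schools; illustrates how percent
# mastery and a two-sample t-test can tell different stories about the
# same data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
school_a = rng.normal(72, 10, 120)  # simulated student scores
school_b = rng.normal(70, 10, 115)

cutoff = 70  # arbitrary mastery cut-off score
pct_a = 100 * np.mean(school_a >= cutoff)
pct_b = 100 * np.mean(school_b >= cutoff)

# Two-sample t-test: are the underlying score distributions different?
t_stat, p_value = stats.ttest_ind(school_a, school_b)
print(f"Percent mastery: A = {pct_a:.1f}%, B = {pct_b:.1f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The percent-mastery figures can differ noticeably even when the t-test finds no significant difference, which is exactly the situation the author describes.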
While discussing all of this at a superintendent's cabinet meeting, one principal remarked that the school with lower test results seemed to have students who had poorer reading skills and suggested that we check their verbal ability.
We had a verbal ability score on these students, but I was cautious about using it for important decisions because of its inherent lack of validity. But given the previous analyses and the fact that we had run out of other options, we decided to look at the verbal ability score averages more carefully. We found that students from the lower-performing school on the mastery test also had significantly lower verbal ability scores.
This trend told me we were onto something. The two schools' mastery test scores were not significantly different (statistically speaking), yet one school had lower verbal ability scores and fewer students reaching mastery level. What did this all mean?
We realized that it was possible for a school to be successful while not having as many students reach an arbitrary cut-off score—the state mastery standard. The school with lower mastery test scores had actually done quite well moving as many students to mastery as they did, all the while working with students of lower verbal ability levels who had spent less time on language arts instruction.
The drill-down process resulted in a significant change from the original assumption or hypothesis. Had we acted on the initial results reported by the state on its annual report card we would have made a serious error. However, by following a logical sequence of questioning we were able to come to a deeper understanding of the issue.
Our response to the board's question could not have been predicted. We showed how the school with lower mastery test results was actually doing very well. What might have begun as criticism turned out to be a tribute and a recommendation to keep up the good work.
Logical Exploration
This example demonstrates that there are two frameworks for guiding the drill-down process: the variety of questions posed and the level of data analysis used.
The questions fall into three categories having to do with (1) disaggregation of data across one or more factors; (2) longitudinal analyses over time; and (3) a category I simply refer to as "exploring." The story of the two elementary schools fits the latter category because it is a discovery process without any clear, predetermined direction to follow.
Logic is the most important attribute of exploration. Buying a home also is an example of an exploration inquiry. The
case discussed in a moment demonstrates the disaggregation and longitudinal drill-down processes.
Statistical power is the second guide for the drill-down process. In the case of the two schools, we started with basic statistics such as frequencies—the simple straightforward display of aggregate information (percent of students reaching mastery and the amount of time allocated to language arts instruction). Then we reached a much deeper understanding of the issue when we looked at the data more deeply using more powerful statistics.
But we can't continue the drill-down process indefinitely, as there is a cost-benefit consideration here. Ending the drill-down analysis is a judgment call informed by our helicopter view and the importance of the issue. Just as we cannot visit every home on the market, we cannot exhaustively pursue every educational query.
Available Data
A colleague recently wanted to know the impact of instructional time spent in a special reading program on achievement at the elementary level. The first step was breaking the problem down into its component parts. Thus, the drill-down questions were these: What was the growth in reading achievement from fall to spring (longitudinal)? What was the impact of time spent in the special reading program on that growth (disaggregation)? What was the impact by school (disaggregation)? Could we identify teachers who were particularly successful to use as models (disaggregation)?
Fortunately, in this case all of the data was in a "data warehouse"—a large database that can hold data for analysis from disparate sources and multiple years. This allowed us to do these analyses rather quickly—in several hours instead of the weeks it had taken me in the previous case.
The first drill-down question about the growth in reading achievement from fall to spring was easily addressed by comparing the fall state mastery test results with the locally administered spring reading assessment results (all using the same measure). Looking at simple averages, there was an overall 6.8 point gain from fall to spring. The district considered that significant.
To test this a little further, remembering my experience with the two schools, I decided to run a T-Test and found the
difference between the fall and spring scores to be statistically significant. Now I was satisfied.
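Since each student has both a fall and a spring score, this check can be sketched as a paired t-test. All numbers below are invented; the article only says a T-Test was run, so the paired variant is an assumption:

```python
# Sketch of the fall-to-spring comparison as a paired t-test on each
# student's two scores (all data simulated for illustration).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
fall = rng.normal(60, 8, 150)                # simulated fall scores
spring = fall + 6.8 + rng.normal(0, 3, 150)  # ~6.8-point average gain

# Paired t-test: is the within-student gain significantly nonzero?
t_stat, p_value = stats.ttest_rel(spring, fall)
mean_gain = float(np.mean(spring - fall))
print(f"Mean gain: {mean_gain:.1f} points (p = {p_value:.2g})")
```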
The next question was a lot harder to address, even though all the data was in the data warehouse. Identifying the impact of class time spent in the special program on spring scores was not going to be easy. To do this I decided to use a more powerful statistical technique that allows one to handicap or modify one variable based on another.
In this case I would handicap the fall to spring score based on the amount of time a student spent in class weekly. (Fortunately the district had been collecting these data, expressed as the number of hours in special reading instruction each week.) Once the statistics program does the handicapping, it then determines if there is a significant difference between fall and spring scores.
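One common way to do this kind of "handicapping" is a regression that adjusts the outcome for the covariate. The sketch below uses ordinary least squares on simulated data; the article does not name the exact statistical procedure, so this is only an illustrative stand-in:

```python
# A minimal sketch of handicapping spring scores by weekly program hours
# (a covariate adjustment). All data are simulated; the true per-hour
# effect is set to 3.0 points.
import numpy as np

rng = np.random.default_rng(1)
n = 200
fall = rng.normal(60, 8, n)                        # fall score
hours = rng.uniform(0, 5, n)                       # weekly program hours
spring = fall + 3.0 * hours + rng.normal(0, 4, n)  # simulated spring score

# Least-squares fit: spring ~ intercept + fall + hours.  The coefficient
# on hours estimates the effect of program time net of the fall score.
X = np.column_stack([np.ones(n), fall, hours])
coef, *_ = np.linalg.lstsq(X, spring, rcond=None)
hours_effect = float(coef[2])
print(f"Estimated gain per extra weekly hour: {hours_effect:.2f} points")
```

A positive, nonzero coefficient on hours corresponds to the positive impact of program time that the author reports finding.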
I found a positive impact of the amount of class time spent in the special reading program on the fall-to-spring reading scores. For data junkies, this was a cool finding.
Drilling down further, I wanted to know if these results were significant by school and teacher. Indeed they were, but with an even more interesting finding that only a drill-down process could uncover. All of the school and teacher fall-to-spring averages showed that students had grown by at least the level one would consider significant. That was great news for the district. The final question was whether we could identify any teachers with extraordinary gain rates to use as models. Indeed we did.
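Disaggregating gains by school and then by teacher, as described above, can be sketched with pandas. The gradebook below is entirely invented (names and scores are hypothetical):

```python
# Hypothetical gradebook; illustrates drilling down from district-wide
# gains to per-school and per-teacher averages with pandas.
import pandas as pd

df = pd.DataFrame({
    "school":  ["North", "North", "North", "South", "South", "South"],
    "teacher": ["Lee",   "Lee",   "Ruiz",  "Cho",   "Cho",   "Diaz"],
    "fall":    [58, 61, 55, 63, 59, 60],
    "spring":  [66, 70, 60, 72, 65, 74],
})
df["gain"] = df["spring"] - df["fall"]

# Drill down: average gain per school, then per teacher within school.
by_school = df.groupby("school")["gain"].mean()
by_teacher = df.groupby(["school", "teacher"])["gain"].mean()
print(by_school)
print(by_teacher.sort_values(ascending=False))  # biggest gains first
```

Sorting the per-teacher averages surfaces candidates with extraordinary gain rates, the same question the district asked when looking for model teachers.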
Time-Saving Process
From a helicopter view it was important to address the fundamental cost-benefit question of whether the special reading program works. While no educational research can be absolutely certain in its conclusions, the district's conclusion was the program was worth keeping.
However, this is time consuming, complex and hard work. And it is often specious to derive meaning from simple scores as was the initial case with the two elementary schools. My colleague who wanted to know if the special reading program was working also suspected that some teachers were not performing well, which turned out not to be true. In both cases, findings were determined after significant drill-down into the data.
The work to gather all the data for the two elementary schools was done by hand and took weeks to complete. The work to assess the reading program was performed in hours because all the needed data resided in a data warehouse. But data warehouses are still rare and most of this work needs to be done by hand.
To address this challenge and ensure a successful school improvement process in a data-driven environment, school
districts might consider the following suggestions:
* Develop data work teams for each school or district with staff members who have expertise in various areas: curriculum, testing, database manipulation and basic research.
* Give these teams adequate time to do their work. Districts might consider compensation as well. To adequately support school improvement work, every member of the school staff needs to be involved in some way.
* Teams should focus on trends in the data over time, not one data point. Schools and schooling are just too complex for single scores to drive solutions. Armed with diagnostic information gained through drill-down analyses, teams should rely on their judgment and expertise in planning program changes.
* Once the team believes it has identified the root cause of a problem (if we are focusing in on problems rather than gains), check the research on what works to solve the problem to avoid reinventing the wheel.
* Build a culture that supports the review and use of data for decision-making by including it regularly in the school improvement process. Change will be incremental—the shift to standards-based learning will have increasingly greater impact on the work of schooling. It is easier to build collaborative relationships before your school system is faced with tough issues. Doing data-driven decision-making only when a serious problem exists can result in an anti-data culture. Using these techniques regularly to support teacher and administrator decision-making will make for a much more positive and successful approach.
Ms. Johnson recently earned her master's degree. She states that good research begins with a review of the best practices that experts have used to address a problem. She suggests that the PLC members pair up and find scholarly research and literature that would help the team discover the root cause of the problem and identify potential ways to address it. Before the next meeting, the PLC partnerships are tasked with reading their materials, summarizing them, and reporting back to the team. The team then plans to use the prior research to develop a problem statement. You and Ms. Kim begin your database search using the search terms "assessment" and "success." You discover that this yields almost 2,000,000 sources. After discussion, you realize that you both believe differentiated instruction is a very important tool to promote student learning, so you add "differentiated instruction" to your search. The top result is now "Assessment and Student Success in a Differentiated Classroom" by
Carol Ann Tomlinson and Tonya R. Moon (2013). These are names that you recognize from prior reading, so you select this text to contribute to the PLC's fact-finding mission. At the follow-up meeting, you report that Tomlinson and Moon (2013) emphasize the importance of pre- and postassessments and the use of assessment data to inform instruction. The team thinks this resource may be helpful once the cause of the declining scores is better understood, but they are concerned that it does not give a clear direction at this time. Mr. Thomas and Mr. Sousa have brought a summary of Craig A. Mertler's (2007) book Interpreting Standardized Test Scores
. They summarize how this book explains the meaning and development of standardized tests and includes a section on using standardized test scores in instructional decision-making. The team thinks this source sounds quite interesting. Finally, Ms. Garcia and Mr. Kalani summarize Heidi Hayes Jacobs's (1997) book Mapping the Big Picture: Integrating Curriculum and Assessment K–12
, which says that the standardized assessment data must be used to map the curriculum (p. 35). The team realizes that this is a great source
for guiding their decisions. The seventh-grade math team at NOMS has not used assessment data to map the curriculum in recent history. In the meantime, the district math coordinator comes in with good news: your team now has access to all of the NOMS standardized test score data for the past five years, disaggregated by grade, teacher, and learning strand.
Question 1
After reading the books, the PLC is now aware that the curriculum should refer to and address the standards, and that the assessments should test knowledge of those standards. How does this show progress toward addressing the problem?
Based on information in the literature, the problem could be in the curriculum, because the PARCC tests the standards and the curriculum may not fully address each standard.
Correct! The curriculum should give the students the skills they need to be proficient in the standards and thus be proficient on the PARCC.
https://youtu.be/IRCHdhdS_aU
Chapter 1: "Research in Education"
Read pages 1–8 ("Research in Education" to the end of "Examples at the Classroom Level") in Conducting Educational Research: Guide to Completing a Major Project. In this chapter, you will learn about the many ways that educational research can help you identify instructional problems. Throughout the chapter, you will explore a variety of research questions and be challenged to think about the type of research you would use to investigate the problems you identify.
"Teaching as a Scholarship"
Read "Teaching as a Scholarship" from Galileo Educational Network.
In this resource, the authors stress the importance of adopting a scholarly approach to teaching. Think about how this approach can lead to scholarly research and problem identification. What challenges could you face when seeking scholarly sources you can use to identify a problem and its impact on student learning?
Chapter 2: "Identifying a Research Problem and Question, and Searching Relevant Literature"
Read Chapter 2 in Conducting Educational Research: Guide to Completing a Major Project. As you read, focus on how the authors identify the problem. Then focus on the current research that addresses learning the history of the problem before determining how to collect the data. Although the context differs from what you and the PLC are trying to accomplish in the scenario, the process for articulating a research problem and searching for relevant and credible extant sources is the same.
After you carefully review reliable and credible sources about the identified issue, you can articulate a specific problem and ways to address it to improve student learning. This means you are ready to write an actionable problem statement.
In preparing to write your actionable problem statement, it helps to understand what a problem statement is. Bwisa (2018) states, "The ultimate goal . . . is to transform a generalized problem (something that bothers you; a perceived lack) into a
targeted, well-defined problem; one that can be resolved through focused research and careful decision-making" (para. 4). The
inclusion of decision-making makes the actionable problem statement more dynamic. In other words, it is not enough to only identify the problem. With an actionable problem statement, you can capture your intention to make decisions to address the problem.
Once enough data and research have been gathered to determine why an issue is occurring, it is time to write the problem statement.
At the next meeting, the team sits down to write an actionable problem statement to guide their work and determine success criteria.
Question 1
This is not a form; we suggest that you use the browse mode and read all parts of the question carefully.
Using the information from the scenario, which type of data would the PLC need to write the problem statement? (Select two.)
Results of the curriculum analysis
Data reports from the standardized tests
Correct! The standardized test data reports contain a detailed analysis of every student who took the test. The curriculum analysis determines where there are gaps in the curriculum.
The problem statement is directly related to the research topic. In the article "Research Topic to Research Problem" below, by Dr. Mary Murry and Dr. Robert Murray, you learn how a researcher identifies a topic, narrows the topic to a specific concept, and develops a defined research problem statement. When developing a problem statement, we identify the problem, the potential impact, and the possible cause. The problem is often grounded in the researcher's professional environment and addresses the concern the researcher is investigating. For example: "Students in my 7th grade social studies class struggle with understanding government systems." The problem part of the problem statement is specific to the researcher's professional environment, including grade level and classroom setting, with a clear focus within instruction. After identifying the problem, the potential impact is identified. The impact statement directly relates to the problem and provides insight for the reader on why it is important to address the problem. For example: "It is important that my 7th grade students demonstrate understanding of government systems because this knowledge is needed to learn more complex concepts of how different societies work."
Finally, the possible cause is identified. When developing the possible cause, the researcher provides a logical explanation, based on his or her research, of why the problem could exist. It is recommended to tie the possible cause to your proposed solution. For example: "Perhaps my 7th grade students are struggling to understand government systems because the methods I use to teach are not meeting their academic needs." The researcher more than likely discovered research and specific pieces of literature suggesting that the instructional strategies currently used may not meet the students' needs.
Notice the format of the complete problem statement using the examples provided above:
Students in my 7th grade social studies class struggle with understanding government systems. It is important students demonstrate understanding of government systems because this knowledge is needed to learn more complex concepts of how different societies work. Perhaps students are struggling to understand government systems because the methods I use to teach are not meeting their academic needs.
Below are some example phrases one could use when developing a problem statement using the problem-impact-cause format:
Problem
- There is a problem with…
- Students in the researcher's 5th grade class struggle with…
- My students have difficulty…
- My students are unable to…

Impact
- This phenomenon negatively impacts…
- This phenomenon impacts the students'/learners' ability to ______ by "______".
- Students need to be able to ______ because ______.

Cause
- This is happening because "______".
- The possible cause of this problem is that the current curriculum lacks…
- The students or learners have never been trained on…

WGU Capstone Guide
You may be asking yourself, what does this all mean for me? As a graduate student who will complete a capstone, you will make a choice: do you want to pursue applied research or action research? This WGU Capstone Guide provides you with information and guidance about the capstone process. Page 4 of the guide includes potential examples of action research studies and applied research projects to assist you with forward planning of your capstone as you apply content learned in this course to future coursework.
There is more to educational data than students' test scores or results from surveys. Not all data are created equal. Data are relevant if they possess characteristics that can improve teaching and
learning and inform decision-making. In this lesson, you will learn about contextual data, perception data, student well-being data, and student achievement data.
Your data selection strategies are essential to your ability to find and identify the most relevant and credible sources to investigate a problem. Relying on the most efficient and productive strategies will help you manage your time and avoid distractions and unrelated sources.
"Working with Data"
Read "Working with Data" at the Association of Independent Schools of New South Wales. As you read, consider the ways that data types differ and the characteristics of high-quality data. Also consider the extant, or existing, data that you would need to find the source of the problem that you and the PLC are facing at NOMS. This will support your ability to draft a problem statement.
"Data Selection"
Read "Data Selection" from Northern Illinois University.
This article articulates several issues that researchers should consider when selecting data. The author offers guiding questions that will help you ensure that the data you select will align with the problem you are trying to investigate. Data selection is defined as the process of determining the appropriate data type and source, as well as suitable instruments to collect data. Data selection precedes the actual practice of data collection. This definition distinguishes data selection from selective data reporting (selectively excluding data that are not supportive of a research hypothesis) and interactive/active data selection (using collected data for monitoring activities/events, or conducting secondary data analyses). The process of selecting suitable data for a research project can impact data integrity.
The primary objective of data selection is the determination of appropriate data type, source, and instrument(s) that allow investigators to adequately answer research questions. This determination is often discipline-specific and is primarily driven by the nature of the investigation, existing literature, and accessibility to necessary data sources.
Integrity issues can arise when the decisions to select ‘appropriate’ data to collect are based primarily on cost and convenience considerations rather than the ability of data to adequately answer research questions. Certainly, cost and convenience are valid factors in the decision-making process. However, researchers should assess to what degree these factors might compromise the integrity of the research endeavor.
Considerations/issues in data selection
There are a number of issues that researchers should be aware of when selecting data. These include determining:
the appropriate type and sources of data that permit investigators to adequately answer the stated research questions,
suitable procedures to obtain a representative sample, and
the proper instruments to collect data. There should be compatibility between the type/source of data and the mechanisms to collect it. It is difficult to extricate the selection of the type/source of data from the instruments used to collect the data.
Types/Sources of Data
Depending on the discipline, data types and sources can be represented in a variety of ways. The two primary data types are quantitative (represented as numerical figures, such as interval- and ratio-level measurements) and qualitative (text, images, audio/video, etc.). Although scientific disciplines differ in their preference for one type over another, some investigators utilize both quantitative and qualitative information with the expectation of developing a richer understanding of a targeted phenomenon. Data sources can include field notes, journals, laboratory notes/specimens, or direct observations of humans, animals, or plants.
Interactions between data type and source are not infrequent. Researchers collect information from human beings that can be qualitative (e.g., observing child-rearing practices) or quantitative (e.g., recording biochemical markers or anthropometric measurements). Determining appropriate data is discipline-specific and is primarily driven by the nature of the investigation, existing literature, and accessibility to data sources.
Questions that need to be addressed when selecting data type and source include:
1. What is (are) the research question(s)?
2. What is the scope of the investigation? (This defines the parameters of any study. Selected data should not extend beyond the scope of the study.)
3. What has the literature (previous research) determined to be the most appropriate data to collect?
4. What type of data should be considered: quantitative, qualitative, or a composite of both?
Methodological Procedures to Obtain a Representative Sample
The goal of sampling is to select a data source that is representative of the entire data universe of interest. Depending on the discipline, samples can be drawn from human or animal populations, laboratory specimens, observations, or historical documents. Failure to ensure representativeness may introduce bias, and thus compromise data integrity.
It is one thing to have a sampling methodology designed for representativeness and yet another thing for the data sample to actually be representative. Thus, data
sample representativeness should be tested and/or verified before use of those data.
Potential biases limit the ability to draw inferences to larger populations. A partial list of biases could include sex, age, race, height, or geographical locale.
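Before analyzing a sample, it can help to verify representativeness directly, as suggested above. The sketch below is a minimal illustration in Python; the hypothetical population of "urban" and "rural" schools and the 5% tolerance threshold are illustrative assumptions, not a standard procedure.

```python
def proportions(group_labels):
    """Return the share of each group label in a list of labels."""
    total = len(group_labels)
    counts = {}
    for label in group_labels:
        counts[label] = counts.get(label, 0) + 1
    return {label: n / total for label, n in counts.items()}

def is_representative(sample, population, tolerance=0.05):
    """True if every group's share in the sample is within
    `tolerance` of its share in the population."""
    pop_props = proportions(population)
    samp_props = proportions(sample)
    return all(
        abs(samp_props.get(group, 0.0) - share) <= tolerance
        for group, share in pop_props.items()
    )

# Hypothetical population: 60% urban schools, 40% rural schools.
population = ["urban"] * 600 + ["rural"] * 400

# A sample mirroring the population passes the check...
balanced_sample = ["urban"] * 60 + ["rural"] * 40
# ...while a skewed sample fails it, signaling possible bias.
skewed_sample = ["urban"] * 90 + ["rural"] * 10
```

A real study would use a formal test (such as a chi-square goodness-of-fit test) rather than a fixed tolerance, but the underlying comparison is the same.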
A variety of sampling procedures are available to reduce the likelihood of drawing a biased sample; some of them are listed below:
1. Simple random sampling
2. Stratified sampling
3. Cluster sampling
4. Systematic sampling
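The four procedures above can be sketched in a few lines of Python. This is a simplified illustration, assuming a hypothetical frame of 100 students tagged by grade level; a real study would use a vetted sampling design rather than these toy helpers.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame: 100 students, each tagged with a stratum.
frame = [{"id": i, "grade": "5th" if i < 60 else "6th"} for i in range(100)]

def simple_random_sample(units, n):
    """Every unit has an equal chance of selection."""
    return random.sample(units, n)

def stratified_sample(units, key, n_per_stratum):
    """Draw separately within each stratum so every stratum is represented."""
    strata = {}
    for u in units:
        strata.setdefault(u[key], []).append(u)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, n_per_stratum))
    return sample

def systematic_sample(units, step, start=0):
    """Take every `step`-th unit from a fixed starting point."""
    return units[start::step]

def cluster_sample(clusters, n_clusters):
    """Randomly pick whole clusters, then include every unit in them."""
    chosen = random.sample(clusters, n_clusters)
    return [u for cluster in chosen for u in cluster]
```

For example, `stratified_sample(frame, "grade", 5)` returns five 5th graders and five 6th graders, guaranteeing both strata appear even though 6th graders are the minority in this frame.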
These methods of sampling try to ensure representativeness from the entire population by incorporating an element of ‘randomness’ into the selection procedure, and thus a greater ability to generalize findings to the targeted population. These methods contrast sharply with the ‘convenience’ sample, where little or no attempt is made to ensure representativeness.
Random sampling procedures common in quantitative research contrast with the predominant type of sampling conducted in qualitative research. Since investigators may be focusing on a small number of cases, sampling procedures are often purposive or theoretical rather than random. According to Savenye and Robinson (2004), “For the study to be valid, the reader should be able to believe that a representative sample of involved individuals was observed. The ‘multiple realities’ of any cultural context should be represented.”
Each strategy has its appropriate application for specific scenarios (the reader is advised to review research methodology textbooks for detailed information on each sampling procedure). Selection bias can occur when failing to implement a selected sampling procedure properly. The resulting non-representative sample may exhibit disproportionate numbers of participants sharing characteristics (e.g., race, gender, age, geographic locale) that could interact with main effect variables (Skodol, Bender, 2003; Robinson, Woerner, Pollak, Lerner, 1996; Maynard, Selker, Beshansky, Griffith, Schmid, Califf, D’Agostino, Laks, Lee, Wagner, 1995; Fourcroy, 1994; Gurwitz, Col, Avorn, 1992). Use of homogeneous samples in clinical trials may limit the ability of researchers to generalize findings to a broader population (Sharpe, 2002; Dowd, Recker, Heaney, 2000; Johnson, 1990). The issues of sampling procedures apply to both quantitative and qualitative research areas.
Savenye and Robinson (2004) contrast this approach with qualitative researchers’ tendency to interpret results of an investigation or draw conclusions based on specific details of a particular study, rather than in terms of generalizability to other situations and settings. While findings from a case study cannot be generalized,
this data may be used to develop research questions later to be investigated in an experiment (Savenye, Robinson, 2004).
Selection of Proper Instrument
Potential for compromising data integrity also exists in the selection of instruments to measure targeted data. Typically, researchers are familiar with the range of instruments that are conventionally used in a specialized field of study. Challenges occur when researchers fail to keep abreast of critiques of existing instruments or diagnostic tests (Goehring, Perrier, Morabia, 2004; Walter, Irwig, Glasziou, 1999; Khan, Khan, Nwosu, Arnott, Chien, 1999). Furthermore, researchers may:
be unaware of the development of more refined instruments,
use instruments that have not been field-tested, calibrated, validated, or measured for reliability, or
apply instruments to populations for which they were not originally intended.
Questions that should be addressed in the selection of instruments include:
1. How were data collected in the past?
2. Is (are) the instrument(s) appropriate for the type of data sought?
3. Will the instrument(s) be adequate to collect all necessary data to the degree needed?
4. Is the instrument current, properly field-tested, calibrated, validated, and reliable?
5. Is the instrument appropriate for collecting data from a source different from the one originally intended? Should the instrument be modified?
Attention to the data selection process is crucial in supporting the research steps that follow. Despite efforts to maintain strict adherence to data collection protocols, selection of fitting statistical analyses, accurate data reporting, and an unbiased write-up, scientific findings will have questionable value if the data selection process is flawed.
In this module, you will learn the features, characteristics, benefits, and limitations of different data collection methods. You will also learn how to use technology to collect data and how to determine ethical considerations when collecting data. After engaging with this module, you should be able to do the following:
Identify features and characteristics of data collection methods.
Describe the benefits and limitations of data collection methods.
Determine ethical considerations when collecting data.
Explain how technology can facilitate the collection of data.

Once you have identified the problem and selected the types of data you need, it is time to start collecting data. In this lesson, you will learn the different collection methods that are used in educational research, including the features, characteristics, benefits, and limitations of each data collection method.
Methods of data collection are the techniques for physically obtaining data from various sources; after collecting the data, you analyze it. Data collection and analysis are part of every empirical study. The major methods of data collection in empirical research include tests, surveys, questionnaires, interviews, focus groups, observations, and constructed, secondary, or existing data. When you think about a study you may want to conduct, you will need to think about which kind of data you need to collect to answer your research question.
"Data Collection Methods"
Read "Data Collection Methods" at JotForm Education.
Researching topics to improve instructional decision-making will, in part, rely on your ability to leverage various data collection
methods. This article explains how to collect data for both qualitative and quantitative research and provides information on popular sources for secondary data.
"The Nature of Data Collection"
Read pages 47–68 ("The Nature of Data Collection" to the end of "Portfolios") in Conducting Research.
This section poses questions that can help you determine which data collection methods to use for your research and how to determine the type of data that you want to collect.
"Sampling Methods and Bias with Surveys: Crash Course Statistics #10"
Watch "Sampling Methods and Bias with Surveys: Crash Course Statistics #10" (11:45) from Crash Course.
Researchers must understand the impact of bias on a study and that it can come from the questions, participants, and researcher. This video from CrashCourse explains how to make sure that your sampling methods and data collection tools deliver unbiased results.
In the previous lesson, you learned about a variety of data collection methods that you can use to gather data to address your research question. In this lesson, you will learn about the benefits and limitations of several of those methods. Use what you learn about those benefits and limitations to align the data collection method to your research needs.
Data Collection Methods: Characteristics, Benefits, and Limitations

Tests
Characteristics and features: Obtained by having participants fill out an instrument or perform a behavior designed to measure their ability or degree of skill.
Benefits: Efficient to administer; can accommodate large groups; easy to score; objective.
Limitations: Time intensive to develop; may produce anxiety; may not assess critical thinking; often ignores individual interests.

Questionnaires
Characteristics and features: Obtain qualitative data about thoughts, feelings, attitudes, beliefs, values, perceptions, personality, and behavioral intentions through constructed responses.
Benefits: May be anonymous; accommodate large groups; cost effective.
Limitations: Low response rates; difficult to analyze; responses to certain questions may be made in the context of what is socially desirable.

Interviews
Characteristics and features: Impartially collect data from the interviewee, who provides the data for the purpose of examining, in detail, how the interviewee thinks and feels about a topic.
Benefits: Allow for in-depth inquiry; high response rates; allow for capturing verbal and nonverbal information.
Limitations: Time consuming; expensive; may be less reliable; difficult to analyze.

Surveys
Characteristics and features: Obtain quantitative data about thoughts, feelings, attitudes, beliefs, values, perceptions, personality, and behavioral intentions through selected responses.
Benefits: May be anonymous; accommodate large groups; cost effective.
Limitations: Impersonal; may contain bias; limited response options may restrict all possible answers; low response rates.

Observations
Characteristics and features: Obtain information about phenomena by watching behavioral patterns of people in certain situations.
Benefits: May collect sensitive data unobtrusively; measure overt behaviors.
Limitations: Do not reveal the attitudes behind behaviors; time consuming.

Constructed and secondary (existing or extant) data
Characteristics and features: Constructed data are information produced by your research participants during the research study. Secondary (existing) data are information collected, recorded, or left behind at an earlier time, usually by a different person and often for a different purpose.
Benefits: Time saving; economical; provide a basis for comparison with newly collected data.
Limitations: May lack specificity for the relevant task; may be difficult to analyze; may be outdated.
"Collecting Quantitative (Numerical) Data"
Read pages 50–58 ("Collecting Quantitative (Numerical) Data" to the end of "Records") of Chapter 3 in Conducting Research. This section explains how to create and use surveys to collect numerical data. It includes a brief discussion of the common problems associated with surveys and the differences between open-ended and closed-ended surveys. Given the benefits and limitations of surveys, think about whether these data collection tools would be a good option for your research study.
"Collecting Qualitative (Narrative) Data"
Read pages 60–68 ("Collecting Qualitative (Narrative) Data" to the end of "Portfolios") of Chapter 3 in Conducting Research. In this section, the authors discuss the role observations play in collecting and analyzing data. They also cover how to use interviews, focus groups, field notes, journals, and portfolios. All these data collection methods come with limitations that can greatly impact a research study's results.
"4 Data Collection Techniques: Which One's Right for You?"
Read the article "4 Data Collection Techniques: Which One's Right for You?" at Humans of Data.
By now you are likely aware that there are many data collection methods. To select the most useful method for your research, read the lists of advantages and disadvantages of four methods (observations, questionnaires, interviews, and focus group discussions).
"Ethical Issues"
Read pages 7–9 ("Ethical Issues") in Data Collection: Methods, Ethical Issues and Future Directions. When you are in the data collection phase of research, your actions are assumed to be both transparent and ethical. This chapter describes possible problems in study design and planning, the dangers of coercion and deception, and threats to confidentiality and trust, all of which are barriers to ethical data collection.
"Ethical and Appropriate Data Use Requires Data Literacy"
Read "Ethical and Appropriate Data Use Requires Data Literacy" from Phi Delta Kappan.
As you read, consider what is good practice for collecting and using data responsibly and ethically. How is your organization collecting and using data responsibly and ethically? Do you notice any areas for improvement?
"Henrietta Lacks, the Tuskegee Experiment, and Ethical Data Collection: CrashCourse Statistics #12" (11:24)
Watch "Henrietta Lacks, the Tuskegee Experiment, and Ethical Data Collection: CrashCourse Statistics #12" from Crash Course.
This video discusses the real-world impact of gathering statistics. Because gathering and analyzing statistics can affect people's lives, researchers have a responsibility to gather and use data ethically. As you watch, think about how you can ensure the data you gather and analyze for your study are handled ethically.
Technology is the application of scientific knowledge for practical purposes. When asked to think of technology, people often think of cell phones, computers, and artificial intelligence agents. After all, everyone is a user of technology.
As an educational professional, you can rely on technology to help you collect data as you work toward improving student learning and making sound instructional decisions. This module has already covered various data collection methods, but it is equally important to know how to facilitate the collection method you have chosen. That is where technology applies. You can use spreadsheets like Excel, online survey platforms like Survey Monkey, and other digital collection methods. Of course, some technologies will require purchasing software, but others are free and available on the internet. You are encouraged to search for a variety of tools and read reviews before making a final decision.
"Collecting Data with Google Forms"
Watch "Collecting Data with Google Forms" (4:12) from LinkedIn Learning.
Ever wonder what Google Forms is used for? Google Forms offers the ability to easily create web-based forms that could be useful to a research study. You do not need programming or database experience, and you can automatically collect the results into a Google sheet for review and analysis. Start this LinkedIn Learning course to learn more.
"Online Survey Checklist"
Read "Online Survey Checklist" from Technology and Learning. This article provides helpful tips regarding when it is appropriate to use online surveys, how to effectively design online surveys, and how to select the best tool for your online survey.

"When Learning Analytics Violate Student Privacy"
Read "When Learning Analytics Violate Student Privacy" from Campus Technology Magazine.

Learning management systems allow schools and organizations to gather a great deal of data about their users. In this article, learn how some universities are working to establish clear and transparent guidelines about how student data can be used and shared.

"Exploiting the Full Potential of New Technologies for Data Collection, Monitoring, and Conflict Prevention"
Read "Exploiting the Full Potential of New Technologies for Data Collection, Monitoring, and Conflict Prevention" at World Bank Blogs.
Because of emerging technologies, there are many methods to collect and monitor data. After reading this article, think about how you could use any of these new technologies to enhance your data collection activities and how you can access these tools.
Why should administrators use surveys?
In the age of data-driven decision making, collecting data and using it efficiently is key to all aspects of the education process. Online surveys are a great way to gather information on everything from lunchroom procedures to program design. This information can impact solutions for student achievement, community relations, and district management. Consider the following:
Evaluation: The nature of NCLB is driving schools to reevaluate comprehensive plans. Collecting information regarding opinions and beliefs allows administrators to harness the voice of many while evaluating areas for development.
School-community connections: Understanding parents, teachers, students, and community members on budgetary needs, school safety, and even transportation procedures is essential to the success of school-to-home connections. Demonstrating the value of community by asking questions, listening, and taking action builds trust.
Respectful decision making: Administrators are faced with making many difficult decisions. Having hard data at your fingertips eliminates the need for guesswork and basing actions on opinions — which can create emotional situations and impede real progress.
How to design questions
Careful question design is one of many considerations. The following is a list of items to contemplate when designing surveys:
Establish goals: Begin with defined objectives. For instance, if the goal is to gather information on support for a building project, focus on questions that align to perception of need such as class size, condition of existing facilities, and viewpoints regarding the impact of projected demographic growth.
Define audience: A clear picture of the population addressed is necessary. If opinions of community members that do NOT have students currently enrolled are desired, focus accordingly.
Anonymity: Decide whether it is pertinent to allow for anonymity of respondents.
Design questions worth asking: Be sure questions are neutrally worded to avoid bias, concise, and easy to understand. Keep each question focused on a single topic. Consider using third-person wording. Someone may be more honest about their own feelings when asked, "How do your colleagues feel about…" rather than "How do you feel about…"
Do not ask questions you are not prepared to have answered: It is inappropriate to disregard data simply because the results do not support the original hypotheses. When all other factors that could have affected survey results (breadth of the sample, poorly worded questions) have been ruled out, the remaining data are telling.
Consider scale and question types: Eliminating the "middle" in scaled responses ("3" in 1-5 scales) forces someone to "take a side." Surveys are designed for feedback, so consider whether a "no opinion" choice is valid to the process. Is there need for open-ended responses? Also, there is a difference between: "Do you want a new high school, yes or no?" and, "Rate community perception of need regarding facility expansion on a 1-4 scale."
Group themes and types of responses: Theme-clustered questions create clarity in the response process. Like-response types such as a Likert scale or multiple-choice should be grouped for consistency.
Keep surveys short: Provide directions that state a completion time based on testing, not estimation. A parent might give five to 10 minutes of feedback. A faculty member allotted assigned time may focus for 20 to 25 minutes.
Trial run: Use test groups for feedback. Examine sample data to be sure results align with objectives. Be prepared to redesign.
Getting the word out: Newsletters, listservs, Web sites, flyers, calendars, newspapers, radio, and Board of Education meetings can all be used for publicity.
Time frame: Make sure the time frame is long enough to reach your audience, but not long enough to allow procrastination. Faculty-level surveys may take two business days, while community surveys might be available between two Board of Education meetings.
Application of results: Quickly analyze results. When constituents are asked for input, expectations are that the input will be applied. Knowing how to use data is as important as collecting it. Training staff regarding the use of data is time and money well spent.
Presentation: Take advantage of charts, graphs, and reporting features to create professional materials.
Directions: Make no assumptions that the audience has the skills to effectively participate. Provide clear directions outlining goals, time required, and other pertinent information.
Choosing the right tool
There is a multitude of choices when it comes to selecting technology-enhanced survey tools. The same attention should be given to selecting the right survey product as to evaluating software or textbook purchases. The following are considerations to be aware of when seeking the best
survey or remote-response tool for your school or district.
Question formats: Multiple-choice, open-ended, dual-scale, and various other question types are available. Not all tools offer the same question variety or flexibility.
Branching options: "If you answered no, skip to question 10." Some tools will automatically skip to a question based on the response.
Question banks: Sample or template question banks are sometimes available.
Required responses: Can questions be marked as required before a respondent may advance?
Distribution: Some tools offer features that distribute surveys and track responses.
Visual interface: What are the graphic design features of the program?
Technical: What are the related hardware or software requirements?
Results and data storage: How long is survey data stored, in what way are results presented, who owns it, and what are the graphing and import/export features? Downloads range from universal formats such as .CSV to proprietary databases. How easily can information
be imported to presentation formats?
E-mail: Are there e-mail options to send results to system managers automatically? If respondents provide e-mails, are features available
to quickly use this information? Is there an auto-reply feature?
Passwords: Determine if password protection of survey access is needed and available.
Accounts: Can a user finish an incomplete survey at a later time?
User friendliness: How much training is needed? Features such as wizards and tutorials may be included. What skills will the audience need to respond?
Price: Many online survey tools have free or trial accounts. Services structure pricing by the survey, number of questions, number of responses, or by monthly/yearly fees. How many user accounts are being purchased? With handhelds, look for software licensing, number of units, features, warranty and support, and hardware cost. Software should allow for open-ended questions without needing "correct responses." Examine recurring costs.
Support: Phone, online, e-mail, and chat are all forms of technical assistance. Determine what is offered. During what times (and in what
time zones) is support available?
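As a rough illustration of the results and import/export consideration above, a universal-format CSV export can be summarized with a short script before moving the numbers into a presentation. The column names and rows below are invented for illustration; real exports vary by survey product.

```python
import csv
import io

# A hypothetical CSV export from an online survey tool.
raw_export = """respondent_id,q1_rating,q2_comment
r001,4,Liked the new schedule
r002,2,Too little notice
r003,5,
r004,4,Very clear communication
"""

def summarize_ratings(csv_text, rating_column):
    """Compute the response count and mean for one numeric rating column,
    skipping blank cells."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    ratings = [int(r[rating_column]) for r in rows if r[rating_column]]
    return {"count": len(ratings), "mean": sum(ratings) / len(ratings)}

summary = summarize_ratings(raw_export, "q1_rating")
```

The same parsed rows could then feed the charting and graphing features mentioned under Presentation.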
Concerns
Careful planning is crucial. Listed are points to consider.
Access to technology: Does the audience have access to the technology required? Make computers in schools available outside business
hours. Making arrangements with local libraries and community centers can also be a solution.
Anonymity: Even if it is stated in advance that the survey is anonymous, some people believe they can be identified by the "technology."
Education on how it all works will ease this concern.
Multiple responses: Various tools address controlling response numbers from specific people or workstations, eliminating the worry regarding duplicate submissions.
Junk mail: Distributing the link to a survey via e-mail could result in messages being deleted as junk or spam if the receiver does not recognize the sender. Consider sending correspondence from the out-box of a superintendent or principal.
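The multiple-responses concern above can also be handled after collection with a simple deduplication pass, assuming each response carries some respondent identifier. The field names here are hypothetical, and real tools often do this filtering for you.

```python
def deduplicate(responses, key="respondent_id", keep="last"):
    """Keep one response per respondent; `keep` chooses whether the
    first or the most recent submission wins when duplicates exist."""
    seen = {}
    for response in responses:
        identifier = response[key]
        if keep == "last" or identifier not in seen:
            seen[identifier] = response
    return list(seen.values())

# Hypothetical submissions: r002 submitted twice.
submissions = [
    {"respondent_id": "r001", "answer": "yes"},
    {"respondent_id": "r002", "answer": "no"},
    {"respondent_id": "r002", "answer": "yes"},  # resubmission
]
```

Whether to keep the first or last submission is a policy decision worth stating in the survey directions.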
Wrapping it up:
Access to survey design and electronic implementation tools greatly enhances the data-driven decision-making process. Administrators who take
advantage of these systems find them invaluable. We have witnessed the success of many district-based projects, and our own programs have been forever impacted.
"We now have substantial data to work with. We can analyze and disaggregate the data with ease, which will lead us to conclusions and recommendations based on teacher need and input. The results will help shape the future direction of professional development in Arlington. This is an exciting time!"
— Dr. Christine Lowden, director of professional development and program evaluation for the Arlington, NY School District, on her district's use of online surveys.
Collecting perception data in hard-to-reach areas and fragile contexts can be extremely challenging, but it is necessary to better understand who is excluded and who feels excluded, and to measure horizontal inequalities. Doing so requires the use of innovative methodologies. In particular, technology is a valuable tool with which to access remote and conflict-affected areas where exclusion is likely to be the worst.
Using technology to collect data
The world has become increasingly digital with time—technology has now pervaded our daily lives. Given this, why shouldn’t technology play a key role in areas such as risk monitoring, data collection, or conflict prevention? There are several useful data collection methods now available to practitioners:
Mobile data collection is the use of digital devices such as mobile phones, tablets, or laptops for data collection.
Crowdsourcing and crowdseeding are real-time data collection methods that involve different technologies. Information is obtained directly from technology users who volunteer their own data (crowdsourcing), or from trained informants in the field (crowdseeding). The best-known example of crowdsourcing is Ushahidi, an open-source software program for collecting information and undertaking interactive mapping, first used after the 2007 presidential election in Kenya.
Social media monitoring extracts information from social media networks such as Twitter, Facebook, Google Plus, and so on.
Geospatial technology refers to global positioning systems (GPS), geographical information systems (GIS), and remote sensing (RS). These tools can be used to do a geo-located mapping of violent incidents, for example.
To determine how technology can be used for monitoring, some key elements need to be taken into account: (i) Is there a reliable source of electricity? (ii) How good is mobile and Internet connectivity in the area? (iii) What are the current trends in technology use? and (iv) What languages are needed to access the targeted population, and what is the literacy level?
How can we realize the full potential of new technologies?
Technology has a huge potential that has yet to be fully explored. Some researchers even talk of "Big Data for Conflict Prevention," which alludes to "the full potential use of Big Data to support conflict prevention efforts." In recent years, technology has played an increasing role in conflict prevention, but it has not yet been harnessed to its full and promising potential.
Information and communication technologies (ICT) and the data they generate can support efforts to prevent crisis and tackle causes of violence. ICT can help researchers collect quantitative and qualitative data more frequently in insecure or remote areas, through the use of digital surveys, SMS-administered polling, geo-spatial mapping, photographs, videos, and satellite imagery. These data provide key information about the drivers and warning signs of violence to support conflict-prevention approaches.
How can Big Data prevent conflict? By providing valuable information about individuals and communities in areas—even remote ones—where data are often unavailable, in near real time, with a good level of precision. If we assume that individuals and communities change their behavior when violent events occur, then Big Data can capture these changes. For example, satellite imagery can be used to detect mass movements. An analysis of tweets can help detect growing tensions, frustration, and calls for violence. For instance, before the 2013 presidential elections in Kenya, a search for hate speech was conducted on social media to identify early signs of violence.
Concrete examples, real success
In Sudan, the Crisis and Recovery Mapping and Analysis (CRMA) project undertook participatory mapping of threats and risks in 6 states of Sudan and 10 states of South Sudan. For that purpose, UNDP developed a GIS-enabled desktop database tool. During the 2015 election in Nigeria, Patrick Meier and his team used Artificial Intelligence for Monitoring Elections (AIME), a free and open-source (experimental) solution that combines crowdsourcing with artificial intelligence to automatically identify tweets of interest during major elections. Crowdsourcing systems have the potential to be used for early warning, if the system is designed to produce frequent, consistent, and complete data. Cell phones can also be used in conflict-affected areas because of all the different data they passively generate each time a cell phone is used.
Technology has also changed the way people respond to crises. Following the 2010 Haiti earthquake, for the first time, thousands of people volunteered online to support rescue operations. This gave rise to the Digital Humanitarians project which, through crowdsourcing, created a digital crisis map that showed the real-time evolution of the situation on the ground.
Technologies can break new ground in terms of conflict prevention, risk monitoring, and data collection. It is now up to practitioners to seize the opportunities these new tools offer.
Now that you have learned how to gather data for your research, it is time to focus on how to analyze and make meaning from those data. In this module, you will learn about the methods used to analyze both quantitative and qualitative data. Included in the module is how to apply descriptive and inferential statistics to conduct data analysis. This module also reinforces the importance of ensuring the reliability and validity (or accuracy) of your data. It ends with an overview of the use of technology in facilitating data analysis.
After engaging with this module, you should be able to do the following:
Differentiate among variables and measurement scales.
Examine methods to analyze quantitative data.
Describe how to use descriptive statistics to analyze data.
Describe how to use inferential statistics to analyze data.
Examine methods to analyze qualitative data.
Apply principles of validity and reliability to data analysis.
Explain how technology can facilitate data analysis.
In this lesson, you will learn about dependent and independent variables as well as constants. This lesson also covers the four measurement scales that are commonly used in qualitative and quantitative analysis: nominal, ordinal, interval, and ratio. These scales determine how certain variables are defined and categorized. It is important to understand the distinguishing characteristics or properties of each of these scales so you can apply the correct data analysis technique.
Variables and Measurement Scales in Quantitative Research
A variable is defined as a condition or characteristic that can take on different values or categories such as age, grade point average, and test scores. To better understand the concept of a variable, it is helpful to compare it with a constant, its opposite. A constant is something that does not change, but rather is a single value. A single value or category of a variable can be constant in a study.
For example, consider the variable hair color. Hair color can be the values of blonde, brunette, red, and others. In a study, you can use one level of hair color (brunette) and hold it constant. In that case, brunette is a constant in the study. In another example, as in the case of the variable age, all the ages compose the values of the variable, and you could do a study where an individual value (e.g., 13 years old) is held constant. Essentially, a variable is like a set of things. When you pick one level or value of a variable and do not change it, it is said to be a constant.
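The distinction between variables and constants can be sketched in a few lines of Python. The student records below are hypothetical, invented only for illustration:

```python
# A "variable" takes on different values across a data set; holding one of
# its values fixed makes it a constant in the study (hypothetical data).
students = [
    {"age": 13, "hair_color": "brunette", "test_score": 82},
    {"age": 14, "hair_color": "blonde",   "test_score": 75},
    {"age": 15, "hair_color": "brunette", "test_score": 90},
    {"age": 13, "hair_color": "red",      "test_score": 68},
]

# hair_color is a variable across the whole data set (three values observed)...
hair_colors = {s["hair_color"] for s in students}

# ...but if we restrict the study to brunettes, hair_color becomes a constant,
# while age and test_score remain variables within the subsample.
subsample = [s for s in students if s["hair_color"] == "brunette"]
ages_in_subsample = {s["age"] for s in subsample}  # still varies
```

In the subsample, hair color no longer varies, which is exactly what the text means by holding one level of a variable constant.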
"Independent and Dependent Variables"
Watch "
Independent and Dependent Variables
" (3:25) from LinkedIn Learning.
This video discusses variables as data sets or a measurement of an item consisting of different values. As you watch, consider the types of variables you will need when you conduct research to make instructional decisions.
"Scientific Variables"
Watch "
Scientific Variables
" (1:28) from Christopher Brunson. In this animated video, you will learn the difference between independent variables, dependent variables, and controls. Each explanation includes simple examples to enhance your understanding.
Chapter 1: "Data Organization"
Read Chapter 1 in Analyzing Quantitative Data: An Introduction for Social Researchers.
Chapter 1 explores the role of variables in research and the difference between independent and dependent variables. The author suggests that researchers are likely aware of their study's variables before collecting data. After you read this chapter, try to think of variables that may influence your strategies to improve student learning.
Now that you understand the role that variables play in research, move on to measuring data by relying on measurement scales. "Measurement is the process of recording observations collected as part of research. Scaling, on the other hand, is the assignment of objects to numbers or semantics" (Formplus, n.d., para. 2). In other words, you can record what you observe and assign numbers or words to represent those observations. To design your research plan, you will need to know how to work with measurement scales.
"The measurement scales are used to measure qualitative and quantitative data . . . nominal and ordinal scale being used to measure qualitative data while interval and ratio scales are used to measure quantitative data" (Formplus, n.d., para. 6). It is important to understand the distinguishing characteristics or properties for each of these scales so the correct data analysis technique may be applied. "Understand Levels of Measurement in Statistics (NOIR): A Tidy Review"
Watch "
Understand Levels of Measurement in Statistics (NOIR): A Tidy Review
" (6:42) from Research by Design to learn about the four levels of measurement. This video provides examples for each scale and explains why each is used to measure specific data.
Chapter 10: "Identifying and Describing Procedures for Observation and Measurement"
Read pages 153–154 ("Types of Measurement Scales" to the end of "Scale Conversion") from Chapter 10 in Conducting Educational Research. This section details the four measurement scales and describes how they can be used to measure a particular variable or assign numerical scores to the variable.
Chapter 4: "Quantitative Statistical Analysis and Interpretation"
Read pages 80–87 ("Descriptive Statistics" to the end of "Measures of Variability") from Chapter 4 in Conducting Research.
In this section, the author explains several methods for enhancing the credibility of quantitative research methods. As you read, be sure to note how you can be prepared to address each of these methods when conducting quantitative research so you can enhance learning outcomes.
Data are analyzed using different methods depending on what you wish to analyze. In this lesson, you will be introduced to quantitative data analysis using descriptive statistics. Descriptive statistics is useful for researchers who are faced with hard-to-understand quantitative data. This lesson breaks descriptive statistics down into measures of central tendency (mean, median, and mode) and measures of variability (such as range).
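These measures can be computed with Python's standard library; the test scores below are hypothetical:

```python
# Descriptive statistics with the standard library (hypothetical test scores).
import statistics

scores = [72, 85, 85, 90, 68, 77, 85]

mean   = statistics.mean(scores)      # measures of central tendency
median = statistics.median(scores)
mode   = statistics.mode(scores)      # the most frequent value
range_ = max(scores) - min(scores)    # a simple measure of variability
stdev  = statistics.stdev(scores)     # sample standard deviation

print(mean, median, mode, range_)
```

For these scores the median and mode coincide at 85 while the mean is pulled down by the low score of 68, a small illustration of why reporting several measures of central tendency is informative.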
"What Is Descriptive Statistics?"
Read "
What Is Descriptive Statistics?
" at Medium.
In this brief article, the author offers a definition of descriptive statistics and ways to measure it. He also explains skewness and measures of variability and how they help researchers analyze how spread out the distribution is for a set of data.
"Mean, Median, and Mode: Measures of Central Tendency: Crash Course Statistics #3"
Watch "
Mean, Median, and Mode: Measures of Central Tendency: Crash Course Statistics #3
" (11:22) from CrashCourse.
This video explains mean, median, and mode as used to measure the most common patterns of an analyzed data set. These measurements are a major component of descriptive statistics.
Chapter 3: "Descriptive Statistics for Continuous Data"
Read pages 70–73 ("Measures of Central Tendency" to the end of "Maximum and Minimum") from Chapter 3 in Analyzing Quantitative Data. In this chapter, the author explains how to compute mean, mode, and median to measure central tendency and how to choose which measurement to use.
"Test Statistics: Crash Course Statistics #26"
Watch "
Test Statistics: Crash Course Statistics #26
" (12:49) from CrashCourse.
Learn how to distinguish between random and real differences among data points through the use of test statistics. The video explains variance and how it impacts research conclusions.
You have explored descriptive statistics and how it can impact a quantitative research study. Now focus on inferential statistics, which is also useful in education research. "Descriptive statistics describes data (for example, a chart or graph) and inferential statistics allows you to make predictions ('inferences') from that data. With inferential statistics, you take data from samples and make generalizations about a population" (Glenn, n.d., para. 1).
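As an illustrative sketch of making an inference from a sample, the following Python code computes a 95% confidence interval for a population mean using the standard library's NormalDist. The sample scores are hypothetical, and the normal approximation is an assumption of the sketch:

```python
# Sketch: inferring a population mean from a sample via a 95% confidence
# interval (normal approximation; NormalDist needs Python 3.8+).
import statistics
from statistics import NormalDist

sample = [78, 82, 85, 74, 90, 80, 77, 88, 84, 79]  # hypothetical sample scores

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / n ** 0.5          # standard error of the mean

z = NormalDist().inv_cdf(0.975)                    # ≈ 1.96 for a 95% interval
low, high = mean - z * sem, mean + z * sem
print(f"95% CI for the population mean: ({low:.1f}, {high:.1f})")
```

The interval expresses the generalization step in the quotation above: the sample mean is known exactly, but the population mean is only estimated, with an explicit margin of uncertainty.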
"A Concise Guide to Inferential Statistics"
Read "
A Concise Guide to Inferential Statistics
" from Synergy: Imaging & Therapy Practice
.
The authors of this article discuss ways inferential statistics can be used in radiography practice and research. After you read the article, think of ways you could use inferential statistics in your education research plans.
"How P-Values Help Us Test Hypotheses: Crash Course Statistics #21"
Watch "
How P-Values Help Us Test Hypotheses: Crash Course Statistics #21
" (11:52) from CrashCourse.
This video explains the use of inferential statistics as a way of describing data that you may already have in order to make inferences about data you do not have.
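As a small sketch of how a p-value arises, the following Python code computes a two-sided p-value for a one-sample z-test. The class scores, historical mean, and known population standard deviation are all hypothetical assumptions made for the example:

```python
# Sketch of a p-value for a one-sample z-test using the standard library.
# Hypothetical question: does a class mean of 84 differ from a historical
# mean of 80, given a known population standard deviation of 10 and n = 25?
from statistics import NormalDist

mu0, sigma, n = 80, 10, 25
sample_mean = 84

z = (sample_mean - mu0) / (sigma / n ** 0.5)       # z = 2.0
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))   # ≈ 0.0455
print(p_two_sided)  # small p-value: such a difference is unlikely under the null
```

A p-value below the conventional 0.05 threshold would lead many researchers to reject the null hypothesis here, though the threshold itself is a convention, not a law.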
"Chi-Square Tests: Crash Course Statistics #29"
Watch "Chi-Square Tests: Crash Course Statistics #29" (11:03) from CrashCourse.
Learn how to use a chi-square model to measure categorical variables. As you watch the video, think of the types of variables you may need to measure to make decisions about changes to your instructional strategies.
Chapter 9: "Analyzing and Interpreting Experimental Research"
Read pages 207–212 ("Selecting a Statistic Consistent with Your Design and Appropriate to Your Data") of Chapter 9 in Conducting Educational Research. This section details the application of inferential statistics to specific research examples. The author states that inferential statistics helps you make inferential comparisons to determine the likelihood that a hypothesis is true. Think about how inferential statistics could be used to help you or other educational researchers address barriers to student learning.
In this lesson, you will learn about methods to use for qualitative data analysis. This will be especially useful as you explore data relevant to the instructional strategies that you and possibly others in your school are trying to improve. Remember, qualitative data describe how people think and feel and are useful when making important decisions that impact specific populations.
"Thematic Analysis of Qualitative User Research Data"
Watch "
Thematic Analysis of Qualitative User Research Data
" (3:01) from NNgroup.
Watch this video to learn how to organize data into themes, which can enhance your qualitative research efforts.
Chapter 5: "Action Research and Qualitative Analysis"
Read pages 105–108 ("Action Research and Qualitative Analysis") of Chapter 5 in Conducting Research. As an educational researcher, you are often seeking data to make informed decisions regarding student learning. This chapter provides detailed information on methods you can use to interpret data to make decisions.
Chapter 10: "Analyzing and Interpreting Qualitative Data"
Read Chapter 10 in Conducting Educational Research.
Read "Constructing a Scale" through "Constructing an Observational Recording Device" in Chapter 10 in Conducting Educational Research.
This chapter covers information that can help improve your qualitative research study.
Chapter 4: "A Survey of Qualitative Data Analytic Methods"
Read Chapter 4 in Fundamentals of Qualitative Research.
Read Chapter 4 to gain insight on qualitative data analysis methods, including looking at the way patterns, categories, and reasoning shape findings.
"What Is In-Vivo Coding in Qualitative Analysis?"
Watch "
What Is In-Vivo Coding in Qualitative Analysis?
" (6:05) from Quirkos. In addition to walking through basic strategies for coding qualitative data, you will learn how to use excel or other qualitative software tools such
as Quirkos or NVivo that can facilitate the analysis of your data.
Earlier in this course, you learned to differentiate between validity and reliability. In this lesson, you will learn how to apply these principles to your data analysis method. Researchers must be confident that the data they are working with are both valid and reliable. Look at methods you can use to ensure transparency and trustworthiness.
"4.2 Reliability and Validity of Measurement"
Read "
4.2 Reliability and Validity of Measurement
" at Pressbooks.
This section defines reliability and discusses the different types of reliability and the ways they are assessed. It also covers definitions of validity and describes the relevant evidence needed to assess the reliability and validity of a particular measure.
"Reliability and Validity"
Watch "
Reliability and Validity
" (1:33).
This video explains how to determine reliability and validity in assessments. It covers test-retest reliability, internal consistency, and alternate form reliability. While watching the video, imagine the types of assessments you will need for your study and the ways you can use this information to ensure the most accurate results.
Chapter 3: "Sampling and Data Collection Methods"
Read pages 115–118 ("Validity and Reliability in Qualitative Research") of Chapter 3 in Conducting Research. This section explains the difference between validity and reliability in qualitative research methods. The author discusses the human senses and subjectivity related to qualitative data, then explores trustworthiness as a means of strengthening the legitimacy of qualitative research.
"Understand Research Significance"
Watch "
Understand Research Significance
" (2:52) from LinkedIn Learning.
In this video, you will learn how to determine whether your results are statistically significant, that is, whether your data analysis outcomes could have occurred by chance. You may also recognize that statistical significance does not always equate to research validity.
Chapter 4: "Quantitative Statistical Analysis and Interpretation"
Read pages 75–79 ("Threats to Validity") of Chapter 4 in Conducting Research. Threats to validity in a research study include issues that adversely impact the validity of the data. This section explains how to identify and control for threats.
Chapter 10: "Identifying and Describing Procedures for Observation and Measurement"
Read pages 206–210 ("Test Reliability" to the end of "Content Validity") of Chapter 10 in Conducting Educational Research.
These selected pages outline ways to ensure that the assessments used are both valid and reliable. Included are several methods you can use to determine an assessment's validity and reliability.
1. Define reliability, including the different types and how they are assessed.
2. Define validity, including the different types and how they are assessed.
3. Describe the kinds of evidence that would be relevant to assessing the reliability and validity of a particular measure.
Again, measurement involves assigning scores to individuals so that they represent some characteristic of the individuals. But how do researchers know that the scores actually represent the characteristic, especially when it is a construct like intelligence, self-esteem, depression, or working memory capacity? The answer is that they conduct research using the measure to confirm that the scores make sense based on their understanding of the construct being measured. This is an extremely important point. Psychologists do not simply assume that their measures work. Instead, they collect data to demonstrate that they work. If their research does not demonstrate that a measure works, they stop using it.
As an informal example, imagine that you have been dieting for a month. Your clothes seem to be fitting more loosely, and several friends have asked if you have lost weight. If at this point your bathroom scale indicated that you had lost 10 pounds, this would make sense and you would continue to use the scale. But if it indicated that you had gained 10 pounds, you would rightly conclude that it was broken and either fix it or get rid of it. In evaluating a measurement method, psychologists consider two general dimensions: reliability and validity.
RELIABILITY
Reliability refers to the consistency of a measure. Psychologists consider three types of consistency: over time (test-retest reliability), across items (internal consistency), and across different researchers (inter-rater reliability).
Test-Retest Reliability
When researchers measure a construct that they assume to be consistent across time, then the scores they obtain should also be consistent across time. Test-retest reliability is the extent to which this is actually the case. For example, intelligence is generally thought to be consistent across time. A person who is highly intelligent today will be highly intelligent next week. This means that any good measure of intelligence should produce roughly the same scores for this individual next week as it does today. Clearly, a measure that produces highly inconsistent scores over time cannot be a very good measure of a construct that is supposed to be consistent.
Assessing test-retest reliability requires using the measure on a group of people at one time, using it again on the same group of people at a later time, and then looking at the test-retest correlation between the two sets of scores. This is typically done by graphing the data in a scatterplot and computing the correlation coefficient. Figure 4.2 shows the correlation between two sets of scores of several university students on the Rosenberg Self-Esteem Scale, administered two times, a week apart. The correlation coefficient for these data is +.95. In general, a test-retest correlation of +.80 or greater is considered to indicate good reliability.
Figure 4.2 Test-Retest Correlation Between Two Sets of Scores of Several College Students on the Rosenberg Self-Esteem Scale, Given Two Times a Week Apart
Again, high test-retest correlations make sense when the construct being measured is assumed to be consistent over time, which is the case for intelligence, self-esteem, and the Big Five personality dimensions. But other constructs are not assumed to be stable over time. The very nature of mood, for example, is that it changes. So a measure of mood that produced a low test-retest correlation over a period of a month would not be a cause for concern.
Internal Consistency
Another kind of reliability is internal consistency, which is the consistency of people’s responses across the items on a multiple-item measure. In general, all the items on such measures are supposed to reflect the same underlying construct, so people’s scores on those items should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a person of worth should tend to agree that they have a number of good qualities. If people’s responses to the different items are not correlated with each other, then it would no longer make sense to claim that they are all measuring the same underlying construct. This is as true for behavioral and physiological measures as for self-report measures. For example, people might make a series of bets in a simulated game of roulette as a measure of their level of risk seeking. This measure would be internally consistent to the extent that individual participants’ bets were consistently high or low across trials.
Like test-retest reliability, internal consistency can only be assessed by collecting and analyzing data. One approach is to look at a split-half correlation. This involves splitting the items into two sets, such as the first and second halves of the items or the even- and odd-numbered items. Then a score is computed for each set of items, and the relationship between the two sets of scores is examined. For example, Figure 4.3 shows the split-half correlation between several university students’ scores on the even-numbered items and their scores on the odd-numbered items of the Rosenberg Self-Esteem Scale. The correlation coefficient for these data is +.88. A split-half correlation of +.80 or greater is generally considered good internal consistency.
The Open Source Roundup
Read "
The Open Source Roundup
" from Online Searcher
.
Sometimes researchers who are curious about data analysis technologies do not know where to begin searching for resources. In this article, the authors share some of their favorite tools, apps, platforms, and systems, which can be used to collect, analyze, and publish data.
"Conclusion"
Read pages 23–28 ("Conclusion") in From the Past into the Future: How Technological Developments Change Our Ways of Data Collection, Transcription and Analysis.
In this study, the author explores whether qualitative data analysis (QDA) software enables users to skip transcribing data (audio files and video files). Section 8 details the author's reflections on the different developments mentioned earlier in the article and what they mean for data collection and analysis procedures. The author ends with some considerations on how technological advancements could further help the research community.
"Choosing the Right Statistical Software for Data Analysis"
Read pages 18–22 in Quantitative Data Analysis: Choosing Between SPSS, PLS and AMOS in Social Science Research.
Learn more about how to choose the best statistical software for data analysis. This article discusses the importance of researchers first looking at the research objective in order to make an informed choice of software.
Reading for Enrichment
Software for Quantitative Data Analysis: "Toolkit Part II. Excel Analysis Tool"
A wealth of technology tools can help you analyze your data. Explore the following resources to learn how to leverage technology for your data analysis. Read pages 14–16 from A Teacher's Toolkit for Collecting and Analyzing Data on Instructional Strategies.
This tool kit, developed by the Regional Educational Laboratory (REL), provides a process and various tools to help teachers use data from their classroom assessments to evaluate best practices for the classroom. These pages of the tool kit explain how to import data into Excel to make analyzing the data easier.
"Software for Qualitative Analysis"
Read pages 278–280 ("Software for Qualitative Analysis") in Research Methods in Information.
Several software tools can support you in coding your qualitative data. NVivo is one such tool and is popular in many academic institutions. Learn how software tools can facilitate your analysis of data.
"DIY Data Dashboards in Google Sheets"
Read "
DIY Data Dashboards in Google Sheets
" from Computers in Libraries
.
Learn how to use Google Sheets to organize and analyze data to better understand your research sample.
COMMON STATISTICAL SOFTWARE
Several statistical software packages are available for performing statistical analysis, namely the Statistical Package for the Social Sciences (SPSS), Minitab, SAS, R, STATA, SEM-AMOS, SEM-SmartPLS, and WarpPLS. The most popular packages for SEM are Analysis of Moment Structures (AMOS), Partial Least Squares (PLS), LISREL, SEPATH, PRELIS, SIMPLIS, MPLUS, EQS, and SAS (Hair et al., 2011; Zainudin, 2012a, 2012b). In general, there are two types of SEM (Lowry & Gaskin, 2014): variance-based SEM, such as PLS, and covariance-based SEM, such as AMOS, LISREL, EQS, and MPlus. However, this paper focuses only on three statistical software packages commonly used in social sciences research: SPSS, AMOS, and SmartPLS.
3.1 Statistical Package for the Social Sciences (SPSS)
SPSS is a statistical package designed by the IBM Corporation and widely used by researchers and academicians worldwide. This package is very user friendly, and various statistical tests can be conducted with it. It undertakes both comparison and correlational statistical tests in the context of univariate, bivariate, and multivariate analysis, for both parametric and non-parametric statistical techniques.
3.2 SmartPLS
SmartPLS is a statistical package primarily designed by a team of academic software developers in Germany (Ringle et al., 2015). This software undertakes SEM analysis using ordinary least squares estimation techniques (Hair et al., 2011; Ringle et al., 2013; Hair et al., 2014) and is widely used by researchers exploring theories.
3.3 AMOS
AMOS is a statistical package also designed by the IBM Corporation. AMOS is widely used to confirm a theory, since it uses maximum likelihood (ML) estimation techniques in SEM analysis (Byrne, 2010; Hair et al., 2010). In addition, AMOS is automatically available when the researcher purchases SPSS version 20.0 and above.
4.0 CHOOSING THE RIGHT STATISTICAL SOFTWARE FOR DATA ANALYSIS
In choosing the right statistical software for performing data analysis, researchers usually look first at their research objective. If the research objective is comparison analysis, SPSS is usually the preferred statistical package over alternatives such as Minitab, STATA, and R. This is because SPSS can easily perform both parametric and non-parametric comparison analysis. It also permits the researcher to check the assumptions of the tests, such as normality and outliers, and it enables frequency analysis to be conducted thoroughly. In the context of validating variable items, if the researcher intends to refine the items using exploratory factor analysis (EFA), SPSS is an appropriate package, since it provides more comprehensive output than other statistical software. The software also performs EFA using a number of extraction techniques, such as principal component extraction, principal axis factoring, and maximum likelihood estimation. With respect to correlation objectives, SPSS can easily perform Pearson's correlation or Spearman's rank correlation tests for examining the bivariate relationship between two targeted variables. It can also carry out multiple linear regression (MLR) analysis with well-organized regression output. For a categorical dependent variable, the researcher usually has a choice of logistic regression analysis, multinomial regression analysis, or discriminant analysis; SPSS is considered an optimal tool for performing all three. However, if the researcher intends to examine cause-and-effect relationships between a number of independent and dependent variables, SEM analysis is the preferred statistical tool. Of late, SEM has become a popular method for studying relationships among constructs.
SEM has the statistical ability to test causal relationships between constructs with multiple measurement items (Hair, Ringle, & Sarstedt, 2011; Hair, Sarstedt, Pieper, & Ringle, 2012; Lowry & Gaskin, 2014; Noorazah & Juhana, 2012). SEM is a useful statistical tool for empirically testing theories and the conceptual models of a study (Hair et al., 2011; Hair et al., 2012). Using SEM allows the researcher to determine whether the relationships among the constructs in the research framework are significant, based on the data gathered. SEM is a second-generation multivariate analysis technique that combines various techniques from the first generation of multivariate analysis (known as OLS, Ordinary Least Squares), such as factor analysis (FA), regression, and correlation (Hair et al., 2012; Lowry & Gaskin, 2014; Zainudin, 2012b). PLS-SEM is used for data analysis to test the measurement and substantive models of a study and to examine the relationships among constructs in the proposed research model. The proposed model is a nascent theoretical development derived from several theories; thus, prediction between constructs in the proposed model calls for PLS-SEM (Hair et al., 2011; Hair et al., 2012). PLS-SEM is powerful for theory testing compared with covariance-based (CB) SEM (Lowry & Gaskin, 2014). Furthermore, PLS-SEM handles complex research models more effectively and efficiently, and it does not require the goodness-of-fit (GOF) assessment that is central to CB-SEM. PLS-SEM is also frequently used in exploratory studies. Further, PLS-SEM offers flexibility in data analysis, as it can process nominal, ordinal, interval, and ratio data (Hair et al., 2011; Hair et al., 2012). Moreover, current research trends are moving towards PLS-SEM as the software of choice for analysing quantitative data (Henseler, Ringle, & Sarstedt, 2015). Other reasons for using PLS-SEM, as outlined by Hair et al. (2011) and Hair et al. (2012), arise from the restrictive assumptions of CB-SEM: it is preferred when the normality assumption is not met, when the sample size is small, when some of the variables are formative measures, or when the focus of the study is on prediction and theoretical development. Hair et al. also maintained that although PLS-SEM operates with a small sample size, it is preferable to use a larger sample to represent the population and yield more accurate model estimates. Another salient advantage of PLS-SEM is that it is able to normalise the data for further analysis.
In general, there are several reasons why PLS-SEM is used for data analysis. Among others, PLS-SEM is suitable for theory testing; it is more robust than traditional first-generation techniques; it allows researchers to test all variables simultaneously; and its sampling requirements are more flexible, as normality assumptions need not be fulfilled and it works well with small sample sizes. Although some scholars argue that PLS-SEM is less rigorous, its usage has gained popularity, especially in business research. This is due to the unique features of PLS-SEM, namely its ability to handle smaller sample sizes and to produce more robust and accurate results than CB-SEM when the assumptions of CB-SEM are not met. It is also the preferred statistical method when the nature of the research is predictive rather than confirmatory (Hair et al., 2011). Though the small sample sizes used in PLS-SEM are said to bias consistency, the differences in estimation results between the two approaches are very small, and when sample sizes are larger, the results produced by PLS-SEM are similar to those produced by CB-SEM (Lowry & Gaskin, 2014). Besides that, both statistical packages for conducting SEM analysis (i.e., AMOS and SmartPLS) have user-friendly features and clearly presented output. The key distinctive features of CB-SEM and PLS-SEM, as highlighted by Hair et al. (2011) and Hair et al. (2012), are shown in Table 1 below.

Table 1: Key Features of CB-SEM and PLS-SEM

CB-SEM                                              | PLS-SEM
Theory testing and confirmation                     | Theory prediction and development
Requires a large sample size                        | Able to operate with a small sample size
Normality assumptions must be met (restrictive)     | Normality assumptions need not be met (less restrictive)
Data are continuous (reflective)                    | Data could be formative
Confirmatory study                                  | Exploratory study

Source: Hair et al. (2011); Hair et al. (2012)

In addition to the above, PLS-SEM allows for more complex analysis when modelling latent variables, testing indirect effects and multiple moderation effects, and assessing the goodness of the proposed model (Lowry & Gaskin, 2014). Table 2 below summarises the statistical analyses and statistical software commonly used in the Social Science research field.

Table 2: Common Statistical Analyses Used in the Social Science Research Field

1. To examine the significant differences between two groups on one continuous target variable.
   Statistical theory: Univariate comparison analysis. Possible methods: Independent t-test; Mann-Whitney test. Suggested software: SPSS.
2. To measure the significant differences among more than two comparison groups on one continuous target variable.
   Statistical theory: Univariate comparison analysis. Possible methods: One-way Analysis of Variance (ANOVA); Kruskal-Wallis test. Suggested software: SPSS.
3. To measure the significant differences among more than two comparison groups on more than one continuous target variable.
   Statistical theory: Multivariate comparison analysis. Possible method: Multivariate Analysis of Variance (MANOVA). Suggested software: SPSS.
4. To determine the significant bivariate relationship between two continuous variables.
   Statistical theory: Univariate correlation analysis. Possible methods: Pearson's Correlation; Spearman's Rank Correlation. Suggested software: SPSS.
5. To examine the cause-and-effect relationship between a set of independent variables and one continuous dependent variable.
   Statistical theory: Multivariate correlation analysis. Possible method: Multiple Linear Regression (MLR). Suggested software: SPSS.
6. To examine the cause-and-effect relationship between a set of independent variables that includes a categorical variable and one categorical dependent variable.
   Statistical theory: Multivariate correlation analysis. Possible methods: Logistic Regression (a); Multinomial Regression (b). Suggested software: SPSS.
7. To examine the cause-and-effect relationship between a set of independent variables with no categorical variable and one categorical dependent variable.
   Statistical theory: Multivariate correlation analysis. Possible method: Discriminant analysis. Suggested software: SPSS.
8. To examine cause-and-effect relationships between a number of independent and dependent variables, with priority given to confirming or rejecting theories.
   Statistical theory: Multivariate correlation analysis. Possible method: Covariance-based Structural Equation Modelling (CB-SEM). Suggested software: AMOS.
9. To examine cause-and-effect relationships between a number of independent and dependent variables, with priority given to exploring theories.
   Statistical theory: Multivariate correlation analysis. Possible method: Variance-based Structural Equation Modelling (VB-SEM). Suggested software: SmartPLS.
10. To refine, reconstruct, or confirm the structure of variables that share a common variance.
    Statistical theory: Multivariate correlation analysis. Possible method: Exploratory Factor Analysis (EFA). Suggested software: SPSS.

Note: (a) can be used if the dependent variable has two categories; (b) can be used if the dependent variable has more than two categories.

5.0 CONCLUSION

This paper aims to provide practical guidelines that can help Social Sciences researchers choose the best statistical software for conducting effective statistical testing in their research. In conducting statistical testing, the researcher normally uses at least two types of statistical software for a complete statistical analysis procedure. For example, for preliminary data analysis (i.e., checking missing values, checking data distributions, etc.), general statistical tools such as SPSS are a good choice. The researcher can then decide whether to use AMOS or SmartPLS for testing the research hypotheses, as the majority of Social Sciences researchers are currently keen to examine cause-and-effect relationships between a number of independent and dependent variables within one theory. Nevertheless, it is very important to note that the selection of the best statistical software and the appropriate statistical analysis depends very much on the research objectives and research questions developed by the researcher; this is a prerequisite for any statistical analysis. Choosing the right statistical analysis helps researchers derive accurate and robust results that explain the achievement of the intended research objectives.
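As a rough illustration of how a selection guide like Table 2 can be operationalised, the hypothetical helper below maps a research objective's characteristics to a suggested method and software. The category labels, methods, and software names come from the table; the function and the objective keys are our own illustrative sketch, not part of the paper:

```python
# Hypothetical sketch: encode Table 2's objective-to-analysis mapping as a
# small lookup. The methods and software come from the table; the function
# and objective keys are illustrative only.
def suggest_analysis(objective: str) -> tuple[str, str]:
    """Return (possible method, suggested software) for a research objective."""
    table2 = {
        "compare two groups, one continuous DV":
            ("Independent t-test or Mann-Whitney test", "SPSS"),
        "compare >2 groups, one continuous DV":
            ("One-way ANOVA or Kruskal-Wallis test", "SPSS"),
        "compare >2 groups, several continuous DVs":
            ("MANOVA", "SPSS"),
        "bivariate relationship, two continuous variables":
            ("Pearson's or Spearman's Rank Correlation", "SPSS"),
        "several IVs, one continuous DV":
            ("Multiple Linear Regression (MLR)", "SPSS"),
        "IVs include categorical, categorical DV":
            ("Logistic or Multinomial Regression", "SPSS"),
        "no categorical IVs, categorical DV":
            ("Discriminant analysis", "SPSS"),
        "confirm theory, many IVs and DVs":
            ("Covariance-based SEM (CB-SEM)", "AMOS"),
        "explore theory, many IVs and DVs":
            ("Variance-based SEM (VB-SEM)", "SmartPLS"),
        "refine or confirm variable structure":
            ("Exploratory Factor Analysis (EFA)", "SPSS"),
    }
    return table2[objective]

print(suggest_analysis("explore theory, many IVs and DVs"))
# → ('Variance-based SEM (VB-SEM)', 'SmartPLS')
```

The lookup makes the paper's central point concrete: the research objective, not the software, is the starting point of the selection.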
In this module, you will learn about drawing and presenting conclusions from data analysis so you will be better equipped to make key decisions regarding instructional strategies. Educators need to be prepared to make sense of the results from the data they have collected and analyzed to
gain insight into the problems that impede student learning. After engaging with this module, you should be able to do the following:
Derive conclusions from data analysis.
Explain the importance of representing data in a valid and transparent manner.
Explain how technology can facilitate the creation of visual representations of data.
Create visual representations of data.
"Data Fluency: Exploring and Describing Data"
The course "Data Fluency: Exploring and Describing Data" from LinkedIn Learning
focuses on the fundamentals of data fluency and ways to make
meaning from the data you collect. In the previous module, you learned how to apply statistics to your data to gain insights. In this lesson, you will learn how to draw conclusions from that data analysis. The following videos will help you get started:
"
Assess the Meaning of Data
" (4:05)
"
Assess the Ambiguities in Data
" (4:22) "Making Sense of All Your Data"
Read "
Making Sense of All Your Data
" from Principal Leadership
. The authors of this article have worked with many high schools to improve their use of data to enhance school reform initiatives. They also conducted a case study for the Northeast and Islands Regional Educational Laboratory at Brown University that examined the factors and conditions that either facilitated or impeded data use in five urban secondary schools. As a result, they have identified three important practices for developing data literacy among staff members and establishing purposeful data use. As you read the article, consider these three practices and how they impact teaching and learning in your classroom. Chapter 9: "Inferential Statistics and Data Interpretation"
Read pages 138–141 ("Data Interpretation") of Chapter 9 in STEM Student Research Handbook.
This brief section explains the importance of writing down evaluations about raw data when entering them into charts and graphs and the ways you can use these evaluations to write the analysis and conclusions portion of your research paper. You have derived conclusions after analyzing data. Now what? You will need to explain what you have concluded in a clear and concise manner to those who will both act on and respond to your conclusions. One way to do this is through data visualization: using images to convey information. In this lesson, you will learn how to ensure validity
and transparency in data visualization. When it comes to data visualizations, validity refers to the accuracy of the data as it is represented. It is your responsibility to create responsible, informative data visuals to help you make key decisions regarding instructional strategies. Transparency refers to the extent to which the data being reported are accurate and can be traced to an official source. Understanding how to ensure validity and transparency can help you avoid compromising the integrity of your work.
"Data and Visualization Ethics"
Watch "
Data and Visualization Ethics
" (11:58) from Coursera.
In this video, you will learn the importance of ensuring that your data visualizations accurately reflect the data you collected. Since people often have ideas for what they want the data to say, they might create visualizations that skew information toward their biases. By learning how to carefully create and analyze data visualizations, you will ensure they are accurate and valid. "Data Visualization Effectiveness Profile"
Read "
Data Visualization Effectiveness
Profile
" from Perceptual Edge
.
In this article, the authors explain the seven criteria for an effective data visualization and divide those criteria into two general categories: the degree to which a visualization is informative and the degree to which it produces a useful emotional response. These criteria will be helpful when you create data visualizations to represent information for instructional strategies.
"Keep Scales Consistent"
Watch "
Keep Scales Consistent
" (2:10) from LinkedIn Learning.
In this brief video, the author describes the essentials of creating accurate and compelling charts and graphs while avoiding misrepresentation of
data. After watching the video, consider the challenges you may encounter when creating charts and graphs and how you can address them.
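One common pitfall of the kind this video warns about, a truncated axis that exaggerates small differences, can be avoided programmatically. The following is our own hedged sketch (not from the course) using matplotlib, with invented class averages, that pins a bar chart's y-axis to zero so bar heights stay proportional:

```python
# Illustrative sketch (not from the video): keep a bar chart's y-axis
# anchored at zero so small differences are not visually exaggerated.
# The class names and scores are invented for demonstration.
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

scores = {"Class A": 81, "Class B": 84, "Class C": 83}

fig, ax = plt.subplots()
ax.bar(list(scores), list(scores.values()))
ax.set_ylim(bottom=0)  # start the scale at zero, not at ~80
ax.set_ylabel("Average score")
ax.set_title("Class averages (consistent scale)")
fig.savefig("averages.png")
```

Starting the scale at zero keeps a 3-point gap looking like a 3-point gap; letting the axis auto-start near 80 would make Class B's bar look several times taller than Class A's.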
"Examples of Visualization"
Watch "
Examples of Visualizations
" (3:50) from LinkedIn Learning.
Data are used to convey information, but how can you use images to enhance the information you want to present to others? This video explains creating data visualizations as a process and gives you tips on how to consider integrity, meaning, simplicity, relevance, and beauty to inform your audience.
"Presenting and Using Student Learning Outcomes Assessment Results"
Read pages 23–25 ("Use of Evidence of Student Learning") in Making Student Learning Evidence Transparent: The State of the Art.
In this section of the report, you will learn about the many ways that institutions that were participating in a study on data transparency initiatives reported their assessment data on their websites. You will also learn about the implications associated with their reporting. The report
concluded that participation in national transparency initiatives appears to encourage more transparency in the reporting of results and of the use of assessment information, which influences the kind of information institutions collect and use to make decisions.
Chapter 1: "The Context of Data Visualization"
Read pages 16–17 ("Defining Data Visualization") of Chapter 1 in Data Visualization: A Successful Design Process.
This section defines data visualization
and explains why it is important to consider it and respect the needs of the reader when you design data visuals. The author explains four key concepts of data visualization.
Chapter 8: "Graphical Representations"
Read Chapter 8 in STEM Student Research Handbook.
This chapter will familiarize you with the variety of graphical representations that you might use to present quantitative and qualitative data. You
will also learn how to determine which graphical representation is appropriate.
"Data Visualization: Best Practices"
Watch the following videos from the course "Data Visualization: Best Practices" from LinkedIn Learning.
"
When and How to Use a Pie Chart
" (5:40)
"
When and How to Use a Bar Graph
" (3:48)
"
When and How to Use a Line Graph
" (2:25)
There are a variety of chart and graph types that you can use to display data in a way that makes your analysis clear. In these videos you will learn when it is appropriate to use certain charts or graphs as well as best practices to ensure your visual representation of quantitative variables, positive correlation, and negative correlation are clear and accurate.
"Data Journalism: How to Create Compelling Content from Data"
Read "
Data Journalism: How to Create Compelling Content from Data
" from EContent
.
This article emphasizes how data visualization is a form of storytelling and stresses the importance of making strong visualizations that engage readers with your data. Although the article focuses on journalistic data, pay attention to the strategies and questions you should ask yourself while reflecting on your data and determining how you want to display the data to make your analysis clear.
Throughout this course you have learned about the use of technology to both collect and analyze data. In this lesson, you will explore how technology can support the visual representation of your analysis results. This information will be helpful as you consider how to effectively communicate to stakeholders what you have determined through your analysis and how your findings support your decisions about instructional
strategies.
"You Ask, I Answer: Best Simple Data Visualization Tools?"
Watch "You Ask, I Answer: Best Simple Data Visualization Tools?
" (7:16) from Christopher Penn.
In this video, Christopher Penn highlights several tools that can help you create visually appealing images to display your data. He starts with free
tools such as Google Data GIF Maker and Google Data Studio and highlights more advanced and fee-based tools like Tableau or Snagit. "Visualization Tools for Turning Information into Insights"
Read "
Visualization Tools for Turning Information into Insights
" from Online
. This article highlights several free and web-based tools that can help you create meaningful data visualizations. It also includes information on where you can view and analyze data visualizations from existing sources like the World Bank or searches on Google Insights. "Infographic Tips and Tools"
Read "
Infographic Tips and Tools
" from Talent Development. This article will provide you with a clear understanding about best practices for creating or sharing information from infographics. It offers information on tools for making infographics online and guidance to ensure the infographics that you create are tailored to your purpose and intended audience. "Excel Data Visualization: Mastering 20+ Charts and Graphs"
Explore the course "
Excel Data Visualization: Mastering 20+ Charts and Graphs
" from LinkedIn Learning
. The first part of this course reiterates key concepts about best practices for creating effective visualizations, which have been covered throughout this module. To learn more, focus on the following sections of the course:
"
Chart Formatting Options
" (5:27)
"
Creating, Modifying, and Applying Templates
" (4:06)
If you need more support, you can view selected videos in Part 3 of the course above to learn exactly how to create a variety of graph and chart types in Excel.
You reflect on the work you have done in the PLC and remember that in the scholarly literature review, you learned that standards should drive the assessments and the results of the assessments should drive the instruction. You go back to the book you summarized, Assessment and Student Success in a Differentiated Classroom
. There are key points within that book that you and the team can use to emphasize the findings of the PLC. With all the data that the team gathered, the administration must make decisions. They have all the data and know three things:
1. The teachers are not following the curriculum.
2. The curriculum has gaps relative to the standards.
3. Teachers are not making special education modifications according to the students' IEPs.

Question 1
The administrators at Northern Oaks Middle School (NOMS) used the decision-making cycle and need to make some decisions. Which decisions are appropriate?
So far you have learned how to identify a problem, select data and scholarly sources to investigate the problem, collect and analyze data to find the causes of the problem, and derive conclusions from data analysis. In this lesson, you will continue to work with data as you learn to make decisions based on the results of your analysis. This lesson will provide an overview of data-driven decision-making and highlight its cyclical nature.
"Data-Informed Practice"
Review "
Data Informed Decision Making Cycle
" from the Association of Independent Schools.
This infographic explains parts of the decision-making cycle and applicable questions. This is one of many decision-making cycles that you have encountered or may encounter throughout your career. While many of these cycles are similar, they are all iterative. "Using What You Know to Plan"
Read "
Using What You Know to Plan.
" This article examines various ways that data-driven decision-making is used in early education settings. The author discusses the use of the formal, evidence-based process by teachers, benefits of the use of child and family data, and the role of teachers in helping children.
"Becoming a Data-Driven Decision-Making Organization"
Read "
Becoming a Data-Driven Decision-Making Organization
."
This article focuses on reasons why not-for-profit organizations should adopt data-driven decision-making strategies to manage their operations.
The authors discuss seven challenges that these organizations face when they make the transition to data-driven decision-making. The article offers solutions to each one.
"Defining Decision-Making"
Watch "
Defining Decision-Making
" (2:05) from LinkedIn Learning.
In this video, you will learn the steps to the decision-making process so you can be better prepared to apply your decisions to improving instructional strategies.
Students who attend vocational classes at the district career center are consistently late for the third period classes at your school. You are on a PLC that is tasked with solving the problem: these students arrive 20 minutes late to each class period, and teachers find it challenging to deliver instruction in a way that keeps the rest of the class engaged while also meeting these students' needs. You decide to use the iterative data-driven decision-making process to determine how to solve the problem.
You know the problem is that the bell schedule at the career center is not aligned with the schedule at your school, and there is nothing you can do to change that. Next, you look at the current student achievement levels of the vocational students in the third period classes, and you review articles on off-site vocational high school classes to see how others have managed the problem. The PLC creates a spreadsheet of the students' current grades and realizes that the career center students' percentages in their third period classes average 18 percent lower than those of the other students in those classes. The group notices that two teachers' career center students are not performing lower than the rest. The team decides to ask those teachers what they are doing to support those students. Ms. Connor and Mr. Long come to the next PLC meeting and explain that they have flipped the instruction in their classes so students do online learning at home and then complete practice
assignments in class. The team decides to have Ms. Connor and Mr. Long provide professional development to the other third period teachers so
they can learn to flip instruction too.
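The grade comparison the PLC performed can be sketched in a few lines of plain Python. The student records below are invented for illustration and only mimic the kind of gap the team found; none of the numbers come from the scenario itself:

```python
# Illustrative sketch with made-up grades: compare career center students'
# third-period averages against their classmates', as the PLC did in its
# spreadsheet. All records here are invented for demonstration.
from statistics import mean

grades = [
    {"student": "s1", "career_center": True,  "grade": 61},
    {"student": "s2", "career_center": True,  "grade": 58},
    {"student": "s3", "career_center": False, "grade": 80},
    {"student": "s4", "career_center": False, "grade": 77},
    {"student": "s5", "career_center": False, "grade": 82},
]

career_avg = mean(g["grade"] for g in grades if g["career_center"])
other_avg = mean(g["grade"] for g in grades if not g["career_center"])
gap = other_avg - career_avg

print(f"Career center average: {career_avg:.1f}")
print(f"Other students average: {other_avg:.1f}")
print(f"Gap: {gap:.1f} points")
```

Computing the gap per class in the same way is what let the PLC spot the two classes where career center students were not underperforming.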
What needs to be included for this scenario to represent an iterative decision-making cycle?
After Ms. Connor and Mr. Long demonstrate their flipped classroom strategies to the other third period teachers, the PLC establishes regular times to compare the career center students' grades to their classmates' to see if the flipped classrooms are being implemented and, if they are, if they are effective in improving students’ grades.
Correct! The team did an excellent job of working through the problem-solving steps when they identified the problem, selected data, analyzed it, made inferences, made a decision, and created an action plan; however, the team forgot a very important part of the data-driven decision-making cycle because they did not make a plan for how the results would be monitored. This added step completes the iterative data-driven decision-making cycle.
“Data-Based Decision-Making: Importance and Overview”
Read "
Data-Based Decision-Making: Importance and Overview
" at MoEdu-SAIL.
In this short introduction, the authors provide information on the ways that data can help teachers make key decisions. Included are two videos that provide examples of the benefits of using data to influence choices that impact student learning.
“Data-Based Decision-Making in Practice: Step 2”
Read "
Data-Based Decision-Making in Practice: Step 2
" at MoEdu-SAIL.
This article first asks, "How are you currently analyzing the data that you collect? How does that help you prioritize action steps?" Through this source, you will learn how to work with others to focus on students' strengths and weaknesses. This leads to a focused dialogue on specific elements of proficiency.
“Four Practical Tips for Using Data to Inform Planning and Decision-Making”
Read "
Four Practical Tips for Using Data to Inform Planning and Decision-Making
" at ASCD.
In this reading, you will explore ways to improve data use in schools to inform planning and decision-making. After reading this article, think about which tips are most appealing and what it would take to implement them in your instructional setting.
“Embracing Data-Informed Decision-Making”
Watch "
Embracing Data-Informed Decision-Making
" (1:50) from Ellucian.
Find out how Montgomery County Community College depends on data analytics and a campus-wide collaborative effort between faculty and staff to break down departmental silos to make targeted improvements. What can you learn from this college and their success? Would it be beneficial to share their experience with your colleagues?
“Data Driven versus Data Informed”
Watch "
Data Driven versus Data Informed
" (2:19) from LinkedIn Learning.
This video emphasizes the importance of being data informed and not data driven. The author suggests relying on intuition as well as the data available to you.
In most educational settings, policies and procedures change to address new challenges and desired outcomes. Your role will depend on your ability to remain flexible while striving to make continuous improvements in instruction. In this lesson, you will learn about how to use data to make instructional strategy decisions that will impact current and future programs and initiatives.
Chapter 1: "Why Data Matter"
Read pages 7–11 ("Computer Software and Data-Driven Decision-Making" to the end of "Homegrown Efforts") of Chapter 1 in Using Data to Improve Schools: What's Working by the American Association of School Administrators.
This reading examines the ability of data to provide quantifiable proof. It also examines how a person can determine which data to collect based on what is important to know about issues tied to district goals and why data-driven school improvement helps stakeholders know whether a district and its schools are realizing their vision and purpose.
“Data and Program Improvement”
Read "
Data and Program Improvement
" from Techniques: Connecting Education & Careers.
This article explains how evidence-based decision-making and data system innovation have enabled education programs to leverage data to improve programs and enhance student outcomes. As a result, data-driven decision-making is now encouraged on the local, state, and federal levels.
Chapter 1: "Why Data Matter"
Read pages 5–7 ("Challenging Assumptions" to the end of "Asking the Right Questions") of Chapter 1 in Using Data to Improve Schools: What's Working by the American Association of School Administrators.
In this brief excerpt, the authors discuss using data to provide quantifiable proof. They emphasize that determining which data to collect is based
largely on first figuring out what is important to know about various issues such as student performance, teacher quality, and parent and community satisfaction. Chapter 2: "Using Data to Make Smart Decisions"
Read pages 20–21 ("Walking Through Data Collection") of Chapter 2 in Using Data to Improve Schools: What's Working.
In this section, the authors describe how data are used to drive continuous improvement at the Palisades School District in Upper Bucks County, PA. The results of teacher and administrative efforts are seen in improved test scores and the selection of postsecondary schools that graduates can now enter.
"Decisions, Decisions, Decisions: Using Data to Make Instructional Decisions for Struggling Readers"
Read "
Decisions, Decisions, Decisions: Using Data to Make Instructional Decisions for Struggling Readers
" from Teaching Exceptional Children
.
This article examines a data-based decision-making process for individualizing instruction with student-level data to better understand students' persistent academic difficulties. The authors discuss how data-based decision-making can improve student outcomes through growth tracking, which in turn informs instruction, and they describe data collection tools that teachers can use with student data.
"Module 4: Using Data to Inform Instruction"
Watch "
Module 4: Using Data to Inform Instruction
" (2:58) from Michigan Virtual.
This video explains the use of data to design and redesign pathways and strategies for all students to reach their learning goals. The video emphasizes the constant use of data and quick decision-making to positively impact student outcomes.
"Your Child's Education, Explained: How the Heck Do Teachers Use 'Data' to Inform Instruction?"
Watch "
Your Child's Education, Explained: How the Heck Do Teachers Use 'Data' to Inform Instruction?
" (1:31) from LinkedIn Learning.
Ever wonder how teachers use data to understand how students are progressing? This video explains the way data support teachers' efforts to make necessary adjustments to instructional strategies as they strive to increase student success.
"What Is Student Data?"
Watch "
What Is Student Data?
" (2:59) from Data Quality Campaign.
After watching this video, think about how you can leverage student data to adjust instructional practices to improve student learning experiences. The video defines student data and explains the various types of student data that teachers can use to improve instruction.
School Report Cards
There are many instances in educational organizations when results from data analysis are made public. In this lesson, you will learn how results from data analysis can be displayed to promote consistency and transparency. Many states post school report cards for their public schools. Here are a couple of examples you may view:
New Jersey School Performance Reports
Texas Education Agency School Report Cards
"Ethical and Appropriate Data Use Requires Data Literacy"
Read "
Ethical and Appropriate Data Use Requires Data Literacy
" from Phi Delta Kappan
.
As you read, consider ways that data can be used and disseminated in a responsible and ethical way. What could impede your efforts to do so? Which steps could you take to prepare for such challenges?
"The Ethics of Data—Personal Data and Privacy"
Watch "
The Ethics of Data—Personal Data and Privacy
" (6:14) from BBC Research & Development.
Learn the difference between personal and open data through this video. It includes a series of interviews with people from varying industries who offer their opinions on data, their importance, and ethical practices for their use.
A teacher goes to the school's in-house database to find a student's academic background and discipline records. She finds the data on the student and would like to save them on her personal computer for future reference. Which ethical consideration should she keep in mind to maintain the student's privacy?