EDSP 725 - Discussion Board 2

EDSP 725 – Discussion Board #2 (Jan. 21, 2024)

When considering the information presented by the authors and researchers in our readings this week, I am struck by how thin the threads of commonality in special education research are, and how difficult they are to find. While Graham (2009) offers an overview of the need for quality indicators (QIs) in research, as well as in systematic reviews of the research literature, it almost appears as if the authors are no closer to a definitive list of quality indicators than they were four years prior. Cook et al. (2008a) and Talbott et al. (2018) assert similar needs for QIs a decade apart, which suggests that establishing QIs for special education is a difficult, if not impossible, task. Researchers have made progress, however, and Cook et al. (2008a, 2008b, 2009) make an effort to summarize the many QI parameters and tools used to define evidence-based practices in fields similar to special education, with the goal of transferring those tools to special education. While different authors offered different numbers of indicators and different means of applying them, Cook et al. (2009) narrowed the methods to four main categories, asserting that researchers and teachers need to examine studies carefully to determine whether they are of high quality and produce consistent results. Research design plays a key role in obtaining valid and reliable information in special education, most often through an experimental group design with well-structured control and intervention groups (Cook et al., 2008b). The quality of the research must also be of the highest level, with many studies converging on the same evidence, which adds confidence to the findings. Methodological quality was a consistent thread among all of this week's authors, who note that implementation fidelity and control of variables are critical to high-quality research and reviews (Cook et al., 2008b, 2009; Talbott et al., 2018). Finally, there is the magnitude of effect: what methods and tools are used to measure the impact of an intervention, and how strongly does the intervention affect student performance? These two questions must be answered to determine the validity of the study or review being conducted.

Talbott et al. (2018) raised one point I had not previously considered: not all research is transparent or conducted with consistency. It seems obvious, but I have always assumed that every researcher makes every effort to be as clear and precise as possible; otherwise the results are not very useful, and much time is wasted conducting research no one can use. Perhaps researchers have other reasons for choosing particular methods or analyzing results in a particular way, something I will have to consider carefully when conducting my own reviews. I am curious whether anyone has tried using these QIs to determine the reliability of a study. If so, is it as challenging as it sounds?

Additionally, I had never considered that most special education teachers might be jaded toward research results. I thought perhaps that was unique to my own team because of the environment we work in daily. Apparently, this is a common problem for special education teachers, and one that will be hard to address and change (Talbott et al., 2018). Teachers will have to continue evaluating the research and, combined with their own experience and wisdom, make the best decisions for their students, day to day and sometimes minute by minute. Experience can be the best teacher.

Paul reminds Timothy in 2 Timothy 2:15-16, "Do your best to present yourself to God as one approved, a worker who has no need to be ashamed, rightly handling the word of truth. But avoid irreverent babble, for it will lead people into more and more ungodliness" (English Standard Bible, 2016). Just as with the scriptures, teachers must be diligent with research and best practices, discerning the difference between legitimate information and unsupported research to avoid doing more harm than good, as Cook et al. (2008a) note.
As if the job were not difficult enough, there is always more homework for the teacher than for the student.

References

Cook, B. G., Landrum, T. J., Cook, L., & Tankersley, M. (2008a). Introduction to the special issue: Evidence-based practices in special education. Intervention in School and Clinic, 44(2), 67-68. https://doi.org/10.1177/1053451208321599

Cook, B. G., Tankersley, M., Cook, L., & Landrum, T. J. (2008b). Evidence-based practices in special education: Some practical considerations. Intervention in School and Clinic, 44(2), 69-75. https://doi.org/10.1177/1053451208321452

Cook, B. G., Tankersley, M., & Landrum, T. J. (2009). Determining evidence-based practices in special education. Exceptional Children, 75(3), 365-383.

English Standard Bible. (2016). https://esv.literalword.com/ (Original work published 2001)

Graham, S. (2009). Exceptional children: Preview. Exceptional Children, 75(3), 262.

Talbott, E., Maggin, D. M., Van Acker, E. Y., & Kumm, S. (2018). Quality indicators for reviews of research in special education. Exceptionality, 26(4), 245-265. https://doi.org/10.1080/09362835.2017.1283625