Assessment Terminology
XXXXXXXX
Educational Leadership Department, Lamar University
SPED 5302 - X32-13
Tests, Measurement, and Evaluation
Dr. Michele Marjason
November 12, 2023
Introduction
The following sixty terms are associated with tests, measurement, and
evaluation. Each is defined and described in language that a person without a
background in education, assessment, or special education can understand.
Acronyms, visual representations, and examples are included where appropriate.
1. Age equivalent
Age equivalent is a score that compares the performance of individuals of the
same age with one another. (Pierangelo & Giuliani, 2017, p. 309)
For example, if your child is seven years old and he scored a 70 on a test, which
is the average score for 7-year-olds, then your child's age equivalent score would be 7.
2. Alternate Forms Reliability
Alternate forms reliability occurs when an individual or a group takes two versions
of the same test at different times, and the results are compared to see if the tests are
trustworthy. (Pierangelo & Giuliani, 2017, p. 309) For example, if your child and his
classmates take a mid-year math test and then take a different version of the same test
at the end of the school year, the results are compared to see if the tests are
dependable.
3. Assessment
An assessment is an evaluation to discover what someone knows, what they
have learned, and their strengths and weaknesses. There are formal and informal
assessments. (Pierangelo & Giuliani, 2017, pp. 309-310)
For example, your child takes a weekly spelling assessment, which tells the teacher
how many words your child has mastered and how many he needs to keep practicing.
4. Chronological Age
It is the student’s actual age when an assessment is administered. (Pierangelo &
Giuliani, 2017, p. 311)
For example, if your child takes an evaluation today, November 10th, 2023, and his
date of birth is May 17th, 2003, then his chronological age would be twenty years,
five months, and twenty-four days, often rounded to twenty years and six months.
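To make the calculation concrete, here is a minimal Python sketch; the function name and the 30-day borrowing convention are illustrative assumptions, not a required method.

```python
from datetime import date

def chronological_age(birth: date, test: date):
    """Return (years, months, days) from birth date to test date,
    using a borrow method like the one used on many test protocols."""
    years = test.year - birth.year
    months = test.month - birth.month
    days = test.day - birth.day
    if days < 0:       # borrow one month; a 30-day month is a common convention
        days += 30
        months -= 1
    if months < 0:     # borrow one year
        months += 12
        years -= 1
    return years, months, days

print(chronological_age(date(2003, 5, 17), date(2023, 11, 10)))
# -> (20, 5, 23); many protocols round 15 or more days up to the next
#    month, giving 20 years, 6 months
```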
5. Concurrent Validity
Concurrent validity refers to how accurately a person's current performance estimates
that person's performance on a criterion measure, such as a teacher-made test on a unit
of study, administered at approximately the same time. (Pierangelo & Giuliani, 2017, p. 311)
For example, if your child has been getting perfect marks on his homework and class
activities in his unit study about plants, then we may estimate that he will earn an "A" on
the Friday plant assessment created by his teacher's team.
6. Construct Validity
It is a measure used to validate tests. It determines how well a test measures
what it is supposed to measure by comparing the test in question to other tests that
measure similar concepts or qualities and seeing how strongly they correlate
(Pierangelo & Giuliani, 2017, p. 311). For example, if your child takes the WJ-IV
Achievement test and then also takes the WIAT-III test of achievement, which measures
similar concepts, the two sets of scores are compared to see how closely they are related.
7. Content Validity
It tells us whether each item in a test represents what is being assessed. (Pierangelo &
Giuliani, 2017, p. 312) For example, your child takes a test that claims to assess his
knowledge of the names and sounds of the English letters. One question asks him to
listen to the word dog, presents four letters on a page, including the letter "d," and asks
him to choose the letter the word dog starts with.
8. Content-Referenced Test
A content-referenced test measures whether students have mastered the
specific skills taught. Every question comes from the learning objectives; in other words,
how well a student does on a content-referenced test tells whether the student has
mastered the material. (Pierangelo & Giuliani, 2017, p. 312) For example, your child has
been learning about the planets, and one of the learning objectives says that your child
will name the planets in our solar system. One of the questions on the test then asks
your child to name the first five planets in our solar system.
9. Convergent Validity
Convergent validity tests whether two measures that are expected to be related
are related. (Pierangelo & Giuliani, 2017, p. 312)
For example, your child is assessed on both his behavior and his learning of a concept.
The behavior measure shows that he is focused and listening attentively, and your child
earned an A on the content test. These two measures are expected to be related: good
focus contributed to his excellent performance.
10. Correlation
Correlation explains the relationship that exists between two variables or
parameters that can be measured numerically. (Pierangelo & Giuliani, 2017, p. 312)
For example, a kindergartener takes a test on sight words and reads ten words
correctly, while a second grader takes the same sight words test and reads 22.
The correlation describes the relationship between age and the number of
sight words the children knew. We expected the second grader to know more sight
words than the kindergartener; that is, we expect that an increase in age will come
with an increase in knowledge.
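As an illustration, here is a minimal Python sketch that computes the correlation between age and sight words read; the ages and word counts are made-up data, and the correlation function requires Python 3.10 or later.

```python
from statistics import correlation  # available in Python 3.10+

ages        = [5, 6, 7, 8, 9]       # students' ages in years
sight_words = [10, 15, 22, 30, 38]  # sight words each student read correctly

r = correlation(ages, sight_words)
print(f"r = {r:.2f}")  # a value near +1 means a strong positive relationship
```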
11. Criterion-Referenced Test
Criterion-referenced tests, or CRTs, are created by the teacher, the school, or a
publisher. These creators decide what questions go on the tests and what counts as an
acceptable mastery score. Therefore, CRTs are graded against the level of mastery the
test's creators determine. (Pierangelo & Giuliani, 2017, p. 312) For example, your child
takes a weekly spelling test on a specific word family that the teacher created based on
the week's learning objective. The teacher also decided that spelling nine out of ten
words is an acceptable level of mastery of the weekly learning objective.
12. Criterion-Related Validity
Criterion-related validity refers to a technique for assessing if a test is valid by
comparing its outcome or scores with another well-known or established criterion test
that measures the same skills or standards. (Pierangelo & Giuliani, 2017, p. 312)
For example, your child takes a criterion math test created by the teacher that measures
math operations, specifically addition word problems. The following week, your
child takes a criterion math test created by a publisher that measures the same skills.
The scores are then compared for validity and reliability.
13. Curriculum-Based Assessment
Curriculum-based assessment, also known as CBA, is an assessment drawn directly
from the curriculum. It records the student's performance on skills in the core
language and math areas of the curriculum, and its results are used to develop IEP
goals and instruction. (Pierangelo & Giuliani, 2017, p. 312) For example, your child
takes a comprehension assessment from the publisher's adopted curriculum, and the
areas of improvement it reveals help set goals and follow up on instruction.
14. Curriculum-Based Measurement
Curriculum-based measurement, also known as CBM, evaluates how successful
instruction is, and its results guide educators’ instructional changes to develop better
teaching methods that consequently will improve student achievement. (Pierangelo &
Giuliani, 2017, p. 312)
For example, your child takes a comprehension assessment from the publisher's
adopted curriculum, and the areas of improvement it reveals help the teacher review
her instructional methods and make instructional changes to improve the child's
understanding.
15. Deciles
Deciles sort large amounts of quantitative or numeric information into ten equal
groups, ranking the values from highest to lowest or vice versa. Deciles divide the
data into groups that are easier to analyze and measure. (Pierangelo & Giuliani,
2017, p. 313) For example, if your child's test score falls in the seventh decile, his
score is as high as or higher than roughly 70 percent of all the scores.
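A minimal Python sketch, using made-up scores, shows how the nine cut points that split data into deciles can be computed:

```python
from statistics import quantiles

scores = [55, 60, 62, 65, 68, 70, 73, 75, 78, 80,
          82, 84, 85, 87, 88, 90, 92, 94, 96, 99]

# nine cut points dividing the scores into ten equal groups (deciles)
cut_points = quantiles(scores, n=10)
print(cut_points)
```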
16. Discriminant Validity
Discriminant validity tells whether two measures that are expected to be
unrelated are, in fact, unrelated. (Pierangelo & Giuliani, 2017, p. 313)
For example, scores on your child's math test are expected to show a low correlation
with scores on a social-emotional measure, since the two assess unrelated qualities.
17. Dynamic Assessment
Dynamic assessment focuses on and compares a student’s current and past
learning and performance. (Pierangelo & Giuliani, 2017, p. 313)
For example, your child's teacher has been monitoring and comparing your child's
math performance before and after providing some support through cues and prompts.
18. Ecological Assessment
Ecological assessment is a way to assess a child by observing him in his natural
environment. (Pierangelo & Giuliani, 2017, p. 314)
For example, your child's teacher observes your child in the different environments in
which he interacts and discovers that areas like the gym and the cafeteria agitate him.
This data will guide the teacher in developing accommodations for your child.
19. Grade Equivalent
The grade equivalent is a test score that compares children’s performances in
the same grade with one another. (Pierangelo & Giuliani, 2017, p. 315)
For example, your child gets a grade equivalent score of 3.6, which means that your
child is performing at the level of the average student in the 3rd grade, 6th month.
20. Informal Reading Inventory
Informal reading inventory, also known as IRI, is a tool created either by a
publisher or a teacher. It is used first for diagnosing students' reading weaknesses,
then for assessing students' progress, and lastly for planning interventions for the
students. (Pierangelo & Giuliani, 2017, p. 317) For example, your child's teacher
administered an IRI bought from a publisher at the beginning of the year. The data
gathered from this assessment guided her instruction, especially in developing the
interventions for your child. She monitored your child's progress for the following nine
weeks and administered the IRI again to compare data and assess your child's progress.
21. Instructional Planning
Instructional planning is when a teacher uses data collected from her students'
evaluations, interprets this data, and identifies her students' needs and strengths to
guide her instructional planning. The teacher then develops instructional strategies that
address the students' needs so the students can successfully access the curriculum.
(Pierangelo & Giuliani, 2017, p. 317) For example, your child's teacher receives data
from every formal and informal assessment your child takes. Then, she uses that data
to plan her classroom instruction so that she addresses your child's needs.
22. Interrater Reliability
Interrater reliability is when two or more evaluators or examiners independently
observe children's specific behaviors in their environment. They record and report
their observations; then this information is combined and analyzed to address the
child's needs. (Pierangelo & Giuliani, 2017, p. 317) For example, your child has some
behavioral difficulties, and two evaluators independently observe and record your
child's behavior in his natural environment. They each report their observations, and
their data is combined and analyzed to better understand and address your child's needs.
23. Interval Scale of Measurement
The interval scale of measurement is a numerical scale in which the distance between
any two adjacent points is equal and the zero point is arbitrary. (Pierangelo & Giuliani,
2017, p. 317) For example, the school calendar is an interval scale: your child's teacher
can choose two different dates during the first nine weeks of school to observe your
child, and the distance between those dates is measured in equal units (days) from an
arbitrary starting point.
24. Learning Styles Assessment
Learning styles assessment tries to capture the ways and elements that guide a
person’s learning. (Pierangelo & Giuliani, 2017, p. 317)
For example, your child's teacher is trying to discover how your child learns best, and a
learning styles assessment would help by answering questions like: Is this child a visual
learner or an auditory learner?
25. Mean
The mean is a numerical value, an average of scores. (Pierangelo & Giuliani,
2017, p. 317)
For example, your child's grades are 80, 82, 88, and 75. To calculate the mean, you
add them and divide the total by the number of scores:
(80 + 82 + 88 + 75) / 4 = 325 / 4 = 81.25.
26. Measures of Central Tendency
Measures of central tendency show where most of the score values lie in the
distribution. (Pierangelo & Giuliani, 2017, p. 317) For example, your child's teacher
calculates the mean (the average), the median (the middle number of the set), and
the mode (the grade that appears most often) and places these values in the
distribution. This placement gives a clear picture of your child's score values in
relation to his classmates'.
27. Median
Median is the number in the middle of a set of numbers. It divides the scores into
two parts: above and below the median. (Pierangelo & Giuliani, 2017, p. 317)
For example, your child's teacher finds the median score for the class and compares
your child's score to it. This is essential because it tells her whether your child's score
lies below or above the median.
28. Mode
Mode is the number or score that appears most often in a set of numbers.
(Pierangelo & Giuliani, 2017, p. 318) For example, your child's grades are
80, 85, 80, 82, 72, and 80; the mode in this case would be 80, as it appears most often.
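A minimal Python sketch, using the made-up grades from the mode example, computes all three measures of central tendency described in items 25 through 28:

```python
from statistics import mean, median, mode

grades = [80, 85, 80, 82, 72, 80]

print(mean(grades))    # 79.83... -> the average of the scores
print(median(grades))  # 80       -> the middle value of the sorted list
print(mode(grades))    # 80       -> the value that appears most often
```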
29. Nominal Scale of Measurement
The nominal scale of measurement does not have a numerical value; it has a
descriptive characteristic. It separates the data into categories and then tracks how
many observations fall into each one, creating, in this manner, a nominal scale.
(Pierangelo & Giuliani, 2017, p. 318) For example, your child's school is gathering data
from all children in the elementary school. Administrators divide the data collected into
categories and record how many fall into each category, such as First Grade = A,
Second Grade = B, Third Grade = C, Fourth Grade = D, and Fifth Grade = E.
30. Normal Distribution
Normal distribution, also known as the bell curve, illustrates where a test score
would be if the test were administered to every student of the same grade and age in
the general population. (Pierangelo & Giuliani, 2017, p. 318)
For example, see the bell curve figure in Pierangelo and Giuliani (2017, p. 37), which
shows scores clustering around the mean and tapering off evenly toward both the high
and low ends.
31. Norm-Referenced Test
A norm-referenced test, also known as an NRT, is a standardized test that compares,
links, and ranks test takers to one another. The NRT interprets how a student's
performance data compares with a particular group's. (Pierangelo & Giuliani, 2017,
p. 318) For example, your child's teacher looks at her class's standardized test
performance data and can identify that a small percentage of her students are
performing poorly, a small percentage are performing very well, and most of her
students are performing at an average level.
32. Ordinal Scale of Measurement
The ordinal scale of measurement uses the rank order system. In other words, it
orders the data indicating only relative amounts. (Pierangelo & Giuliani, 2017, p. 319)
For example, your child participates in a swimming competition: the swimmer who wins
is noted as being in 1st place, the person who finishes 2nd is in 2nd place, and so on.
The scale says nothing about how much time separated the swimmers, only their
relative order.
33. Outcome-Based Assessment
Outcome-based assessment evaluates the outcomes of skills taught that are
fundamental in the individual’s real life. (Pierangelo & Giuliani, 2017, p. 319)
For example, your child's teacher's goal is to teach him how to wash his hands with
soap and water before a meal, after using the bathroom, and after playing at the sand
or mud table. Washing hands is a skill needed in real life. The teacher then teaches and
evaluates the skill whenever the student washes his hands.
34. Percentile
Percentile, or percentile rank, is a numeric score representing the percentage of scores
at or below a given score. (Pierangelo & Giuliani, 2017, p. 319) For example, your
child's teacher tells you your child received a percentile rank of 75. In other words, the
teacher is saying that your child did as well as or better than 75 percent of the people
who took the assessment.
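A minimal Python sketch shows the "at or below" computation; the class scores are made up, and the helper function is hypothetical rather than part of any testing library.

```python
def percentile_rank(scores, score):
    """Percentage of scores in the group at or below the given score."""
    at_or_below = sum(1 for s in scores if s <= score)
    return 100 * at_or_below / len(scores)

class_scores = [55, 60, 65, 70, 72, 75, 78, 80, 85, 90]
print(percentile_rank(class_scores, 80))  # 80.0 -> as well as or better than 80%
```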
35. Performance-Based Assessment
Performance-based assessment, or Naturalistic-based assessment, is a
technique in which an individual applies knowledge to real-life activities or situations.
(Pierangelo & Giuliani, 2017, p. 319)
For example, your child learned to add and subtract numbers up to 20 and learned
the names and values of the United States coins. He goes to a convenience store and
buys a snack for 45 cents with a $1.00 bill. He can pay and count his change correctly.
36. Portfolio
The portfolio is a collection of intentional artifacts that might be used to assess an
individual. It reflects the individual’s effort, growth, and successes in one or different
areas. (Pierangelo & Giuliani, 2017, p. 320)
For example, your child's teacher is implementing a portfolio this year to grade part of
the skills in social studies. The teacher developed an intentional list of the artifacts in
the portfolio to show mastery of the skills assessed at the end of the nine weeks.
37. Predictive Validity
Predictive validity is a tool that allows us to predict, with some accuracy, an
individual's future behavior or performance. (Pierangelo & Giuliani, 2017, p. 320) For
example, your child has learned to walk in line in the cafeteria, grab his food tray with
his two hands, sit at the table to eat, speak softly, and raise his hand if he needs help.
From now on, the teacher can predict your child will be successful in eating lunch in
the cafeteria.
38. Protocol
The protocol is a booklet used when assessing an individual; the examiner records the
individual's answers and scores in it. (Pierangelo & Giuliani, 2017, p. 320) For example,
the diagnostician, a trained professional able to administer and analyze tests, gives
your child an achievement test. She uses two booklets: one in which she records your
son's answers and scores, and a response booklet in which your child records his
answers.
39. Purpose of Assessment
The purpose of assessment is the intentional objective for administering that
assessment, such as finding out whether a child needs special education services to
access the general education curriculum successfully. Other reasons may include
developing IEP goals and planning instruction. (Pierangelo & Giuliani, 2017, p. 5) For
example, the diagnostician, a trained professional able to administer and analyze tests,
explains that she administered a cognitive test to identify strengths, weaknesses, and
possible disabilities.
40. Quartiles
Quartiles explain data divided into four defined areas or quarter sections.
(Pierangelo & Giuliani, 2017, p.321)
For example, the first quartile covers percentile ranks 1-25, marking the lower quarter,
also known as the bottom 25%. The second quartile covers 26-50, the third quartile
covers 51-75, and the fourth quartile covers 76-100, also known as the upper quarter
or top 25%.
41. Range
The range is a numeric value representing the spread of a set of scores, calculated by
subtracting the lowest score from the highest. (Pierangelo & Giuliani, 2017, p. 321)
For example, your child's teacher presents data explaining your child's language arts
grades as follows: 68, 71, 46, 83, 90. The range is 90 - 46 = 44.
42. Raw Score
The raw score is a numeric value that tells the number of correctly answered items on
a test. Raw scores have not been weighted or manipulated, and they are usually the
first scores teachers see when interpreting data. (Pierangelo & Giuliani, 2017, p. 321)
For example, your child takes a test and gets 8 out of 10 answers correct: his raw
score is 8, which converts to 80% correct on the assessment.
43. Reliability
Reliability is when an assessment consistently provides the same score when
given more than once to the same subject. (Pierangelo & Giuliani, 2017, p.321)
For example, your child takes the same test twice, and in both cases he gets the
same score.
44. Reliability Coefficient
The reliability coefficient is a numerical value that tells how dependable an
assessment's outcome is over time. (Pierangelo & Giuliani, 2017, p.321)
For example, the second-grade teacher reviews her class's MAP test reliability
coefficient and finds that it is 0.85. This is a good value, since the coefficient ranges
from 0.00 to 1.00, where 1.00 is excellent and 0.00 is undesirable.
45. Scaled Score
The scaled score is the conversion of raw scores into a common scale. The
numerical new value allows for comparing scores between students. (Pierangelo &
Giuliani, 2017, p.322)
For example, a professor at Lamar University converts his students' raw scores onto a
common scale so that pass-fail results can be compared across different versions of
his exam.
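A minimal Python sketch illustrates one common way raw scores are converted to a common scale; the raw scores are made up, and the target scale (mean 100, SD 15) is an assumption.

```python
from statistics import mean, stdev

raw = [12, 15, 18, 20, 22, 25, 28]  # raw scores on one test form
m, sd = mean(raw), stdev(raw)

# re-express each raw score on a scale with mean 100 and SD 15
scaled = [round(100 + 15 * (x - m) / sd) for x in raw]
print(scaled)
```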
46. Skewed Distribution
A skewed distribution is one in which the majority of the scores do not fall as
expected, in the middle of the distribution, but rather to the high or low ends of it.
(Pierangelo & Giuliani, 2017, p.322)
For example, see the skewness figure in Wikipedia (Skewness, 2023), which shows
distributions whose peaks are shifted toward one end, leaving a long tail on the other.
47. Split-Half Reliability
Split-half reliability, also known as internal consistency, is a technique that divides the
items in a test into two equal parts in order to estimate the test's reliability. (Pierangelo
& Giuliani, 2017, p. 323) For example, a test can be divided into its even-numbered
and odd-numbered questions, or into its first half and second half, and the scores on
the two halves are compared.
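A minimal Python sketch correlates odd-item and even-item half-scores; the item scores are made up, and the Spearman-Brown step is a standard adjustment for the halved test length. It requires Python 3.10 or later.

```python
from statistics import correlation

# each row: one student's scores on six items (1 = correct, 0 = incorrect)
items = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 0],
]

odd_half  = [sum(row[0::2]) for row in items]  # items 1, 3, 5
even_half = [sum(row[1::2]) for row in items]  # items 2, 4, 6

r = correlation(odd_half, even_half)
print(round(2 * r / (1 + r), 2))  # Spearman-Brown estimate for the full test
```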
48. Standard Deviation
A standard deviation, also known as SD, is a numerical value that represents how far
the scores spread from the average. (Pierangelo & Giuliani, 2017, p. 323) For example,
in a normal distribution with a mean of 100 and a standard deviation of 15, the band
one standard deviation above the mean is 100-115, two SDs above is 115-130, and
three SDs above is 130-145. Likewise, one SD below is 85-100, two SDs below is
70-85, and three SDs below is 55-70.
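A minimal Python sketch, assuming the 100/15 scale above, confirms the familiar rule that roughly 68 percent of a normal distribution falls within one standard deviation of the mean:

```python
from statistics import NormalDist

dist = NormalDist(mu=100, sigma=15)
within_one_sd = dist.cdf(115) - dist.cdf(85)
print(f"{within_one_sd:.0%}")  # about 68% of scores fall between 85 and 115
```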
49. Standard Error of Measurement
Standard error of measurement, also known as SEM, recognizes the error that exists
between observed scores and true scores. (Pierangelo & Giuliani, 2017, p. 323) For
example, your child's score on the WISC-IV is 115 with a SEM of 3. Subtracting and
adding one SEM gives a band of 112 to 118, so there is a good probability that your
child's true score falls between 112 and 118.
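A minimal Python sketch shows the usual formula, SEM = SD x sqrt(1 - reliability); the SD and reliability values are assumptions chosen to reproduce the SEM of 3 in the example.

```python
import math

sd          = 15    # the test's standard deviation
reliability = 0.96  # the test's reliability coefficient (assumed)

sem = sd * math.sqrt(1 - reliability)  # = 3.0
observed = 115
print(f"true score likely between {observed - sem:.0f} and {observed + sem:.0f}")
# -> true score likely between 112 and 118
```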
50. Standard Score
The standard score, also known as the Z score, is a score that has been converted to
fit a normal curve, with a mean and standard deviation that remain unchanged.
(Pierangelo & Giuliani, 2017, p.323)
For example, on a scale with a mean of 100 and a standard deviation of 15, a standard
score of 115 falls exactly one standard deviation above the mean.
51. Standardized Test
A standardized test is an assessment administered in the same manner, under the
same circumstances, in the same amount of time, and with the same scoring and
interpretation; every test taker answers the same questions. (Pierangelo & Giuliani,
2017, p. 323) For example, the State of Texas Assessments of Academic Readiness
(STAAR) is a standardized test that evaluates children from third grade up in core
subjects like Reading, Math, Science, and Writing.
52. Standards-Referenced Test
Standards-referenced test measures if the children meet the standards and
acquire the knowledge that is needed for the core subjects and their grade levels.
(Pierangelo & Giuliani, 2017, p.323)
For example, the State of Texas Assessments of Academic Readiness (STAAR) is an
example of a standards-referenced test.
53. Stanine
Stanine, also known as “standard nines,” is a one-digit score that ranges from 1
to 9, with an average of 5 and a standard deviation of 2. (Pierangelo & Giuliani, 2017,
p.323)
For example, a stanine of 5 is average, stanines of 1 through 3 are below average, and
stanines of 7 through 9 are above average.
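A minimal Python sketch converts a percentile rank to a stanine using the standard 4-7-12-17-20-17-12-7-4 percent bands; the mapping function is illustrative, not from a testing package.

```python
import bisect

# inclusive upper percentile bound of stanines 1 through 8 (stanine 9 is the rest)
upper_bounds = [4, 11, 23, 40, 60, 77, 89, 96]

def stanine(percentile_rank):
    return bisect.bisect_left(upper_bounds, percentile_rank) + 1

print(stanine(50))  # 5 -> average
print(stanine(95))  # 8 -> above average
```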
54. T score
T scores estimate values for standard deviations when we do not know the exact SD,
and they offer an alternative way to express an individual's performance on a test. The
T scale has a mean of 50 and an SD of 10. (Pierangelo & Giuliani, 2017, p. 324) For
example, when we do not have a way of calculating a standard deviation, we can use
T scores to estimate values for a standard deviation, reporting a score of 60 as one
SD above the mean.
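A minimal Python sketch converts a z score to a T score with the standard formula T = 50 + 10z; the helper function is illustrative, not a library call.

```python
def t_score(z):
    """Convert a z score to a T score (mean 50, SD 10)."""
    return 50 + 10 * z

print(t_score(0.0))  # 50.0 -> exactly average
print(t_score(1.5))  # 65.0 -> one and a half SDs above the mean
```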
55. Task Analysis
Task analysis is a step-by-step procedure in which a task is broken into its smaller
elements and arranged into the sequential steps necessary to accomplish it.
(Pierangelo & Giuliani, 2017, p. 324) For example, the teacher teaches a child how to
wash his hands by following a graphic organizer that lists each step.
56. Test-Retest Reliability
Test-retest reliability shows the consistency of an assessment; its outcome is
unchanged over time. (Pierangelo & Giuliani, 2017, p.324)
For example, individuals may take the same assessment more than once, and a
reliable test will yield the same outcome each time.
57. Validity
Validity explains that a tool measures what it was created to measure.
(Pierangelo & Giuliani, 2017, p.324)
For example, your child's teacher made a test to assess two-digit addition and
subtraction problems, and the test contains ten appropriate two-digit addition and
subtraction problems. We can say that the assessment measures what it was meant
to measure; therefore, it is valid.
58. Validity Coefficient
The validity coefficient is a numerical value that shows a correlation between the
assessment and the criterion measure. (Pierangelo & Giuliani, 2017, pp.324-325)
For example, first-year college students' ACT scores may predict their first-year
college grade point average. The correlation between the ACT scores and the
first-year college grade point average is the validity coefficient of the ACT as a
measuring tool.
59. Variance
Variance is a numerical value that tells how spread out the data are within a
distribution. (Pierangelo & Giuliani, 2017, p. 325) For example, for the grades 80, 82,
88, and 75 (mean 81.25), the variance is the average of the squared distances from
the mean: ((80 - 81.25)² + (82 - 81.25)² + (88 - 81.25)² + (75 - 81.25)²) / 4 = 21.69
(rounded).
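A minimal Python sketch, using the grades from the mean example and population formulas, computes the variance and its square root, the standard deviation:

```python
from statistics import pvariance, pstdev

grades = [80, 82, 88, 75]
print(pvariance(grades))         # 21.6875 -> average squared distance from the mean
print(round(pstdev(grades), 2))  # 4.66    -> the standard deviation
```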
60. Z score
Z-score, also known as a standard score, has a mean of zero and a standard
deviation of one. (Pierangelo & Giuliani, 2017, p.323)
For example, a student who scores exactly at the mean has a z score of 0, and a
student who scores one standard deviation above the mean has a z score of +1.
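A minimal Python sketch converts a raw score to a z score with z = (score - mean) / SD; the helper function is illustrative.

```python
def z_score(score, mean, sd):
    """How many standard deviations a score lies from the mean."""
    return (score - mean) / sd

print(z_score(115, 100, 15))  # 1.0 -> one SD above the mean
print(z_score(100, 100, 15))  # 0.0 -> exactly at the mean
```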
References
Pierangelo, R. A., & Giuliani, G. A. (2017). Assessment in special education: A practical
approach (What's New in Special Education) (5th ed.). Boston, MA: Pearson Education.
Skewness. (2023, November 8). In Wikipedia. https://en.wikipedia.org/wiki/Skewness