Lecture Transcripts
Lecture 1 Transcript
Welcome to this first lecture
in the course on clinical chemistry.
In this lecture, I'm going to cover
the topics in chapter one
where we're looking at a general overview
of what clinical chemistry is,
including molecular diagnostics and
placing it in the field of
Laboratory Medicine in general.
So first of all, what is clinical chemistry? Clinical chemistry is really the area of chemistry that's concerned or focused on the analysis of bodily fluids, with the goal of making an assessment for diagnostic or therapeutic purposes.
A medical or clinical laboratory, then, is the place where these tests take place. Generally it's a clinical pathology lab, where other tests are being carried out as well, beyond the clinical chemistry tests we'll be looking at.
The specimen is obtained from
patients in order to aid in the diagnosis,
treatment, and prevention of disease.
In the textbook, they give
the following definition
for laboratory medicine.
It's a
component of
laboratory science
that's involved in the selection,
provision and interpretation of
diagnostic testing of individuals.
It's not so different from the one on the previous slide, but I really like how the first definition I provided made it very patient-centric, rather than just focusing on the specimen, because ultimately this really is about the patient.
And the clinical chemistry in the textbook is
defined as the largest sub-discipline
of Laboratory Medicine.
Which is true.
One of the other major disciplines
would be clinical microbiology,
where they're looking at bacteria and their interactions with antibiotics. And it is a multidisciplinary field for sure,
including hematology,
immunology,
clinical biochemistry, and many others.
Laboratory testing is defined in the text, and I'll just point out that this is at the start of the chapter; each chapter shows some key words and definitions, if you are looking for those when you're going through your textbook review. Anyway, here it shows that laboratory testing is a process conducted in a clinical laboratory to rule in or rule out a diagnosis, or maybe to select and monitor disease treatment, to provide a prognosis,
to screen for disease,
or to
determine the severity of
and monitor a physiological disturbance.
So typical tests might be
blood tests shown here where your doctor may
order the screen just to see that
there's nothing abnormal going on even though
you aren't complaining of any symptoms. Or they might be ordering this because you are complaining of something and they want to see if it could be a problem with your kidney, or maybe you're low in iron, or have a high level of urea in your blood, or any number of things.
So here I just show some common routine blood tests that are carried out. We have the complete blood count, where we look at red blood cells, white blood cells, hemoglobin, hematocrit, and platelets; the basic metabolic panel, often just called the BMP, with electrolytes, calcium, glucose, sodium, et cetera; the comprehensive metabolic panel, where in addition to these electrolytes, ions, and gases, you add in some other proteins and enzymes; the lipid panel that gets your cholesterol; and, if they're concerned about your liver or kidneys, a liver panel or kidney panel, similarly.
So here I show a typical example of
a test that you
might receive from your screen.
So this is a basic metabolic panel.
And here we show, for example,
the glucose result is
78 and the units
are milligrams per deciliter.
And then they show the reference interval of what's considered an acceptable range of values. So here we see glucose at 78 fits nicely in the range of 65 to 99, no problem.
However, if we go down here to sodium, we see this is in bold and there's a marking in the flag column: it has been flagged as low. So 132 millimoles per liter is in fact a little lower than the accepted or standard reference interval for most patients. And again for chloride, they show the same thing.
So it's quite typical that if a value comes out in a result that falls outside of the reference interval, it will be flagged, which draws the clinician's attention to something that may need to be addressed, whether that means measuring it again, monitoring it every few months, or it may be indicative of something more serious.
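As a small illustration of that flagging logic, here is a minimal Python sketch; the analytes and reference intervals shown are illustrative examples, not any particular lab's official values.

```python
# A minimal sketch of flagging a panel result against its reference interval.
# The analytes, units, and intervals are illustrative, not a lab's official values.
reference_intervals = {
    "glucose (mg/dL)": (65, 99),
    "sodium (mmol/L)": (135, 145),
}

results = {"glucose (mg/dL)": 78, "sodium (mmol/L)": 132}

for analyte, value in results.items():
    low, high = reference_intervals[analyte]
    if value < low:
        flag = "LOW"
    elif value > high:
        flag = "HIGH"
    else:
        flag = ""
    print(f"{analyte}: {value} [{low}-{high}] {flag}")
```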
And so that brings us to: why do the tests? It's not just to figure out what the value is; as I just stated, it can be to confirm a clinical suspicion if, after the physical exam, the doctor suspects that maybe you have some kidney problems going on. They might be able to make a diagnosis based on those test results, or it could be to exclude that, just to make sure that it isn't the problem, and so they'll run the tests.
It could be for assisting in treatment selection. For example, they might first test for the BRCA gene, and if you do have it, that might then determine which type of treatment is best suited to your type of breast cancer. It might also tell you your prognosis, which might be different depending on whether you test positive for this gene or you don't.
And we also test in order to
screen for disease in the absence
of any clinical signs or symptoms.
That's typically what's done at our annual physical, and for establishing and monitoring the severity of a physiological disturbance.
So, a little history. It's interesting to me that clinical chemistry really dates back to ancient Egypt. In ancient Egypt, doctors were already noting that the urine of a certain group of patients who were feeling ill was sweet. It was really the early diagnosis of diabetes, before we knew what we know today; they would actually taste it. And in some cases, I think there were cases where they noted that ants were drawn to it, so it would be like an ant test. But of course today we have other ways of measuring the glucose, which is what would have made the urine taste sweet.
Anyhow, the first clinical labs in the US were not established until 1895, of course. There were some labs earlier in Europe, but it wasn't really a system that was being used across the board yet. In fact, early on there were many doctors who felt like chemistry didn't belong in the field of medicine, that they should just be able to assess the patient through their physical exams and questioning. Eventually it gained more acceptance, and during World War One, women were recruited to work in the labs because the men were off at war, and it actually became one of the fields that was really female dominated. In those early days it was a way women could get a job in medicine and science if they were interested in these subjects, when they otherwise couldn't.
Now today, of course, it's much more equal in terms of gender representation. But it's interesting to me that it was a very female-dominated field in the early days.
As time progressed, new diagnostic methodologies were introduced, and I will discuss that also on the next slide. For example, it was not until the seventies that immunochemical diagnostics were introduced, using antigens and antibodies, and even more recently, in 1987, molecular diagnostics, for example the PCR test that's being used for coronavirus testing.
So what does define
the
boundaries of clinical chemistry then?
Really, where is the limit?
Well, basically, it requires advances in technology and in our understanding of what a change in a certain analyte will mean, and how sensitively we can measure it. Say, in the forties and fifties, there were great advances in spectrophotometry, which we're going to cover in the next set of lectures.
Next week, electrochemistry
and Chromatography,
we're also going to cover
in the future lectures.
And then in the 1970s, as I said, immunochemical techniques were developed and then approved by the FDA. In the eighties, mass spectrometry came on the scene, and that made for very sensitive testing of urine, like drug testing.
Automation of the systems
enabled high throughput
screenings and measurements.
So you could look at,
instead of a few 100 patients
per day, thousands of patients.
And miniaturization really enabled the point-of-care testing that is starting to be developed now. And molecular diagnostics, where we use nucleic acid (DNA or RNA) amplification techniques, emerged to study infectious diseases and other disorders, or even the coronavirus we were just talking about.
Okay, so
how's clinical chemistry practiced?
What are the functions
of the laboratory professional?
Is it just to run the test?
Now, there are a lot of other aspects to what would be done by the clinical chemist, including the development and validation of new lab tests in order to meet clinical needs, and this has never been more prevalent than it is today. With the arrival of the coronavirus, there have been a lot of lab tests being developed, called lab-developed tests, which are developed in the clinical lab and then get FDA approval for emergency use.
So the clinical chemist would also be looking at evaluating and characterizing the analytical and clinical performance of tests. They would be presenting results to clinicians. They might be providing advice on test selection and interpretation, consulting. It could be determining the cost-effectiveness and intrinsic value of tests. They might participate in developing testing algorithms and guidelines. They definitely have to ensure compliance with regulatory requirements, participate in quality assurance and improvement of lab services, and teach and train future generations of lab specialists, and some may also participate in basic or clinical research.
Now, another topic that's brought up in this chapter is the importance of understanding the ethical issues that could arise, and just being aware of those. Of course, we all know confidentiality of patient medical information is the foremost, most important ethical issue that you're going to encounter, and with the advent of genetic testing, that is also a really important privacy concern. So confidentiality is to apply to all of the test results. Resources should be allocated and used effectively, and there are codes of conduct, publishing issues, and other complex conflicts of interest that might get in the way of providing appropriate testing.
So there are some issues
covered in the chapter,
like some examples of what you
should be aware of
and keeping an eye out for.
But basically, you know,
you want to make sure that
you're treating everything with
confidentiality and that you're ensuring
that the results are accurate and ultimately going to be helpful to the patient's outcome.
Ok, So the future,
what's going to happen is,
well, first of all,
clinical chemistry is known to play
a central role in providing
top quality care to patients.
Now, you'll often hear it said that up to 70% of diagnoses are dependent on test results that come out of clinical labs.
It continues to grow as a field, of course,
with improved assays and
advances in technology.
As I mentioned a couple times,
COVID-19 is a perfect example of this and of how we see new tests being developed all the time.
And if you want to work in clinical chemistry, you really will find that you are a person who enjoys problem-solving, that you care about patient care, that you pay attention to detail, that you enjoy analytical chemistry, and consulting too, perhaps, if you get to that level, as well as being able to comply with ethical and regulatory mandates.
So here's just a little video to show you what it looks like inside a lab.
[A short video plays, giving a tour inside a clinical laboratory: the instruments, the handling of specimens before and during testing, and the different types of tests being run.]
Okay, so I just wanted to end with this slide on the importance of clinical chemistry and laboratory medicine. As I said many times throughout this lecture, there's never been a time where clinical laboratory medicine has been more prevalent in our conversations about what's going on in the world. I mean, it is one of the most important aspects of our fight against the pandemic that we're facing right now. So I just clipped out a couple of screenshots from different websites showing this. Here you see the headline "Laboratories on the front lines battling COVID."
And it's true; a lot of what's been going on in the last six months is the development of new laboratory testing, faster and more accurate testing, and not only testing for the virus, but also testing for the antibodies. For example, in this blog post from the American Clinical Laboratory Association,
they talk about labs that are looking at blood transfusions from plasma donors who have recovered from COVID, and they have to be able to analyze the antibodies that are in there.
And of course, they're also working with the active virus; we need to know how long we have an active virus and how long we have to stay in isolation. These are all really important tests that are being done and developed as we speak. So you can see it's really an important field, and it's also a continuously evolving field.
And I hope that you
enjoy learning more about
it as we go through this course.
Lecture 2 Part 1 Transcript
Welcome back to the second lecture
in this first week,
where we're going to be
covering the topics in chapter two,
analytical and clinical evaluation methods.
Now this chapter really is
about the language of statistics
that are relevant and
necessary for clinical chemistry.
And although I know statistics
is not often people's favorite subject,
it really is critical in terms of being able
to talk
about clinical chemistry in a meaningful way.
Because really all of it has to do with the reported values that we're going to be observing, how they fit into the population statistics, and their meaning in relation to reference intervals that have been determined for a sample population.
So we're going to need to understand
the statistics for
just that type of discussion,
as well as the fact that in the clinical lab you are going to be comparing assays constantly, whether it's from one lab to another, or for a new instrument that's come into the lab, or even for internal quality control purposes.
We need to understand
a little bit about that too,
just to have a better understanding
of clinical chemistry in general.
So in the text they go through this flowchart of how we select the assay best suited for our goals. You may be looking at a new test or diagnostic that is being developed in a lab, or perhaps just selecting one that's available commercially. But you'll be doing this in response to having established the need for one; a perfect example lately is the need to test for the SARS-CoV-2 virus, and a number of labs have been developing their own testing for that.
So you need to define the quality goal, how sensitive the test needs to be, and what's the best method to address the question: is it going to be PCR in this case, or some other technique? We're going to need a good method for verifying the results that we're getting and validating the results for a new test. And there are regulatory bodies who help with this, providing reference standards and expected levels of reproducibility and repeatability that need to be shown in order for a test to be considered valid. How we implement it and then move into routine analysis is also regulated, as well as the need for performing quality control practices throughout the duration of the instrument's life in the lab, and proper reporting of results.
So as I mentioned, there are some regulatory bodies who are involved with helping in this situation, in order to ensure that if you have your blood drawn here in San Diego, you're going to get the same result as if you have your blood drawn in Houston, Texas. And so we have the Clinical and Laboratory Standards Institute, CLSI, as well as the International Organization for Standardization, ISO. Although CLSI is committed strictly to clinical lab standards, ISO has standards across every topic and discipline, including one I used recently for looking up the proper protocol for how to measure the nickel leaching out of nitinol cardiovascular stents, as an example.
So we want to make sure that any group who's trying to answer the question "how safe is my device" is using the same protocols, so that the comparison is meaningful and safe for the patient, or really, ultimately, the consumer. And here domestically we have the FDA, CMS, and the CDC, with CLIA being what you'll hear about most often, the Clinical Laboratory Improvement Amendments that were introduced in 1988. That's used to really ensure that all labs use the same protocols.
Okay. So just in terms of validating an assay, we want to confirm, by examination and provision of objective evidence through a defined process, that the particular requirements for a specific intended use can be fulfilled.
Now, just to give you a view of the CLSI,
countless times each day,
laboratory tests are used to make
critical decisions about patient care.
But laboratory testing is not
useful in treating patients unless
quality
and dots are used to ensure
accuracy, reliability, and timeliness.
For 50 years, CLSI has been setting
the global standard for quality
and medical laboratory testing,
developing an upholding best practices that
drive quality test results
and improve health care.
CLSI has
more than 200 published standards,
guidelines,
and companion products being used by
thousands of laboratories in
every corner of the world.
CLSI's work is used by health care providers,
drug and medical device manufacturers,
and government regulatory agencies
to achieve the highest quality results.
The benefits are substantial: in terms of improvement of quality and accreditation preparedness for hospitals and healthcare systems; for industry, in terms of their submissions to regulatory authorities; and for government agencies, in terms of alleviating the need to write new legislation and making sure that high-quality laboratory services are provided to healthcare institutions and others.
And through its global
health partnerships program,
CLSI is providing quality
and standards training to
parts of the world where
these resources are not readily available.
By partnering with laboratories in
these countries and
providing laboratory mentors.
They can improve the diagnosis of
diseases such as HIV, aids,
malaria, and tuberculosis,
helping to deliver lifesaving treatment.
CLSI: improving the quality, safety,
and efficiency of
medical laboratory testing around the world,
leading to better patient care
and longer and healthier lives.
For more information, please visit
our website at CLSI.org.
Whoops. So I encourage
you to visit these websites and check out what kind of information you can find on them yourselves.
But now into the statistics,
some of the terms we're
going to be covering in this part of
the lecture include frequency distributions,
populations, parameters,
statistics, random sampling
and probability distributions.
And on the right here I show an example that's given in the textbook of a histogram for the gamma-glutamyl transferase (GGT) enzyme, which is measured for looking at the health or disease state of your liver. So it just gives you an idea of how we're going to be using statistics in clinical chemistry: collecting data from a sample of patients to see what we're going to define as our normal range of values for the population of interest.
So getting a little bit more into that here,
we define this as a frequency distribution,
which is a graphical device for
displaying this large set
of lab test results.
And of course, it is a histogram.
Intervals are defined.
That makes sense for
the measurements that you have
and
these intervals will get smaller,
of course, the more number
of samples that you have.
So here we have the concentration and press
the X axis and the frequency on the y.
So if you look at the height of
each bar tells you
how many observations were
made within this interval.
And this graph represents
a 100 healthy 20 to 29 year old men.
The range of values goes from five to 65.
And if what we
really use these for us when we
measure the amount of Juju TNF patient,
we want to be able to compare that
to the population in general.
Where does that person fall in the range?
Are they within the normal range?
Are they on the outskirts as that indicative
of need for medical intervention?
Okay. So often we're
going to talk about the percentile,
what percentile the patient might be at,
just as we do with our grades at school.
So for example, if you're in the 90th percentile on your exam, that means you have a score that's higher than 90% of everyone else who's taking the test. In the textbook, they give you this formula for the percentile, a non-parametric approach. But this doesn't give you the percentile as we're used to thinking about it; it gives you more the percentile rank along the curve.
For looking at the percent, I show you this formula here, which you can find in resources on the internet, but I like this one the best because it shows that if you have ranked your data, as you would when making this frequency distribution, you often will have multiple values with the same ranking, and that's accounted for here. So you take 100 times the number of observations below the rank of interest, plus half the number of observations at ranks equal to your position, divided by the total number of observations. Suppose, for example, that this count comes out to 54.5; dividing by the total number of observations and multiplying by 100 gives you one measure of percentile rank for this type of distribution.
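To make that formula concrete, here is a minimal Python sketch of the percentile-rank calculation; the GGT values are made up for illustration.

```python
# A minimal sketch of the percentile-rank formula described above:
# 100 * (observations below the value + half the observations equal to it) / total.
def percentile_rank(values, x):
    below = sum(1 for v in values if v < x)
    equal = sum(1 for v in values if v == x)
    return 100.0 * (below + 0.5 * equal) / len(values)

ggt = [12, 15, 15, 18, 22, 25, 25, 25, 30, 41]  # hypothetical GGT results (U/L)
print(percentile_rank(ggt, 25))                 # 65.0 -> a result of 25 sits at the 65th percentile
```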
Okay. A population is defined as the complete set of observations for a particular procedure under specific conditions. But usually we can't actually measure the entire population, so what we're going to do is take a sample of that population. The sample is a subgroup selected from the population to be representative of the target population. And what we mean by representative is that sometimes you're going to find it actually doesn't make sense to sample men and women in the same grouping, because their normal ranges will differ. So you might have a separate sample for women of a certain age even, or other ways of defining who we use for our target sample. Anyway, if your sample is large enough, then you start to approach a population. You can also see this in the graph here on the right, where we no longer have the larger intervals; the intervals are so small now that we just have a continuous curve. And you can extract probabilities and statistics from this frequency distribution curve.
So for example, if you want to know the probability that you're going to observe a patient with a GGT value higher than a certain value, say 58, you measure the area under the curve above 58, and that gives you the probability. In this case the chances are quite small that you're going to measure this: it's 0.05, or 5%. Or if you want to figure out the probability of measuring a value within the 90% interval, then we can determine the cutoffs; in this case it would be about 9 to 58, which gives you a probability of 90% that any value you take is going to fall in this range here. Okay.
So parameters then are defined
as descriptive measures of the population.
So they do depend on
the fact that you have
a normal type of distribution as shown here.
And the central location of
this set of data
is going to be called the mean.
So mu is the mean: you take the sum of all the values observed and divide it by the total number of values. In a normal distribution, the median, which is defined as the 50th percentile, will be equivalent to mu. And what we're interested in understanding for our statistics is the variance of this value: how likely are we to get a value that differs widely from our population mean? That's where variance comes in. The variance, sigma squared, is equal to the sum of the squared differences of each value from the average, divided by the total number of observations. And sigma itself, the square root of the variance, represents the standard deviation, the spread, say, in one direction.
Now statistics, on the other hand, are slightly different but similar, because they don't measure the whole population; these are descriptive measures that are specific to the sample that we have here. So we have our average, or sample mean, which is basically the same formula as the mean of the population shown on the previous slide, but now we've got x-bar, and it is still the sum of all the observations we've made in the sample divided by the total number of observations. The median is still going to be the 50th percentile, but it might not be a completely normal distribution, so this may now start to differ from the sample mean. And of course the standard deviation is essentially equivalent to the sigma on the previous page, where we have the square root of the sum of squares of the differences divided by the number of observations. And they also include here the statistic called the coefficient of variation, where we are going to look at not only the standard deviation, but the standard deviation in comparison to the mean, so we get an idea of the actual spread of the values. So for example, if you had a series of observations of 1, 2, and 3, there is only a spread of three values, but the number 3 is three times the number 1. Whereas if you had 101, 102, and 103, you still have the same spread, but in comparison to the average it's not as big of
a difference.
So it's important sometimes to look at this coefficient of variation as opposed to just the standard deviation.
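As a minimal illustration of that point, here is a short Python sketch computing the coefficient of variation for the two hypothetical series just mentioned.

```python
import statistics

# A minimal sketch of the coefficient of variation (CV): the standard deviation
# expressed relative to the mean. Both series have the same absolute spread,
# but very different relative spreads.
def cv_percent(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

low = [1, 2, 3]          # mean 2,   SD 1
high = [101, 102, 103]   # mean 102, SD 1
print(cv_percent(low))   # 50.0  -> large relative spread
print(cv_percent(high))  # ~0.98 -> small relative spread
```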
Okay, so just a note that they define a statistic as any value calculated from the observations in the sample in order to estimate a particular characteristic of a target population. So what is the difference between parameters and statistics? Hopefully it's become clear that parameters apply to the whole population, whereas statistics apply to the sample of the population, where we have the parameter being mu and the statistic being x-bar, the calculated average. And the sample is made representative of the total population through random selection.
Okay, so Gaussian probability distribution
then refers to this normal probability
that we are talking about already.
But in this case,
we want to start thinking about
the error that's associated
with measurements taken
in a normal distribution,
where the error is defined as
your observation minus the average.
And that's going to also
have a normal distribution.
And it's going to obey
the central limit theorem
or central limit effect,
where if you take enough observations,
the error will tend to 0.
So the distribution is completely characterized by the mean and the variance, and is often written as a normal distribution N(mu, sigma squared). And then we talk about the standard Gaussian variable, which is basically this error divided by the standard deviation, where your distribution is then described as N(0, 1) as we approach the total population. And the probability of finding any value within two standard deviations of the average is going to be 0.9544, which is basically how we define our 95% confidence interval.
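Here is a minimal Python sketch of the standard Gaussian variable and the two-standard-deviation probability, using a made-up mean and standard deviation and assuming scipy is available.

```python
from scipy.stats import norm

# A minimal sketch of the standard Gaussian variable z = (x - mu) / sigma and the
# probability of falling within two standard deviations of the mean.
mu, sigma = 25.0, 10.0   # hypothetical population mean and standard deviation
x = 45.0                 # a single observation
z = (x - mu) / sigma     # 2.0, i.e. two standard deviations above the mean

prob_within_2sd = norm.cdf(2) - norm.cdf(-2)
print(z, prob_within_2sd)  # 2.0 and ~0.9545, the basis of the 95% confidence interval
```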
Okay, so the Student's t probability distribution is going to be the equivalent of that z-statistic, except now using the sample population again. So we have t equal to x-bar minus mu divided by the standard deviation; that's how it's written in the textbook. But in most places you'll see it written this way: x-bar minus mu divided by the standard deviation over the square root of n, which is actually the standard error, as opposed to the standard deviation. If the sample size is small, you'll get a greater dispersion, with heavier tails in the distribution. And if the sample size is greater than or equal to 30, the Student's t will approach a Gaussian variable, again because of the central limit theorem, so that it converges toward the Gaussian distribution. Student t-tests are used for significance testing and for confidence intervals, where we often will use that 95%, or, as we said, two sigma on the previous page; for samples this is sometimes referred to as alpha. And once you know your t-value and your degrees of freedom, which is defined as n minus one, you can look up a p-value for your sample. The p-value is what's often used to test a hypothesis, whether you're going to accept or reject it, and in our case this is often going to be looking at whether we accept or reject the fact that our sample mean is equal to the population mean.
Or it could be the mean of another assay. So for example, I just took this from a paper, shown here if you want to look it up later, where they wanted to perform an internal quality control on an existing assay, in this case for luteotropic hormone. The null hypothesis is defined as: yes, the mean we're measuring will be the same as the mean we've had in the past, with the significance set to p less than 0.05. So if it falls outside of the 95% confidence interval, basically we're saying that we're going to reject the null hypothesis. They show the results here, where for high concentrations of the hormone they found the average was well within the acceptable range for accepting the null hypothesis, because p is greater than 0.05, so there's no need to reject; the assays appear to be equivalent, basically. And here, when they look at the mean value measured with the new assay compared to the predetermined one, it still seems reasonably close. However, when you go through the statistical analysis, with your t-value of 2.3 and your degrees of freedom, you come out with a p-value of 0.027, which is in fact less than the threshold that was set. So there is some significant difference there, and some possibility of rejecting the null hypothesis, so they're going to have to look into that further and see if this assay is still valid at these concentrations.
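As an illustration of this kind of internal quality control check, here is a minimal Python sketch of a one-sample t-test; the measurements and target mean are hypothetical, not the values from the paper.

```python
from scipy import stats

# A minimal sketch of a one-sample t-test for internal quality control:
# is the mean of the new measurements equal to the established target mean?
measurements = [5.1, 5.4, 4.9, 5.6, 5.3, 5.5, 5.2, 5.7, 5.4, 5.6]  # hypothetical new assay results
target_mean = 5.0                                                   # hypothetical established mean

t_stat, p_value = stats.ttest_1samp(measurements, target_mean)
print(t_stat, p_value)
# If p_value < 0.05, we reject the null hypothesis that the assay mean equals the
# target mean, and the difference is treated as statistically significant.
```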
Now, most biological samples are not parametric, so we're going to be looking at non-parametric samples. That basically means a non-normal distribution: it might be skewed, or distribution-free. I've already shown you with the GGT samples that that distribution is skewed. And here we have a couple of graphs showing bilirubin for a population according to gender, so men and women, and again we have a non-normal distribution skewed to higher concentrations. So for the descriptors for non-parametric populations, we're going to talk more about the median instead of the average or the mean, but still, the 50th percentile will give us that. And for the lower and upper percentiles, we're often going to be talking again about the 5th and 95th, as again a measure of spread. The Mann-Whitney test can be used as the equivalent of the t-test for these non-parametric samples.
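Here is a minimal Python sketch of those non-parametric descriptors, the median and the 5th and 95th percentiles, computed on a small, made-up, skewed set of bilirubin results.

```python
import numpy as np

# A minimal sketch of non-parametric descriptors: the median and the 5th/95th
# percentiles of a small, hypothetical, right-skewed set of bilirubin results (mg/dL).
bilirubin = np.array([0.3, 0.4, 0.4, 0.5, 0.5, 0.6, 0.6, 0.7,
                      0.8, 0.9, 1.0, 1.1, 1.3, 1.6, 2.0])

print(np.median(bilirubin))               # central location (50th percentile)
print(np.percentile(bilirubin, [5, 95]))  # lower and upper percentiles as a measure of spread
```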
Okay, so we're going to just pause here. Actually, let me finish this slide and then I'm going to take a pause and move into the second part of the chapter. Okay, so the validity of analytical assays is an important part of clinical chemistry, and these are some of the concepts introduced in the chapter that are important for you to be aware of. We'll see them popping up here and there throughout the course as we're looking at results from different studies.
Okay, some of the concepts. This is similar to Table 2.1, I believe, in the textbook, but I've just added in another column and a couple of other terms that you'll find in this chapter as well. So we talk about the trueness of a measurement, which is the agreement of the mean with the true value; this gives us a measure of bias. Accuracy is the agreement of a single measurement with the true value, and this gives us a measure of uncertainty. Precision is a function of repeatability and reproducibility, where repeatability is under the same conditions and reproducibility is under changed conditions, such as a different time or a different operator. Here we're going to be measuring this through the standard deviation or coefficient of variation, and it's defined by the regulatory bodies that you're going to need at least 20 observations or more to make that a meaningful measurement. And we do have to be aware of how calibration or instrument stability can affect these results.
Linearity: we often want to verify that we're working within the linear range of the relationship between our observable and the concentration, whether that's looking at absorbance as a function of concentration or an electrochemical result. We want to know that we're looking at a linear range of response, so that as the concentration increases, the observation will increase in a linear fashion. That's usually measured through a dilution series. And then we have the limit of detection, or LOD, and the limit of quantification, or LOQ. Instruments often quote the limit of detection, which is just how low a concentration they're able to detect. But for laboratory purposes, where we're looking at a patient value, we really want to know the lower limit of quantification, because that's going to be the lowest number that is still on the linear range of the calibration curve, and that's important if we're trying to quantify the numbers.
And we have sensitivity and specificity, which are important concepts that you have also been hearing about with the coronavirus testing: how sensitive is it, how specific is it. The sensitivity is a measure of the true positives in comparison to the total number of actual positives, so true positives plus false negatives. And the specificity looks at the ability of a test to really discern the target analyte from another interfering substance: if we get a negative result, we want to know whether it's a true negative or a false positive. So we're going to look at true negatives compared to all negatives, and that gives us a measure of specificity.
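To make those two definitions concrete, here is a minimal Python sketch using hypothetical counts of true and false positives and negatives.

```python
# A minimal sketch of the two definitions above, with hypothetical counts:
# tp = true positives, fn = false negatives, tn = true negatives, fp = false positives.
def sensitivity(tp, fn):
    return tp / (tp + fn)  # fraction of patients with the condition that the test detects

def specificity(tn, fp):
    return tn / (tn + fp)  # fraction of patients without the condition that the test clears

print(sensitivity(tp=90, fn=10))  # 0.90
print(specificity(tn=80, fp=20))  # 0.80
```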
Okay, I'll pause here and then I'm going to resume with the next set of slides in a new recording, just for file size purposes. Thank you all, and I'll be right back.
Lecture 2 Part 2 Transcript
Okay, well, welcome back
for the second part of
chapter two where we're
going to talk about assay comparisons and diagnostic accuracy. So assays are done in different labs, or on new instruments, or there will be times when you're checking quality control on a single instrument. When we're comparing the data of assays, you will want to use statistics as well.
So in the textbook they describe two ways that statistics are used for comparing assays, and these are the difference plot, the Bland-Altman difference plot, and the regression analysis, typically the Deming analysis. So why not use a simple two-sample t-test or ordinary linear regression? The reasons are that even if your t-test turns out to give the same average and standard deviation, it doesn't necessarily tell you whether you have agreement across the range of concentrations that you're interested in; it just tells you that overall they agree. Bland-Altman addresses this by plotting the differences against the average of the two methods. And linear regression doesn't usually take into account the error in the x axis, so the Deming regression is a modification that will take this into account, because of course you're going to have error in both measurements.
So speaking of error, the textbook then goes on to describe the basic error model when calculating differences. You're going to have your measured value and your target or true value, where in the previous section we were looking at x minus mu equals the error. This is just a rearrangement of that equation to say x equals the true value, which would be like mu, plus error, random error. However, in instrumentation you're also going to have a bias after calibration, and you may also have some additional random bias, plus this random error that will always be present. The total error, then, is often written simply as the total bias plus two standard deviations. Now, the bias in this case, when we're talking about total error, usually refers to a calibration bias, which itself is a systematic error that can be constant over all concentrations and conditions, or may be a function of analyte concentration. In the regression, you can also look at the intercept and the deviation of the slope from unity when comparing routine measurements to see how well they compare. When you take the difference, you're also going to see the difference in the errors of the two methods, where the true value is eliminated. And the CLSI guidance on this is that you should have around 40 samples, run in duplicate by each method, to make a fair assessment.
Okay, so here's a view of the Bland-Altman plot as you'll find in your textbook as well. You can see that on one axis we have the difference between the measurements of the two systems, plotted against the average of the two systems. You can see the scatter here is a function of concentration: it's more or less homogeneous at lower concentrations, but at higher concentrations we see a difference. So this is an indicator of how this approach will give you more information than just a simple t-test. You can also do the graph in a different way, where you plot the relative difference against the averages, in order to eliminate that issue.
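Here is a minimal Python sketch of how such a difference plot can be built; the two methods and their errors are simulated, so the numbers are purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

# A minimal sketch of a Bland-Altman difference plot for two simulated methods
# measuring the same 40 hypothetical specimens.
rng = np.random.default_rng(0)
true = rng.uniform(50, 150, 40)
method1 = true + rng.normal(0, 3, 40)      # random error only
method2 = true + 2 + rng.normal(0, 3, 40)  # small constant bias plus random error

means = (method1 + method2) / 2
diffs = method1 - method2
bias = diffs.mean()
loa = 1.96 * diffs.std(ddof=1)             # limits of agreement around the mean difference

plt.scatter(means, diffs)
plt.axhline(bias, color="k")
plt.axhline(bias + loa, color="k", linestyle="--")
plt.axhline(bias - loa, color="k", linestyle="--")
plt.xlabel("Mean of the two methods")
plt.ylabel("Difference (method 1 - method 2)")
plt.show()
```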
The second method that is used for comparing assays is the Deming regression. It's a type of linear regression that takes into account, as I said, the error in both the x and y observations, or the two methods x1 and x2, in contrast to ordinary least squares, which is also described but is not the best method for this comparison. When you do linear regression, you may remember from your math that you get y equals mx plus b. So in this case, x2 is going to have some slope value associated with it and an intercept. The slope and intercept are used to give us a measure of constant differences and calibration error, as well as proportional deviation.
Now, I personally don't
know what these error bars are all about.
I can't explain them to
you.
So if anyone else does and wants to share on the discussion board, that would be great; I might even offer some bonus points. Otherwise, I found this graph showing how Deming regression differs from linear regression on the website shown below; you can't read it here, but you'll be able to see it in the slides when you download them. Okay, so here we have the red line being the Deming regression and the black line being the linear regression, and the red also gives you the error bounds for both x and y. So using these two types of statistical analyses, you can get a lot of information when comparing two assays, which is important.
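For anyone curious, here is a minimal Python sketch of a Deming regression, assuming the ratio of the two error variances is one; the paired assay results are made up for illustration.

```python
import numpy as np

# A minimal sketch of a Deming regression (error in both x and y), assuming the
# ratio of the two error variances (lam) is 1.
def deming(x, y, lam=1.0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.sum((x - x.mean()) ** 2)
    syy = np.sum((y - y.mean()) ** 2)
    sxy = np.sum((x - x.mean()) * (y - y.mean()))
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

x1 = [4.0, 5.1, 6.2, 7.0, 8.3, 9.1, 10.4]  # hypothetical assay 1 results
x2 = [4.2, 5.0, 6.5, 7.3, 8.1, 9.5, 10.2]  # hypothetical assay 2 results, same specimens
print(deming(x1, x2))  # a slope near 1 and an intercept near 0 suggest the assays agree
```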
A few other notes that they mention: I won't go into all the detail on the regressions, since we're not actually going to be performing any regressions in this chapter, but I want you to be aware of the methods that are used.
Okay, so the other notes pointed out in the textbook that we should be aware of. Outliers are often rejected based on some cutoff that is predetermined, whether that's a distance of three or four standard deviations away from the regression line in this case, but you should still investigate the reason they occur. The correlation coefficient r is defined in the textbook as a relative indicator of the amount of dispersion around the regression line. And they describe that testing for linearity is important; we do make that assumption when we are looking at the concept of linear regression. So you can perform a runs test to see if the positive and negative deviations are randomly distributed, in order to affirm that you can indeed make this linear assumption. And non-parametric regression can also be done using a different mathematical algorithm, the Passing-Bablok. Again, this is just so you know that you have to treat these considerations differently for non-parametric systems, but here we're generally looking at parametric distributions.
Okay, so the interpretation of systematic differences, once you've calculated, let's say, the intercept and slope from this regression, can be done using a t-test where you compare the intercept to 0 and the slope to 1. That will give you more information again about how the error differs from sample to sample and from assay to assay. Furthermore, they make a few notes about how this is used in practice, one being that when you're monitoring serial results for patients that are undergoing continued treatment, you also need to account for error that will arise from within-subject variation. So the patient might have some variation from day to day when you're taking these measurements, and there will be additional analytical error coming into play as well.
And furthermore, when looking at assays,
you need to be
aware of this concept of traceability.
So there's going to be a chain of
comparisons that occurs,
leading all the way up to the top where
we have a known reference value,
probably determined
by these governing bodies.
And then you're going to have selected measurements that are performed at the intermediate level, perhaps calibrations in your laboratory, and then you'll be doing the routine measurements. You're going to want to have statistics run and reported for each of those levels. And then there's this uncertainty concept they bring up: not only can you measure uncertainty directly from comparisons of assays, as we've just been talking about, but it can also be judged indirectly from analysis of the individual sources of error, using the law of error propagation.
Okay, so for the last part, we're going to talk about the diagnostic accuracy of laboratory tests. Here we're going to look at a test where you are testing just for whether somebody has the presence of a disease or not, a perfect example being coronavirus, or let's say antibodies in this case: do they have antibodies or do they not, IgM antibodies? How accurate your test is will be determined by the following equations. First you make a table of all the test results. The true positives are where the patient does have the disease, or does have the antibodies, and you have detected it. Then you have false positives, where they show a positive result for having antibodies but they actually haven't had the disease; false negatives, where you have a negative result but they actually do have the disease; and true negatives, where you see no response and they have not had the disease. So diagnostic accuracy is the sum of the true negatives and true positives divided by all of the measurements taken. Specificity will be the true negatives divided by the total number of actual negatives, whether they come out as false positives or true negatives. And sensitivity is the true positives divided by all of the actual positives, the true positives plus the false negatives.
So, a question: which is a better measure of correctly detecting disease, specificity or sensitivity? Take a second to think about it. What tells you more accurately whether you're going to detect the disease? Well, it's sensitivity, because in this case you want to know who has the disease, so looking at the true positives in comparison to all the actual positives will give you an indicator of how sensitive the test actually is. And we'll go on to this example in this chapter.
Deep vein thrombosis. We're also going to talk about this in the next chapter. Deep vein thrombosis can be analyzed by looking at the D-dimer quantity in the patient. The D-dimer test looks for a protein fragment that comes from blood clots dissolving in the body, so this is a blood test. The frequency distribution for patients who don't have deep vein thrombosis looks something like this, where they're going to have a low concentration of D-dimer in their blood at any time, and the patients who do have deep vein thrombosis show a higher concentration of the D-dimer in their blood. In this case, it's called a dichotomized index test because we are going to have a cutoff that we set, somewhat arbitrarily. So we're going to have to think about where to put this cutoff in order to detect the highest number of true positives and true negatives. In this case we're more interested in making sure we get the true positives, so they set the cutoff relatively low here, 500 nanograms per milliliter. And you see now the sensitivity is 97%, so they're accurately detecting the true positives 97% of the time. However, the specificity is only 37%, which means they're going to get a lot of false positives as well, and the diagnostic accuracy in this case is 50%.
This concept of a receiver operating characteristic curve as a means of determining the accuracy of the test is then introduced, where we plot sensitivity versus one minus the specificity at different cutoffs. In this case, here's our 500 nanogram cutoff, and then at higher cutoffs, if we look at the graph, we can see the numbers: here we had 2,133 micrograms per liter, and then down here is the dot for 5,435 micrograms per liter. You can see these examples marked in the textbook, where they just show the shifting of the cutoff, and you can see how much less sensitive the test is going to be at these higher cutoffs, but more specific. And the higher the area under the curve, the more accurate the test is.
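Here is a minimal Python sketch of sweeping the cutoff of a dichotomized test to get the sensitivity and one-minus-specificity pairs that make up such a curve; the D-dimer values are hypothetical, not the data behind the textbook figure.

```python
import numpy as np

# A minimal sketch of sweeping the cutoff of a dichotomized test to build an ROC curve.
# The D-dimer values (ng/mL) below are hypothetical.
no_dvt = np.array([150, 220, 310, 400, 480, 560, 700, 900])       # disease absent
dvt = np.array([450, 600, 800, 1200, 1800, 2500, 3200, 5400])     # disease present

for cutoff in (500, 1000, 2000):
    sens = np.mean(dvt >= cutoff)    # true positive rate at this cutoff
    spec = np.mean(no_dvt < cutoff)  # true negative rate at this cutoff
    print(cutoff, sens, 1 - spec)    # the ROC curve plots sensitivity vs. 1 - specificity
```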
Basically, you want to know how much better the test is than chance, which corresponds to an area under the curve of 50%. Okay, so we're
going to come back to
this example in a minute,
but they then introduce
the posterior probabilities and odds ratios.
And I'm including p-values here because it's a really important measure used in statistics when looking at whether or not a new assay or test is statistically relevant, or helping, showing that it's improving what you're currently doing with statistical, what's the word I'm looking for, significance. Statistical significance.
Okay, so anyway, the posterior disease probability, often just stated as the disease probability, is similar to sensitivity, but instead of true positives divided by all the actual positives, now we have true positives divided by true positives plus false positives. So we want to know the probability of actually having the disease given a positive result, based on the fraction of all patients with positive results. And they also show, similarly, the probability for a negative result. The odds ratio, on the other hand, is an alternative way of expressing probability, where we show that an outcome will occur given a particular exposure, in relation to it not occurring. Its formulation here is the probability over one minus the probability. So, for example, if the probability of getting a true positive is 80%, the odds ratio is four. Anything above one means you have higher odds, a higher chance it's going to happen; if it's equal to one, then it's the same, there's no association; and if it's less than one, then you have lower odds of getting it, of having any particular outcome or exposure in relation to not having it.
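Here is a minimal Python sketch of those two quantities, using hypothetical counts and the 80-percent example just mentioned.

```python
# A minimal sketch of the posterior (post-test) probability of disease and the
# corresponding odds, with hypothetical counts.
def posterior_probability(tp, fp):
    return tp / (tp + fp)  # probability of disease given a positive result

def odds(p):
    return p / (1 - p)     # odds corresponding to a probability p

p = posterior_probability(tp=80, fp=20)
print(p, odds(p))          # 0.8 and 4.0, matching the 80% / odds-of-four example above
```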
Okay, that p-value, then, I did introduce a little bit earlier: you want to look at some data and decide whether you can accept or reject the null hypothesis, which in our case is often looking to see if the average we measure is the same as the reference value. The threshold is set to determine if the observed difference provides statistically significant evidence against it, where we'll set alpha usually to something like 0.05, or 0.01, or 0.001 if we want to be particularly rigorous. And that would be the case if you're introducing a new test or treatment and you want to show that it really is different from no treatment, and you want to be especially rigorous.
Okay, so back to the D-dimer example. In the normal diagnostic process, the doctor will do a patient history and check for physical signs, including leg pain and swelling. Then, if there's still some uncertainty, additional tests will be performed. This will be done in a stepwise fashion, because we want to save time and only add a test if it's going to add value. So there was a study done for this deep vein thrombosis where the reference standard was repeated compression ultrasonography, and 20% of patients had DVT. Then they added the D-dimer assay to ask: does this actually add value? Does it make it easier for us to predict who actually is a true positive?
Okay, so this is Table 2.4 from your textbook, and it shows the other observations, I guess we'll call them, that are made in the basic model for assessing whether somebody has deep vein thrombosis: the presence of malignancy, recent surgery, absence of leg trauma, vein distension, pain on walking, swelling of the whole leg, and a difference in calf circumference. In the basic model there's no D-dimer test, and then model two includes the D-dimer test. So, just as an example of the odds ratio, it points out that you are two times more likely to have DVT if you have pain in your leg in the absence of any leg trauma. That's true in both models: two times more likely. The other thing to note is the effect of the variable recent surgery. If you've had recent surgery, then your odds ratio in the basic model comes out to 1.6, so you're 60% more likely to have deep vein thrombosis. But once you include the D-dimer, that recent surgery number comes down to one, so they're showing there's no association between recent surgery and deep vein thrombosis once the D-dimer result is included.
So here we show the receiver operating characteristic curve again, where we're plotting sensitivity against one minus specificity. You can see that adding the quantitative D-dimer assay to the model moves the curve from here up to here, and as we talked about before, the higher it is toward the top left, the more accurate the test is. The accuracy has moved from 0.7 up to 0.867, which is great.
Okay, so then there is a working pathway, a framework that helps us to determine whether we should indeed include a new test, whether it's going to benefit the patient or risk causing harm, and they go through a number of questions one should ask to determine if it's going to be helpful or potentially dangerous. You need to anticipate the technical or analytical capabilities of the test; identify the unintended and intended results of the test; identify individuals in whom the test effects are likely to occur; anticipate any mechanisms through which these effects will occur; assess existing care in the target contexts or individuals; and estimate the expected timeframe in which potential risks and benefits might occur. So that's something that's being considered when introducing a new diagnostic test, and it's important.
So, to summarize the second part: diagnostic accuracy is an indication of the frequency and type of errors. A cohort design based on patients suspected of having the disease is important; they don't really go into the details of that, but they included a table in your textbook, just for interest's sake, and you won't be tested on that. The measure of diagnostic performance in relation to the setting and other measures is important to keep in mind. And we're going to focus on quantification of diagnostic accuracy and combinations of index tests that add value. So
at this point, we're finished with this chapter on statistics. Perhaps your mind has been blown, in one way or another, but I'm glad that we've gotten through this part of the course. We're going to keep talking a little bit about statistics, in terms of diagnostic accuracy, sensitivity, and specificity, all the way through the course, but hopefully this will give you a good base for understanding that and moving on to some other interesting topics.
Lecture 3 Transcript (Ch. 5)
Okay, welcome to the third lecture in
this week on Chapter five, reference values.
I know we talked about this
in the previous chapter and
we're going to talk about them many,
many times because they are
very central to clinical chemistry.
And I think you're going to
find that there's quite a lot of
review in this chapter from
the things we discussed in chapter two.
So hopefully that will be helpful since there
was so much information
in the previous chapter.
And this one, I think you'll find it to be
shorter and easier to digest.
Now, at the beginning of the chapter, we discuss how a diagnosis is made, something we talked about in chapter two as well when we were looking at the workflow. In practice, the diagnosis is made, and a medical decision is made, based on all the collected data. That includes the medical interview and physical exam, as well as the medical laboratory test results. And these results are always compared with reference data, so the decision-making is done by comparison, and the data really would be useless without something to compare to.
So the point is, we really need these reference values, or we can't draw any conclusion about what's going on. Now, the reference values really need to be from representative populations, and those are also going to need to draw from all types of people, including healthy and diseased, hospitalized and ambulatory patients.
order for it to be considered valid.
So first of all,
groups of reference individuals
should be clearly defined.
The patient that's
being examined should really
resemble the reference individuals in
our respects other than test aspects.
So for looking at a diabetic patient,
we want the reference individuals
to be
diabetic patients as well for
trying to figure out what's a normal range
or acceptable range or a certain value.
Now, the samples should be obtained
and process similarly for
the test of the patient
that is in question as
to how the information
was obtained for the reference population.
And it must be well documented so that we can
refer back to that and make sure that
indeed they were done in a similar fashion.
The analysis quantities must
also be the same.
And the results from standardized methods.
And the results must be obtained
from standardized methods.
And Paul will always
using sufficient quality control.
And then of course,
the known clinical sensitivity, specificity,
and prevalence in
the population should be included.
And that will be helpful in
the decision-making as well.
Okay, so who should be
included or excluded
from this reference population?
We often talk about
selection criteria for patients.
In terms of inclusion or exclusion criteria.
A random sample is desired because otherwise there's potential for bias. But the random sample might come from a certain population, so we can partition by a number of different conditions. Sex is one we even looked at in the last lecture, where we saw different reference frequency distributions by sex for bilirubin and for GGT, which we'll revisit again in this chapter.
And it could be by blood type
or ethnicity or other criteria as well.
Table 5.1 in the textbook gives some examples of partitioning criteria.
And that means not
that we're going to exclude it,
but that we're just going to
separate it and use it
for the appropriate patient comparison.
Now on the other hand, exclusion criteria, which include some of these same criteria, would remove individuals from the reference population in order to get a better view of where we think our patient's value should fall.
So this could include pregnancy,
drug use, recent hospitalization,
transfusion, or any number of
exclusion criteria listed here in table 5.1.
Now, an important consideration
when we are looking at
that reference values compared to
the patient diagnostic test,
is that we want to be sure
that in addition to all
the things you just
mentioned that the pre analytical process
is also standardized.
We want to minimize any bias
or biological noise.
And so we want to make sure that
the individuals are prepared
for the test in the same way.
So for example, if we know we want to eliminate the interference of nonessential drug use, then we ask them to refrain from it for two days before the test.
It also often matters how the collection itself is made. Body posture will actually influence some of the non-diffusible analytes such as serum albumin, so you want to have your patient sitting, not lying down, for example, in order to ensure that you're getting similar results from different patients. And if you're doing intra-subject, subject-based comparisons within the same subject, then you especially want to make sure that your collection procedures are the same. And how we handle the sample prior to analysis is also critical.
So, once we're ready to begin determining the reference values, we need to keep in mind the method of analysis that we're going to use, and be sure that it is standardized too: the equipment, the reagents that we're using, the calibration of the equipment, the raw data and the calculations we use to determine the value. These are all important things to ensure are the same in both the reference values that are obtained as well as the diagnostic test for the patient. And quality control is also really critical.
And we now know that the labs are
using some kind of quality control,
either or both internal and external,
comparing to other labs.
Internal would be, for example, checking that if you change to a new reagent you're still getting the same results. That is also related to reliability, where we're looking at a statistical analysis to show how reproducible and how repeatable our measurements are, and we want to ensure that these are high quality in order to be confident in any result that we're obtaining.
Ok, so once
we collect the data, then what?
Well, as I said earlier, we may partition the data in order to be more representative of the distribution that makes sense for the patient we're comparing the data to. And we want to collect a sufficient amount for each partition, and we can combine partitions if that makes sense, because the more values you have, the better your information represents the true population, as we learned in the last chapter. You can also inspect the distribution visually at this point to see if it's skewed or normal or bimodal, or if there are any visible clues as to the reference limits themselves, or outliers.
So here's an example: again, we're back to the gamma-glutamyl transferase (GGT) that we were looking at in the previous chapter, but with a different frequency distribution. You can see it's still not normal; it's skewed toward the upper concentrations, and there is a data point way up here, so I think that's an outlier.
And then if so,
we want to be able to test to
see if it truly is or not.
So we see that it deviates significantly, but we have to be careful, because the data are skewed in this direction anyway, so it could be a true value. But we can use the Dixon range test that the textbook introduces, where we look at the quotient of the difference between the two highest values (or the two lowest values) divided by the total range, and then see if this ratio is greater than or less than a certain cutoff, in this case one third (33%). In this scenario we see that indeed, when we make this calculation, the difference between these two values, 74 minus 50, is 24, and putting that over the spread of the data, 68, gives us 0.35. Indeed, it is higher, and that gives us a reason to reject this outlier and exclude it from the sample population.
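To make that arithmetic concrete, here is a minimal sketch in Python of the Dixon-style range check just described. The 1/3 cutoff and the extreme values (6, 50, and 74 U/L) follow the lecture's GGT example; the other values and the function name are made up for illustration.

# Minimal sketch of the Dixon range (D/R) outlier check described above.
def dixon_range_ratio(values):
    """Return (ratio, suspect) for the highest value in a sorted sample."""
    data = sorted(values)
    spread = data[-1] - data[0]     # total range of the data
    gap = data[-1] - data[-2]       # gap between the two highest values
    return gap / spread, data[-1]

sample = [6, 7, 12, 15, 20, 28, 35, 41, 47, 50, 74]   # hypothetical GGT values (U/L)
ratio, suspect = dixon_range_ratio(sample)
if ratio > 1 / 3:
    print(f"{suspect} is rejected as an outlier (D/R = {ratio:.2f} > 0.33)")
else:
    print(f"{suspect} is retained (D/R = {ratio:.2f} <= 0.33)")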
Okay, so once we determine the reference limits... well, sorry, we haven't determined the reference limits yet, just removed the outliers. Now we can actually calculate what this interval is going to be, as shown here and described in the textbook. In clinical practice, the observed patient's value is going to be compared with this reference interval. So how do we calculate the actual values that define the boundaries of the, well, we're not going to call it the normal range, but the health-associated range?
So there are three ways
that they're outlined in
the textbook that you
could do tolerance interval,
prediction interval or
an inter percentile interval.
And it's noted that if
your sample size is greater than a 100,
then the difference
between
these three methods for
determining a reference interval
are really negligible.
And it turns out that the easiest one to
calculate is the inter percentile interval.
And we've already been looking at percentile,
so we're familiar with that.
And it's defined here as an interval that's bounded by two percentiles of the reference distribution. It is the most commonly used and is recommended by the IFCC. The convention typically is that the interval will be bounded by the 2.5th and the 97.5th percentiles, and that will give you the 95% central interval that we have been talking about in the previous chapter as well.
So if, if your distribution is highly skewed,
you might need to choose different values.
And then, not only do we want to know what the reference interval is, but we also want to know the uncertainty in those limits. So we're going to look at the 90% confidence interval for the true percentiles; there are a lot of bounding things here, intervals and ranges, okay?
So a lot of the data, as we discussed in the previous chapter, a lot of biological diagnostic tests, will reveal a non-Gaussian distribution, as we've seen with GGT and as we'll see with many other things. So the nonparametric analysis is going to be the preferred method for figuring out the percentile cutoffs.
The method to do this is outlined in the textbook and summarized here. You're first going to sort all of your reference data in ascending order and rank them. Then we want to determine the rank numbers associated with the percentile cutoffs. You can use this formula here, where the rank number equals the percentile times (n + 1), with n being the sample size. So in the example given in the textbook, n is equal to 123 in this case.
And now we're going to determine
the 90% confidence interval
on that reference interval,
as I'll show you in the next slide.
But first, we look at figure 5.3, where, I am sorry, this image is a little dark, and we see the computed reference interval highlighted in blue. Let's just see how it's been calculated. Table 5.3B shows the calculations where, as I said, you can use the formula that was on the previous slide. We want the 2.5 percentile rank limit, and that's going to be 0.025 times (n + 1). In this case that gives us 3.1, which is closest to the rank of 3. Similarly for the upper bound, 97.5, it's going to be 0.975 times (123 + 1); we get 121, basically, for the rank. When we go back to this table here, we see that rank order 3 corresponds to a GGT value of 7, and rank order 121 corresponds to a GGT of 47, as shown here.
And then we can use a table 5.2
that shows us the
nonparametric confidence intervals
of the reference limits.
And now we're going to look at
the uncertainty in these,
in these numbers that we just calculated.
So for a sample size of 123, which falls into this first line of the table, we have rank numbers of 1 and 7 for the 90% confidence interval. If we plug these into the formula as shown here: first of all, we just go to the table for the lower limit and see the lower rank number being 1 and the upper being 7. Those correspond to rank order number 1, which is a GGT of 6, and rank order 7, which is 8. So that's where the 6 to 8 units per liter comes from. To get the upper one, we're going to do 123 plus 1 minus the upper value and minus the lower value, in order to get the confidence interval on the upper reference limit. Those correspond to 117 and 123. Going back to our table, 117 is 39 and 123 is 50. So our lower reference limit is going to be 7, bounded by 6 to 8, and the upper limit is 47, bounded by 39 to 50 units per liter.
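Here is a small sketch in Python of the rank calculation just walked through, using n = 123 as in the textbook example. The GGT values themselves and the table 5.2 confidence-limit ranks are not recomputed here; they would be read from the sorted data and from the table.

# Sketch of the nonparametric reference-limit ranks for n = 123, using
# rank = percentile * (n + 1) as described above.
n = 123

def percentile_rank(p, n):
    """Rank number for percentile p (0-1) in a sorted sample of size n."""
    return round(p * (n + 1))

lower_rank = percentile_rank(0.025, n)   # 0.025 * 124 = 3.1  -> rank 3
upper_rank = percentile_rank(0.975, n)   # 0.975 * 124 = 120.9 -> rank 121
print(lower_rank, upper_rank)

# In the worked example, sorted value number 3 is GGT = 7 U/L and number 121 is
# 47 U/L, giving a 95% central reference interval of 7-47 U/L. The 90% confidence
# ranks on those limits (1 and 7 for the lower limit; 124 - 7 = 117 and
# 124 - 1 = 123 for the upper limit) come from the textbook's table 5.2.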
If you use the parametric method, however, the statistics are easier to run, and then you would just look at
the interval
being two standard deviations from the mean.
And they do give an example in the textbook that I'm not going to go into details on.
I just want to point out
that for the parametric,
it's really highlighted the importance
of testing for goodness of fit.
So you want to be sure that
the cumulative frequency when plotted
on Gaussian probability paper is linear.
in order to be sure that you can use the parametric statistical analysis. If you find that it's not, as in the graph here, where the fit doesn't seem that linear, there are some transformations that may allow you to get a more linear fit and thus be able to use your parametric statistical analysis. So for example, if it looks like this, we might try either y equals the log of the concentration, or the square root of the concentration. In this case they show the same data plotted as cumulative frequency versus the log of GGT, and now we do see a more linear relationship.
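As a rough sketch of the parametric route on transformed data, the snippet below computes a mean plus-or-minus two standard deviations interval on log-transformed values and then back-transforms it. The data are made up for illustration, not the textbook's GGT data set, and it assumes the log-transformed values are approximately Gaussian.

# Sketch of the parametric approach on log-transformed data: compute the
# mean +/- 2 SD interval on log10(x), then back-transform to original units.
import math
import statistics

data = [7, 9, 11, 12, 14, 15, 17, 19, 22, 25, 28, 33, 40, 47]   # hypothetical values
logs = [math.log10(x) for x in data]

mean = statistics.mean(logs)
sd = statistics.stdev(logs)

lower = 10 ** (mean - 2 * sd)   # back-transform the lower limit
upper = 10 ** (mean + 2 * sd)   # back-transform the upper limit
print(f"parametric (log-transformed) interval: {lower:.1f} to {upper:.1f}")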
So table 5.4 compares the results of using the nonparametric and parametric methods to determine the reference intervals for the same data set, three different methods in total. The nonparametric gives us a lower limit of 7 and an upper limit of 47, with the confidence intervals shown in brackets.
And we see there's one value below
the lower limit and two values above
the upper limit for
the parametric untransformed.
So that will be this plot up here.
We get a lower limit of 0,
but it's bounded by
a negative number and an upper limit of 36.
But here we see that there are no values below the lower limit, while there are seven values above the upper limit. On the other hand, if we use the parametric transformed data, according to this graph here, then we see the limits are more in agreement with the nonparametric: we have one value below and six above. So even though the transformed data appear linear, it still seems that the best approach for this data set would be the nonparametric one, where we're not so skewed toward the lower end of the concentration values.
Okay? So how do
we use these reference values?
Well, as we've been talking
about in this lecture and the previous one,
we're going to interpret
our medical laboratory data
comparing
patient's values with the reference value.
And as we've also seen before,
flag it as high or low
if it's outside of the interval.
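A minimal sketch of that flagging step, assuming we already have a reference interval for each analyte; the analytes, intervals, and patient values below are hypothetical.

# Sketch of flagging patient results against their reference intervals.
reference_intervals = {            # analyte: (lower limit, upper limit)
    "glucose (mg/dL)": (70, 99),
    "sodium (mmol/L)": (135, 145),
}
patient_results = {"glucose (mg/dL)": 62, "sodium (mmol/L)": 140}

for analyte, value in patient_results.items():
    low, high = reference_intervals[analyte]
    flag = "L" if value < low else "H" if value > high else ""
    print(f"{analyte}: {value} {flag}")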
So this is just another example, a test from a patient. Here we see most of the values are falling within their ranges, except one: glucose here is shown to be low, and so it's been flagged, and that allows the doctor to follow up on it if necessary. Okay, so a note about subject-based versus population-based reference values.
We need to be aware that a change in an individual may still be within a reference interval that's thought to be associated with health, yet may not be normal for that patient. So figure 5.3 shows us, let's say, this patient A here, and you get a measurement for whatever we're measuring, let's say a GGT of b, that would fall within the normal range, where it wouldn't raise a flag. However, if this other value is your normal, then b is outside of your own normal range and is indicative of something important. So we want to be sure that we're keeping an eye on these data as well, if we have access to a comparison with subject-based reference values.
And this is another example here, shown on the right, looking at IgM basically as a function of time, of when different patients have been monitored. This just shows that the intra-individual versus the inter-individual variability is quite different. This patient is going to be well outside the range of the others, but their values, compared within themselves, don't vary very much. So it's important to be aware of these things.
Transferability is another key concept, where we need to know: are these populations really comparable? Are there standardized protocols, from specimen collection to the analytical process, being carried out? Is there a common calibration that we can use to be sure that we're actually able to compare results from different labs? And is there an external body control in effect? So sometimes, probably quite often, we'll actually use reference values that we haven't determined ourselves.
And these could come from manufacturers' inserts, so sometimes the company who makes the test will provide the reference values. They could also come from peer-reviewed publications: if scientists have been studying this and found good values, that's the reference range that will be used. And they could come from multicenter trials, so that you're really increasing your population and correcting for any variability from lab to lab. For verification of the transfer, of course, there is the CLSI, the governing body we mentioned before; they can provide the calibration, but they also give guidance on how to do so: for example, a minimum of 20 reference value measurements with no more than two falling outside of the range that you're looking at.
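As a rough sketch of that verification step, assuming the "about 20 samples, no more than 2 outside" guidance just mentioned, something like the following check could be run; the interval and the measurements are hypothetical.

# Sketch of verifying a transferred reference interval: measure about 20
# reference individuals and accept the interval if no more than 2 fall outside.
lower, upper = 7, 47                      # transferred reference interval (U/L)
results = [9, 12, 15, 22, 8, 30, 41, 18, 25, 11,
           46, 33, 14, 50, 19, 27, 10, 38, 21, 16]   # 20 verification samples

outside = sum(1 for x in results if x < lower or x > upper)
print("accept interval" if outside <= 2 else "re-establish interval",
      f"({outside} of {len(results)} outside)")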
Okay? And then we have another note
about the selection of the reference values.
So labs are really responsible and do
take the lead on providing
reference limits for the tests.
But it's a multidisciplinary group that would decide, including clinicians and clinical laboratorians.
So it's important to
understand these so that you can actually
weigh in on the conversations
about what reference values
make sense in your experience.
Okay, so this page
really should look familiar because
we're just looking at the sensitivity
and specificity again,
where we know we have our true positives,
true negatives, false negatives,
false positives for patients with
disease without the disease, et cetera.
And the formulas are just shown in the table here, as they were before, for sensitivity and specificity; it then calculates the total negatives and positives and gives the predictive value. We did talk about the predictive value before, but let me just remind you that this is basically: if you get a positive result, what is the probability that the patient is actually positive? So that's the predictive value for a positive test, and similarly for a negative test.
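A small sketch of those 2x2-table formulas in Python, with made-up counts, just to make the definitions concrete.

# Sketch of the 2x2-table calculations behind the formulas shown here.
tp, fn = 95, 5      # patients with the disease: true positives, false negatives
tn, fp = 90, 10     # patients without the disease: true negatives, false positives

sensitivity = tp / (tp + fn)   # fraction of diseased patients correctly detected
specificity = tn / (tn + fp)   # fraction of healthy patients correctly negative
ppv = tp / (tp + fp)           # predictive value of a positive test
npv = tn / (tn + fn)           # predictive value of a negative test

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
print(f"PPV {ppv:.0%}, NPV {npv:.0%}")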
Now, a concept we didn't really talk about in the last chapter is prevalence, which just means how much of the population would actually be expected to have the disease. It turns out this does affect the predictive value, and that's really important for knowing how valuable a test really is.
So these two
figures show same disease,
but with different prevalence.
So let's say the prevalence is 50%.
And we're told that our instrument and has
a sensitivity of 95% and
a specificity of
90%. And that's pretty good.
So we find that our predictive value for patients who test positive to actually be positive is 90%, and for patients who test negative, that predictive value is 95%. So we're doing a pretty good job predicting here. But once the disease prevalence goes down to only 5%, that really changes. We still have a sensitivity of 95% and a specificity of 90%, but now, when we plug in the numbers, we get a predictive value for the positive test that is much reduced. Actually, they wrote 5% here, but I think that was just a copy error; the actual value, if you calculate it yourself, you'll find is 33%, and that's also in the figure caption. But regardless, 33% is still much lower than the 90% here. And the negative predictive value goes up, which is nice, but often we want this positive number to be higher too.
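Here is a short sketch of the calculation behind those two figures, holding sensitivity at 95% and specificity at 90% and varying only the prevalence; it reproduces the roughly 90% versus 33% positive predictive values discussed above.

# Sketch of how prevalence changes predictive values for fixed sensitivity/specificity.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence                 # true positives per person tested
    fn = (1 - sensitivity) * prevalence           # false negatives
    tn = specificity * (1 - prevalence)           # true negatives
    fp = (1 - specificity) * (1 - prevalence)     # false positives
    ppv = tp / (tp + fp)    # P(disease | positive test)
    npv = tn / (tn + fn)    # P(no disease | negative test)
    return ppv, npv

for prev in (0.50, 0.05):
    ppv, npv = predictive_values(0.95, 0.90, prev)
    print(f"prevalence {prev:.0%}: PPV = {ppv:.1%}, NPV = {npv:.1%}")
# prevalence 50%: PPV about 90%, NPV about 95%
# prevalence  5%: PPV about 33%, NPV about 99.7%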
Okay? So I'm just going to
show here a couple of examples of
how this is important in
our current fight against the coronavirus.
So here is a graph from
the literature from this paper shown here
that is highlighting how the cutoff is being chosen in trying to determine the probability of having a false positive or false negative test for SARS-CoV-2.
So, so this graph
is discussing the problem
with false negative tests.
So if we have a PCR test for the coronavirus that's not very sensitive, then we are likely to get more false negatives.
But this is actually graphing the chance of infection if we have a negative result when visiting somebody else, versus the chance of infection if you haven't had a test and you don't know if you could be an asymptomatic carrier. So if you have the ability to get a very accurate test, then you are more confident in your negative test result. It means that you have more of an opportunity to spend time with people whom you might be worried about infecting. Well, that was a little confusing, but anyway,
this is just an example of
how these statistics are being used
currently right now to
try and better understand
how important
social distancing as for example,
and how effective it is and
how important it is to
have really accurate testing.
Now, you're also probably hearing a lot about the antibody tests in the news as well. So here's just an example of one of the commercial assays that are out there today, this IgG assay. It has been shown to have a sensitivity of 100% and a specificity of 99%.
And then it gives
you the confidence intervals,
just like we've been talking
about on the previous slides.
And it also gives you the predictive value,
positive and negative predictive value.
But it has to make
the assumption of a prevalence of 5%.
We don't actually know
what the prevalence is.
And you learned on the last slide that it's really important to know the prevalence in order to know how accurate this assay would be in predicting an actual positive presence of antibodies given a positive result.
So just some real-world examples
of things we've been
talking about in this chapter.
The last slide for this chapter just highlights one of the things that the textbook says is important to keep in mind, and this is true.
So when might normal actually appear to be abnormal, and vice versa? We looked at that example of subject-based versus population-based reference values, so this is basically reiterating that, but with some real examples.
So if you happen to have a low serum albumin measurement, or well, maybe not measured, but you just happen to be a person with low serum albumin: because of the connection between albumin and calcium, you might get a normal-looking calcium value from your blood test. But it would actually be pathologically high for you, because you should also have a low calcium if you have low serum albumin.
So it's important to know these associations
in order to be able to really
understand what value means
and how we can compare it
to the reference population.
Similarly, if you have had a prostatectomy,
then you would be expected to have
an abnormally low concentration
of
PSA, prostate specific antigen.
But we need to know that when
we're looking at our patient and
comparing them to the right reference values.
So, in conclusion: even when reference limit studies are done well, we have to remember that there are dependencies that can render them misleading, and to keep that in mind.
And that's the end of what
we're going to discuss for chapter five.
Lecture 4 Transcript (Ch. 6)
Okay, welcome to this lecture on chapter six,
specimen collection,
processing and
other pre analytical variables.
As we've talked about
in the previous chapter,
proper handling of specimens is really
critical to ensure that
we have meaningful data
that can be comparable
to the reference values.
So there are aspects
of how we handle our specimens
that may cause errors otherwise.
So to minimize these errors,
we want to be careful and
standardize the way that
a sample is collected.
Make sure that it's being properly identified, the way it's been processed, the way it's being stored, as well as the way it's been transported.
All of these things will
affect the quality of
the specimen before it is analyzed.
So it should also be noted that
it will be specimen type dependent.
So what kind of specimens are
we looking at anyways?
Well, you know, we've generally been talking about blood so far, and we talked about bodily fluids. So I'll just list them here as the textbook does. We're going to be looking at whole blood, or just the serum, or just the plasma; urine, feces, saliva; other bodily fluids such as spinal, synovial, amniotic, pleural, pericardial, and ascitic fluid; and some solid samples including cells and solid tissue biopsies.
So I find sometimes there's videos on YouTube
that highlight some of
the important information and a voice
that's not mine and might
be of interest to you.
And these guys make some great videos.
So I'll let Dr. Mike explain to you what's in blood, because this is going to be
important for all the chapters going forward.
Hey everybody,
Dr. Mike here. In this video we're going to take
here in this video we're going to take
a look at blood and
the components of blood
in something called hematocrit.
First thing is, let's go through some blood facts. What is the pH of your blood? Remember, pH reflects the concentration of hydrogen ions, and it's between 7.35 and 7.45. The point really is, if the pH of our blood goes outside of this range, things can start to go bad.
Weight: how
much of our body weight is actually blood? Well, it's around about 8% of our body weight. For males, we're talking about five to six liters of blood.
And for females it's around
about four to five liters of blood.
And the temperature of blood in our body is about 38 degrees, and blood is the primary way of shifting heat around the body. If it gets too hot, then the blood vessels dilate and we release heat. If it's too cold, blood vessels constrict and we hold on to that heat.
And what type of tissue is blood?
Remember the four tissues of the body? Nervous, epithelial, connective, and muscle. Blood is connective tissue. So bone is connective tissue, you've got cartilage is connective tissue, and you've got blood is connective tissue. And connective tissues are made up of cells, gels, and fibers, and it's the concentration of fibers and the types of gels that determine the viscosity or hardness of the connective tissue. There are obviously few fibers in blood, and therefore it's a liquid.
So what I've drawn up here is a tube. I've taken my blood, popped it in the tube, I've put this tube in a centrifuge, and I've spun that centrifuge around, and what that does over time is separate out the components,
the major components of blood
according to weight and size and what we get.
We get the biggest and heaviest things down the bottom, and the lightest and smallest things up the top.
Alright,
there's three layers we
need to talk about, right?
123. So the first layer is
the layer at the top
that we're going to focus
on to begin with.
And that is what we call blood plasma.
Basically, 55% of your entire blood volume is blood plasma. And blood plasma is made up of
three main things that you should know.
It's made up of water,
proteins and solutes.
So out of this 55%,
you're going to find that 92% of plasma is water. So if most of the blood is plasma, and most of the plasma is water, then most of our blood is water.
Alright? Proteins make up around about 7%, and you're going to find that solutes are less than 1%. Now, with the proteins, there are three main types of proteins in blood that you should be aware of. Let's first focus on albumin and then I'll talk about the others. So albumin is an important one, the most abundant protein in our blood. It does a couple of things, right?
So one, it's a carrier or transport protein.
And what it usually carries around
are substances that are lipid soluble.
So if it is lipid
soluble, it doesn't like water.
Most of the blood is water, so
it doesn't want to be in the water,
but it still needs to be transported.
So lipid-soluble substances bind to albumin, right? They could be lipid-soluble drugs, for example.
Or they could be lipid soluble hormones.
And the lipid soluble hormones
include things like steroid hormones, right?
The second type of protein you should know is the globulins, and the globulins play a big role in immune function and clotting. Let's just start with immune. Oh, one thing I forgot about the albumin, because I said it does a couple of things: another point for albumin is that it is the most important protein when it comes to maintaining osmotic pressure.
What's osmotic pressure? So remember,
osmosis is the movement of water
towards an era of high solute concentration.
So if you think of a
blood vessel, for example,
a capillary: capillaries have holes in them and they feed the tissues. Outside of that blood vessel are cells that need to be fed, so it's the oxygen and nutrients, for example, that need to come out to feed those tissues, but at the same time water comes out.
So throughout the day your capillaries
are constantly leaking fluid.
Now, over an entire day, if none of that fluid was reclaimed, we would lose most of our blood volume, maybe five to six liters' worth, just into the periphery or the interstitium between the blood and the tissues. That's where the fluid would go, and blood pressure would go down. Not a good situation. So we need a way of reclaiming that fluid back in, and the major way of doing this is albumin.
It's a protein that sits in the blood; the protein has a negative charge and it loves pulling water towards it. And that's called maintaining osmotic pressure. Albumin, just like the globulins and just like fibrinogen, which is going to be the third protein type, is all made in the liver. So if the liver isn't doing too well, you might not produce enough albumin, you may not maintain osmotic pressure, and fluid may remain out there; this is edema. A really important clinical link there.
So as we said, globulins are important for immune function, but they're also important in clotting. And then the third protein is fibrinogen. Fibrinogen is an inactive protein that needs to be activated into fibrin, and it's involved in the clotting cascade as well.
Perfect. But what about solutes? Well, the types of solutes we have: the solutes are going to be things like ions, and the ions are sodium, potassium, magnesium, calcium, chloride, things like that.
Nutrients. There's nutrients may
be glucose or amino acids,
fatty acids, for example.
Gases. Gases
like oxygen
and carbon dioxide and nitrogen. And waste.
Uric acid or ammonia for example.
By waste, metabolic wastes are what we're referring to. So you're going to find that plasma makes up most of the entire blood at 55%: water, proteins, and solutes, and these are the components of that.
Next part is this part here.
So this part here is
the smallest component of
our blood called the buffy coat.
It is this white buffy layer.
If you put it in a centrifuge,
it makes up less than 1%
of the entire blood volume,
and it is made up of
leukocytes and thrombocytes,
which are white blood cells and platelets.
So leukocytes: let's do the leukocytes first. The leukocytes are around about 10 thousand cells per microliter, right? And leukocytes, like I said, are also known as white blood cells. Leuko means white, cyte means cell. There are different types of leukocytes. Remember the mnemonic: never let monkeys eat bananas. There's your mnemonic: neutrophils, lymphocytes, monocytes, eosinophils, basophils. And that's also their order of abundance, most abundant to least abundant.
So most of our white blood cells are neutrophils, really important in, what's it called, when you get damaged vascularized tissue: inflammation. So those are the leukocytes; on to the other one, which was thrombocytes, which are platelets.
They're more cell fragments than cells themselves, coming from megakaryocytes. But like I said, platelets. And you have around about 300 thousand per microliter, right?
So there's more platelets
in number than there are white blood cells.
And platelets play
really important roles in clotting.
So these leukocytes,
white blood cells are
therefore immune function, right?
So you get T cells and B cells and
all these other cells that
have really important nuclei as well.
So we'll talk about that in a future video.
And platelets which are involved
in the clotting cascade.
Alright, the last one on the bottom is, sorry, the erythrocytes. What am I talking about? Erythrocytes, which means red cells: leukocytes are white cells, erythrocytes are red cells. So the RBCs, red blood cells, and we have around about 5 million per microliter, one of the most abundant cells in the entire body, and what they do is they carry gases, right?
They carry oxygen,
carbon dioxide, really important.
Red blood cells are filled with
hemoglobin that carry oxygen.
Okay, so when we take blood and we spin it down and we measure this, the percentage of this bottom layer is around about 44% for males. For women, the red blood cell percentage is around about 40%. And this is also called the hematocrit, right? So measuring hematocrit is simply measuring the red blood cell percentage of whole blood: males around about 44%, females around about 40%, plus or minus a few percent.
Now here's the thing. The reason why
we do this is if it goes too low,
it might be an indication of anemia.
Not enough red blood cells.
If it goes too high,
maybe an indication of polycythemia.
And these will be
the topics of future videos.
So as we look at the hematocrit, or blood components, there are three major parts: plasma at about 55%, buffy coat less than 1%, and erythrocytes at about 44%.
So hopefully that helps
looking at blood components.
Okay. I hope you
enjoyed listening to him breaking
down what's in our blood
as much as I found it useful.
And they have a lot of other videos that you
might find helpful as we
go through the course material.
He's obviously much more well versed in the anatomy and physiology parts of what is important to clinical chemistry.
Next slide. Okay, so blood collection obviously is an important aspect, and as you may know, it's done via phlebotomy. One of the most unusual words in this course, I think; phlebotomist is such a funny-sounding word, but it's a funny name for a very important job.
Ok, So usually it's
venous blood that's going to
be
used for all of
these tests that we're interested in.
And here are some of the preliminary steps that one must take when about to do a venipuncture. This is of course guided by the governing regulatory body, the CLSI, and its standards. Make sure that you are performing active patient identification, confirm any relevant constraints such as fasting, make sure that you have consent, and wear the appropriate PPE, personal protective equipment.
Make sure that, you know if
your patient has to be seated or supine.
Make sure you know the volume needed, and we want to minimize the amount of blood that is taken, as well as the number and types of tubes; we'll get to that in a second. The right needle type and location, the vein I mean, and site prep, cleaning with alcohol and letting it dry, and timing, whether during the day or after fasting, and the order of draw for multiple specimens.
So when you have multiple tubes, you need to know which one to draw first and make sure the proper order is used. Okay. And sometimes, instead of venous blood, blood will be collected using a skin puncture, especially in infants: instead of inserting a needle in the vein, a finger prick, or perhaps a heel prick on the foot as well.
Now, sometimes you do need blood drawn from an artery, an arterial puncture, and this also has a bunch of standards associated with it. But this is apparently a little more complicated, and so it's typically performed by a physician and not the phlebotomist.
Here's just a list of regulatory documents to
give you an idea of how
tightly regulated these procedures are.
You don't need to know these, I would say, but just to give you an idea that these are things you can look up if needed; I'm telling you, you don't need to know all these things. And regardless, your exams are going to be open book, as they must be in an online environment.
So I mentioned having the right tube and the right blood draw order.
So table 6.2 just
highlights
what's going on with
the different colored caps
on the blood collection tubes,
which you may have noticed
when you had your blood drawn.
Here is a nice guide
to tell you what
each
of the different colors means.
So just give some examples
of what might be in the tube along with,
well, let me go back for a second
and just point that out.
So for example, if you have a yellow stopper color, this is just a sterile tube with sterile media added for blood culture. Royal blue: no additive (or, in one version, EDTA). Okay. With a green cap, we have a heparin tube with or without gel; with a lavender one, you have EDTA in the tube. And another yellow one has ACD, for molecular studies and cell culture, or gray for a glycolytic inhibitor.
So moving on to the slide again. Serum, which is usually the specimen of choice, is going to be the plasma without the proteins involved in blood clotting, because basically what you do is let it clot and then get rid of the clot; that's serum, without the clotting proteins. Plasma is the non-cellular component of anticoagulated blood, where heparin is used to keep the blood from coagulating; it's an anticoagulant, so you don't have to wait for clotting, and plasma can be used as well.
Some of the anticoagulants and preservatives that you would find in these blood collection tubes are listed here and described more in the text. Heparin is the most widely used one, but it's not suitable for PCR studies because it inhibits the polymerase. EDTA, which is ethylenediaminetetraacetic acid, is a chelating agent that is not suitable if you're going to be doing measurements for calcium, magnesium, or iron. And sodium fluoride is a weak anticoagulant preservative that inhibits enzymes involved in glycolysis.
There seems to be a spelling mistake there, just a spacing issue. Citrate is used for coagulation studies, and acid citrate dextrose is used to enhance the vitality and recovery of white blood cells, if that's what you're trying to investigate. And oxalates inhibit several enzymes, and iodoacetate is an antiglycolytic agent which inhibits creatine kinase.
So you can see, depending on the study you want to do, you're going to pick the right tube. They're all made with the additive already present and in the quantity required, but that means it's important to fill the tube to the level that is recommended, typically to fill it fully. And I'll just say it's important
to keep the samples in their designated primary container and never transfer them, as
I mentioned. Okay. Urine is described as well as one of the other most prevalent specimens being studied, and here timing is important. They claim the early-morning fasted specimen is the most concentrated. Stream considerations also come into effect for different studies: the first ten milliliters is good if you're trying to look for urethritis, so bacteria of the urethra, bacterial infections, or midstream if you want to look for bladder disorders. Drug and alcohol testing is often done with urine, and there are very stringent regulations on how the specimen is collected to ensure there's no tampering or ability for someone to change what's going into the specimen. It's important to mix before transfer and to keep it at a cooler temperature. Additionally, besides keeping it cold, it might be mixed with a mild base such as dilute sodium hydroxide as a preservative.
Okay. So that was just a cursory review of all the different specimen types, which you may remember, and then they added a few others here, with some details on the solid-type tissues, including CVS (chorionic villus sampling), where you're looking for chromosomal or genetic information on the baby, and buccal cells for genomic DNA, or malignant tissue, and toxicology.
Couple more notes in the chapter
on collection, storage, transport.
Make sure that you have properly
identified and correct container.
Keep it cool if indicated. Now, the plasma and serum are separated from cells by centrifuging, and they do that under refrigeration, and they have to be separated within two hours of collection. Hemolysis is the breakdown of red blood cells and can occur if the tubes are improperly filled or handled.
Various laws and regulations apply
to shipment of biological samples,
as you would imagine.
And there are lots of protocols and
regulations that must be
followed for each step,
from collection
to the storage to analysis.
Okay, so just to leave you with a question of specimen collection. Oops, sorry about that. Yeah, so just a question about specimen collection which was of interest to me, because this actually happened to me: I was one of the people who has had the coronavirus in this town.
And I actually ended
up getting two tests
on the day that I was not
feeling well because I was so sure I
had strep throat that
the first place I went to
didn't have a rapid strep test
and so I went to a second location.
And it turns out that at the first location they did a nasal swab that was in the lower nasal chamber, and the second place did the nasopharyngeal swab that goes right up to the back of the nose, where people say it feels like they're touching your brain. In my case, the nasal swab came back negative and the nasopharyngeal swab came back positive.
So it does matter
how the specimens collected
and it is an area of research.
And this isn't, this is
just an interesting example of
why we need to be so careful
about how specimens are collected.
Anyway, here is a table from this paper showing their results, where they were comparing the two swab types I just described for looking at SARS-CoV-2 detection. And they found that for the most part, the results were concordant and you get the same results, positive with positive and negative with negative.
However, they did show also
discordant results in 9% of
the samples that they looked at.
And that was my experience too.
So anyways, I just conclude
with that to let you know that
the way we collect specimens
is important and we're
going to keep that in mind.
Okay? And that is the end
of week one lectures.
So now you can go on to look
at the discussion board posts for this week.
That's the assignment, as well as the quiz.
And please feel free to
post any questions you have on
the discussion board as well for either
your classmates or myself to try and
answer. Okay. Thank you.
Lecture 5 Part 1
Welcome back to our second week of lectures,
where we're going to delve into
the analytical techniques and
instrumentation that are most commonly
used in clinical chemistry.
So starting with chapter nine,
we're going to look at optical techniques
and spectrophotometry.
So first of all,
we need to think about what light actually
is and how it interacts with samples,
and what kind of information we can get out
of how light interacts with samples.
So first of all,
let's remind ourselves of
our first-year physics.
What is light, actually? It's electromagnetic radiation, a type of radiant energy. Typically, the instruments that we're looking at in a clinical lab will be making use of the ultraviolet, visible, and infrared ranges of light, as shown here. The visible light, of course, has been expanded to show you from blue to red, basically 400 nanometers to 750 nanometers.
So we know light as
being characterized by a wavelength.
But we also have to remember
it's got this dual nature.
So it also can be
thought of as behaving as a particle.
We talk about the photon being a particle of light, having a certain energy given by E = h*nu, where nu is the frequency, and frequency of course is related to the wavelength by nu = c / lambda, where c is the speed of light and h in these equations is Planck's constant. The point being, we've got to make sure we keep in mind that we have two ways of thinking about light, either in terms of its wavelength or in terms of its energy per photon.
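As a quick numerical illustration of E = h*nu = h*c/lambda, the sketch below converts the ends of the visible range mentioned above into photon energies; the constants are standard SI values.

# Quick check of E = h*c/lambda for the visible range (about 400 nm to 750 nm).
h = 6.626e-34        # Planck's constant, J*s
c = 2.998e8          # speed of light, m/s

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron volts for a wavelength given in nanometers."""
    energy_joules = h * c / (wavelength_nm * 1e-9)
    return energy_joules / 1.602e-19   # convert joules to eV

for wl in (400, 750):
    print(f"{wl} nm -> {photon_energy_ev(wl):.2f} eV")
# 400 nm is about 3.1 eV; 750 nm is about 1.7 eV (shorter wavelength, higher energy)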
So now we're going to let light
interact with our sample and we can
look at what comes
out the other side in a number of ways, shown here and in your textbook. The first is the light that's emitted: if our incident light, I naught, stimulates some type of fluorescence, for example, then we can look at the light that's emitted by the fluorophores in the sample.
Secondly, we could be
looking at transmitted light,
how much light actually
passes through the sample.
And more commonly, we'll measure that in terms of absorbance. So if we have a certain amount of incident light and only a small fraction comes out, then we know our sample is absorbing quite a lot of that light.
We also look at scattering,
particularly if we have
larger particles in solution.
And, or we could
be looking at reflected light.
And we'll see some examples of
these in the lecture.
Sorry, this is always in the way; I just hit the wrong button. There we go. Okay, so measurement options, as I just pointed out, are basically absorption, emission, or scattering. And here, in table 9.1,
I show some of the optical methods
that correspond to each of those.
So if we're looking at absorption, we'd be talking about photometry, spectrophotometry, atomic absorption, Fourier transform infrared spectroscopy, and reflectance. For emission, the types of emission optical techniques that we'll be looking at include fluorometry and polarimetry; for fluorescence, for example, we can look at polarization of the light coming out, changes in polarization, or time-resolved measurements, as well as cytometry, where we're looking at emission of light from tagged cells, and luminometry. We're going to go through these so you'll learn a bit more about them. And again, for scattering, we look at nephelometry and turbidimetry, which are both types of scattering measurements we'll describe later, and cytometry again comes into play here.
So it should be noted that spectrophotometry
uses basically a prism or grating to select the wavelength that will be incident on the sample, whereas photometry is distinguished from that by the fact that the wavelength is not selected as precisely; rather, a range of wavelengths is selected using a filter.
Okay, so let's start with absorbance then, and understanding absorbance versus transmittance, or at least let's understand what the relationship is between them. So we have a certain amount of light incident on our sample, and there's going to be a certain amount that comes out the other side that is different from the incident light.
Some of it's going to be reflected and some is going to be scattered, of course, not only absorbed. So we typically have to use a reference in these instruments, where everything else is the same: the same cuvette in this case, the same background solution, and our solvent.
And then the only
difference being in this sample we
have the analyte
that
we're interested in probing.
Okay, so absorbance is given by this expression here: A = -log10(T), the negative log of the transmittance. In this case, since we're comparing it to the reference, you can show that A = -log10(T), where T equals the light coming out of the sample solution divided by the light coming out of the reference solution. And Beer's law, which you're probably also familiar with if you've done some chemistry in the past, says the absorbance can also be given as a measure of concentration.
Here in the text it's shown as A = a times b times c, where a is a constant, b is the path length of the cuvette, typically one centimeter, and c is the concentration of your analyte, what you're trying to probe.
But often we talk about epsilon as the absorptivity constant, or molar absorption coefficient, because that is what a becomes if the path length b is one centimeter and we're talking about concentration in moles per liter. Absorbance itself has no units, and that's important to keep in mind as well.
So the application of Beer's Law.
And in all of these techniques
we really want to be looking at
are analyte in the range where we
have a linear response
as we talked about last week.
So the linear relationship will exist for absorbance according to Beer's law, up to a certain concentration.
And it also requires the following criteria: that the incident light is monochromatic, so a single wavelength; that the solvent absorption is minimal,
so there's not a lot of
absorption going into the solvent.
It's only being absorbed by,
primarily by the analyte.
We also want to make sure that
the solute concentration is
within the limits of
being able to look
at this linear relationship.
And we need to make sure that there's
no other optical interference
in the solution,
as well as no chemical reaction occurring that would change, again, the absorption properties.
So we'll see there's a direct proportionality from Beer's law between absorbance and concentration, and this can be established experimentally for a given instrument and set of conditions, such that we can calculate a constant K for the analyte of interest. And here we just show figure 9.2 from the chapter, the absorbance versus concentration, showing the linear relationship.
Whereas transmittance follows the inverse log, of course. We should note that the error has been determined to actually be the least when transmittance is around 37%, so dilutions are performed in order to keep it in that range, which corresponds to an absorbance between about 0.1 and 0.7.
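To tie the last few slides together, here is a small sketch relating transmittance, absorbance, and concentration via Beer's law, including a check of the roughly 0.1 to 0.7 absorbance range just mentioned; the molar absorptivity used is hypothetical, just to make the arithmetic concrete.

# Sketch relating transmittance, absorbance, and concentration (Beer's law).
import math

def absorbance_from_transmittance(i_sample, i_reference):
    """A = -log10(T), with T = I_sample / I_reference."""
    return -math.log10(i_sample / i_reference)

def concentration(absorbance, epsilon, path_cm=1.0):
    """c = A / (epsilon * b), in mol/L when epsilon is in L mol^-1 cm^-1."""
    return absorbance / (epsilon * path_cm)

A = absorbance_from_transmittance(i_sample=37.0, i_reference=100.0)
print(f"A = {A:.2f}")   # about 0.43, near the low-error point (T around 37%)

if not (0.1 <= A <= 0.7):
    print("outside the preferred 0.1-0.7 absorbance range; consider diluting")

epsilon = 6.0e3         # hypothetical molar absorptivity, L mol^-1 cm^-1
print(f"c = {concentration(A, epsilon):.2e} mol/L")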
So what does the instrumentation look like? Well, here I'm showing Figures 9.3 and 9.4 from the text, showing a single-beam and a double-beam setup for absorption, where we're always going to have some type of light source, and we'll have some type of monochromator, and we'll have our cuvette where the sample will be, usually with some slits to control the bandwidth of the light, and a detector. And then some kind of readout device, written as a meter here, some sort of display. But anyway, that's what they're referring to.
So I'm going to go into
a little bit more detail on these.
So here we have the
first components being the light source.
So for absorption methods,
typically using an
incandescent lamp that could
be
tungsten, quartz halogen, or xenon.
So these incandescent lights or lamps give us
a broad spectrum of radiation which
is useful for absorption, where you want to select different wavelengths for different analytes.
There are also LED lights that are being
used where the p-n junction is,
is used to generate the light.
And these have the advantage of using
less power and having a longer lifetime.
They also provide a wide range
of wavelengths.
And then lasers, of course,
which have a very specific or,
or at a very specific wavelength and
are monochromatic and non divergent.
These can be useful when you want very narrow and high-powered light to stimulate your sample. But of course, they're not variable in wavelength, so it really depends on your application; the majority of absorption spectrophotometers are working with incandescent lamps and sometimes LEDs.
I guess I should just point out down here we have some examples of what the regular daylight spectrum looks like. And the halogen lamps, which can be tungsten or quartz halogen, do cover all this spectrum, but
they are shifted more to the infrared than to the ultraviolet, relative to natural daylight. Then we have LED lights, warm and cool, and this just shows fluorescent emission,
but of course a laser would be
just a single wavelength as well.
Okay, so the next component: spectral isolation. If we're using one of these incandescent lamps, then we need to be able to select the wavelengths that we would like to use for each absorber or fluorophore that we're exciting. This can be done using filters, which are typically thin pieces of colored glass, and this will provide us with a bandwidth of about 50 nanometers, where the bandwidth of course just refers to the range of wavelengths that are still included after going through the filter. So if you look at the peak of your spectrum, the bandwidth will be from the minimum to the maximum wavelength that comes through.
So filters have,
can be of the following types.
You can have a narrow band pass or
a cutoff filter or
an interference type filter.
And these can have bandwidths of five to 15 nanometers, so a little more specific than just the thin colored glass. Here are some colored glass filters shown over here.
Of course, prisms can also be used, where you have the refraction of light increasing with decreasing wavelength, and that allows you to split up the light and then pass it through your slit to get just the wavelength you're looking for. Or you could be using a diffraction grating, either with a reflective coating on the grooves or, as shown here, with the light passing through. But in any event, you again bend the light and then select the wavelengths that you'd like. These can be extremely accurate and have low scatter, and in fact most of the UV-vis and nearly all of the infrared spectrophotometers make use of diffraction gratings now. These have bandwidths of less than even 0.5 nanometers and up to about 20 nanometers, and then of course they're often combined with slits and lenses, as shown right here.
So now, down here, I define the spectral bandwidth: it is basically the width of the transmission curve at half the peak maximum, or the full width at half maximum (FWHM), as you may have heard it stated before. So as I said before, you're looking at the range of wavelengths of your spectral curve, but measured at half of the maximum height. Hopefully that makes sense, and if not, perhaps you will send me a question or look it up: full width at half maximum, spectral bandwidth.
And keep me posted. Okay, so some notes
about this selection of
wave length, length, and bandwidth.
First of all, the choice of your monochromator is going to be based on your analytical purpose.
What do you
need? Which light
range are you looking for?
And again, the narrower the bandwidth,
the more accuracy you can obtain.
But you're often going to be reducing the amount of incident light as well, so there will be a trade-off.
So you want to make sure that the spectral bandwidth that you're using does not exceed 10% of the natural bandwidth of the analyte that will be absorbing or interacting with the light. And we note that most of the analytes we're going to be looking at in clinical chemistry have a natural bandwidth that ranges from about 40 to 200 or more nanometers.
So the wavelength is normally chosen in
order to achieve your maximum absorbance.
Unless, of course, you have to take some interference into account, in which case you might choose to measure slightly off the peak in order to avoid exciting some interfering absorber that's in solution.
And there also be cases
where a certain wavelength
will have high amount of scattering.
So you also want to avoid that.
And you also want to avoid
measurements on the steep slope of
the curve in order to reduce
any error that you might be introducing.
Take, for example, NADH, which can be used in the enzymatic assays for looking at blood alcohol content, where the NADH molecule is able to absorb light in the UV part of the spectrum, at 340 nanometers. So when you do the enzymatic assay, it will actually start with the oxidized form of NADH, which is just NAD+.
And that will be reduced during the enzymatic reaction with the ethanol in the blood, going from NAD+, which does not absorb light in that range, to NADH, which does. So you can see that the enzymatic reaction is taking place, and you can quantify how much alcohol would have been in the
blood by the amount
of light that's absorbed.
So because NADH has a natural bandwidth of 58 nanometers at the wavelength that it absorbs, you need to make sure that the bandwidth of your incident light is less than 10% of that. That means your bandwidth should be less than about 6 nanometers in order to accurately look at the concentration of NADH in this situation.
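A tiny sketch of that 10% rule of thumb, using the 58-nanometer natural bandwidth from the NADH example; the candidate instrument bandwidths are hypothetical.

# Check of the rule of thumb above: the spectral bandwidth of the incident
# light should not exceed 10% of the analyte's natural bandwidth.
natural_bandwidth_nm = 58                  # NADH natural bandwidth near 340 nm
max_allowed = 0.10 * natural_bandwidth_nm

for spectral_bandwidth_nm in (2, 5, 10, 50):
    ok = spectral_bandwidth_nm <= max_allowed
    print(f"{spectral_bandwidth_nm:>4} nm bandwidth: "
          f"{'acceptable' if ok else 'too wide'} (limit {max_allowed:.1f} nm)")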
Okay, so some of
the other components of the spectrophotometers
that you can also see in
Figures 9.3 and 9.4 include a photodetector.
Now, what we have here
typically is something that will
convert the photons of light
to electrons, because electrons
are easier for us to count and
use in a meaningful way.
So typically it involves
a photomultiplier tube, PMT.
This is quite rapid and sensitive,
and that's typically what you find
in a UV spectrophotometer.
Photodiodes can also be
used, or a photodiode array, where you
actually can get down
to one to two nanometers of resolution.
Photodiodes work well for roughly
the 250 to 1,100 nanometer range,
which covers the visible and near infrared,
and photodiode arrays are good for having
really high resolution
across different wavelengths.
Now, once you've got your detector,
you also need a way to read that out
to the people who are using the instrument.
You can have some digital readout and
software, and most often
processors that will convert
what's being detected into
useful information. These can also allow you to
store your blank or
your reference information,
any calibration curves that you've done,
and previously collected data.
So a readout system will
also allow you to convert the digital input into
some kind of concentration or
enzyme activity, as is
relevant to what you're investigating.
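Just to give a feel for what that readout software is doing, here is a small hypothetical sketch in Python of converting a blank-corrected reading into a concentration using a stored linear calibration curve; all of the calibrator values are made up for illustration:

# Hypothetical stored calibration: signal versus concentration of calibrators
calibrator_conc = [0.0, 1.0, 2.0, 4.0]        # mmol/L (made-up values)
calibrator_signal = [0.00, 0.21, 0.40, 0.82]  # blank-corrected absorbance

# Least-squares fit of a straight line through the calibrators
n = len(calibrator_conc)
mean_x = sum(calibrator_conc) / n
mean_y = sum(calibrator_signal) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(calibrator_conc, calibrator_signal)) \
        / sum((x - mean_x) ** 2 for x in calibrator_conc)
intercept = mean_y - slope * mean_x

# Convert an unknown sample's reading into a concentration
unknown_signal = 0.33
unknown_conc = (unknown_signal - intercept) / slope
print(f"Estimated concentration: {unknown_conc:.2f} mmol/L")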
Of course, you need a cuvette.
I've always known it with the spelling
I think is the French spelling,
but I did grow up and study
chemistry in Canada;
you'll often see it spelled
this other way as well, cuvette.
Anyway, it's some kind of
holder for your sample,
typically made of quartz or
glass, and sometimes plastic,
because then it's disposable
and you don't have
to sterilize it between uses.
So fiber optics are not always used,
but they do allow for the light to be
contained so that it can travel in
a way that doesn't
have as much light being lost,
and that will allow you to
miniaturize your instrument.
But the disadvantages are
that you might still have
some stray light being
emitted from a fiber optic,
and because of the refractive index changes,
you can see a loss of energy over time,
which is known as solarization.
Some notes, just a last couple of notes,
about the analysis performance.
Absorbance of unknown compounds has to
be compared with some kind of
calibrator or a series of calibrators.
And to do this,
NIST and other governing and
regulatory bodies will provide
standard reference materials, known as SRMs,
for calibration and verification of performance.
And this will come up
as we talk about other methods as well,
because having
some standard reference material
that all the labs can be using to calibrate
is really important to
ensuring that no matter where
you have your testing done, you're going
to get an accurate result.
Okay, so here I'm showing you
an example of spectrophotometry where we
are measuring hemoglobin in blood.
This is usually
done using near-infrared light,
and there is a diffraction grating to
split the light into about 120 wavelengths,
and then a blood gas analyzer that
will liberate the hemoglobin
from the red blood cells by
basically shredding them with
a high-frequency ultrasonic beam.
Okay, so here you can see these aspects.
Here we've got our infrared light source.
We have a lens to focus it, I suppose,
right? And then that's
going to be incident
on our sample.
There's a fiber-optic cable,
which we talked about on the last slide.
And then it's going to hit
the grating, which splits the light
and gives us a spectrum.
Now, before the sample gets to
the light beam here, we're
injecting our blood sample,
and it gets blasted here by
the ultrasonic source.
And then the hemoglobin
will travel down
here.
So what we see here is an absorption curve as
a function of wavelength
for the hemoglobin in the sample.
And you see it's not
a simple single absorption peak.
And that's because there's different types
of hemoglobin in our blood.
So if you go to this site here,
it'll talk about it a little more.
Basically,
this is showing you
the different spectra
for the different types of
hemoglobin that you'll find in
your blood,
such as oxyhemoglobin, deoxyhemoglobin,
carboxyhemoglobin, and methemoglobin.
And if you add these spectra up,
you're going to get this type of signal here.
Now this is often used
to look at hemoglobin in blood.
And I'm just going to show you
this video here, if it works,
so you have an idea of what it looks like
actually in a lab setting and
not just rely on a slide.
Blood gas results are important for
any modern hospital or
clinic that wants to improve
the quality of their overall patient care.
Speed and accuracy are paramount
in any critical care setting.
So the easier and simpler point-of-care
testing can be, the better.
Imagine if in just 70 seconds
you could support your diagnosis by
performing a complete blood
gas analysis and receive
reliable results using
only 17 microliters of blood.
Imagine that. That's what
Radiometer's renowned engineers in
the US and Europe did.
Using proven thick-film sensor technology,
they re-imagined the blood gas analyzer:
how it looks, how it
works, how you operate it.
Every innovation was driven by
the challenging ambition
to
combine chemistry,
electronics, and mechanics in a clever and simple
analyzer that will work in
the hands of all health care professionals.
What they have succeeded in creating
is
an easy-to-use,
reliable and robust,
cost-efficient blood gas analyzer.
Blood gas analysis is
a sophisticated science,
and there's no room
for compromises when
critical diagnostic decisions are required.
Every feature must be carefully tested and
modified to fulfill its full potential.
The ABL9 is the product
of dedicated and tireless engineering:
a clever plug-and-play
solution designed to improve
patient care by simplifying
the daily operations
of health care professionals.
The ABL9 analyzer.
Clever and simple.
Okay, so I just like to
show you every once in a while what
these instruments look like in the lab,
so you have a better idea
of how they're going to be used
and how you might use
them in your future career.
So let's move on to just a couple
of additional types of
spectrophotometer setups that are being used
before we move on
to the second half of
this chapter, which I will do
in a second presentation
just so this doesn't go on for too long.
Okay, so we have reflectance as
another alternative that can be used.
And here we show
this might actually be done on,
for example, dried blood spots
or other dried films,
but dried blood spots are
often used for analysis.
So we have our halogen light source
here, and it becomes incident on the blood spot.
And then the reflected light
is going to be measured
as a function of each wavelength.
The intensity of the reflected, diffused light
is what we're measuring,
and it's going to be
non-linear with concentration.
Okay, so here we're looking at
the hematocrit, and what they
did in this paper, where
you can read more about it here,
is they looked at the reflectance spectrum
that they got and
quantified how that matched
the hematocrit when measured from a
dried blood spot, and compared
that to the true
hematocrit measured by
the current standard of care.
And what I like about this paper,
if you do choose to look at it,
is that it just gives you an idea of how some of
the concepts we've already talked about
are actually being used here.
So they checked the calibration,
and they talk about transforming
this nonlinear relationship into a
linear calibration curve by
doing a log-log transformation.
And they also talk about using
a Bland-Altman plot to compare the data,
which we have also talked about before.
So it just gives
you an example of how reflectance is used
and how all the concepts
we've been talking about are also
being used every day in
these types of studies that are trying
to come up with
better and less invasive ways of measuring,
for example, the hematocrit as shown here.
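As a rough sketch of what that log-log linearization looks like in practice (the numbers here are invented for illustration, not taken from the paper), you take the logarithm of both the reflectance signal and the reference hematocrit, fit a straight line, and then back-transform:

import math

reflectance = [0.80, 0.62, 0.50, 0.41]   # made-up diffuse reflectance readings
hematocrit = [20.0, 30.0, 40.0, 50.0]    # made-up reference hematocrit (%)

# Log-log transformation turns the nonlinear relationship into a straight line
log_r = [math.log10(r) for r in reflectance]
log_h = [math.log10(h) for h in hematocrit]

n = len(log_r)
mean_x, mean_y = sum(log_r) / n, sum(log_h) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(log_r, log_h)) \
        / sum((x - mean_x) ** 2 for x in log_r)
intercept = mean_y - slope * mean_x

# Predict hematocrit for a new reflectance reading, then back-transform
new_reflectance = 0.55
predicted_hct = 10 ** (intercept + slope * math.log10(new_reflectance))
print(f"Predicted hematocrit: {predicted_hct:.1f}%")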
And the final type of
absorption spectrophotometer that I'm
going to mention is atomic absorption,
which is often used for elemental analysis.
In clinical chemistry, we want to
look at how much calcium or copper,
lead, magnesium, or zinc is in your sample.
Again, here we're going to choose a lamp
that's made of the same metal
as the substance being analysed,
and the metal that
we're looking at will absorb light
from the flame at
a specific and narrow bandwidth.
So we're going to measure,
again, absorption here.
And there's also
an alternative way where
you don't have a flame.
You
have what's called a flameless type of
atomic absorption setup where
you use a heated
rod.
But all right, I'd just say
the takeaway is that this is
another method that we use where
you can get very specific information.
And the setup shown over here shows you,
again, we have the same types of
components: we have our lamp,
we have our sample
down here that's going to get
flowed into the flame and will
cause an absorption of the light.
So the light will
be being transmitted and then
will not be transmitted
when the sample comes into play.
We still are going to have some monochromator
and a detector and a readout system.
What to watch out for with
atomic absorption is that if there
are really closely absorbing species, or
perhaps molecular species, in your solution,
there might be scattering or
background emission as well.
And there can be some non-spectral
types of interference,
including the viscosity of
the solution that you're using,
the surface tension, and the density.
And if there's any type of
interference with the
solute volatilization,
so when you're trying to
have your sample go up into the flame,
if it's complexed with
something that prevents that from
happening,
that will also be an issue.
So here you can see an example of
the concentrations of metals
in a number of patients
as a function of their age.
Okay, so this study was looking at
heavy metal exposure in
populations in Africa
by doing atomic absorption
on urine. And you'll note this
is a paper from last year.
So this is
one
of the really interesting parts
about clinical chemistry is that they're
constantly coming out with
new ways and better ways to
analyze what's going on in
a population of people.
And here they also were trying to see whether
your age affects
the types of metals
that you've been exposed to.
So you can definitely see that
higher-age people in
this study in Africa have much,
much higher levels of copper in
their urine than younger populations.
So this is just the beginning of their study.
Of course, more will come out from there.
Okay, so I'm just going to take a break
right there, and then I will
start a new presentation to
go into the second part
of this chapter on fluorometry.
Be right
back. Meet me there.
Lecture 5 Part 2
Okay, so here we are,
continuing on with Chapter nine;
we'll call this part two.
We're going to be discussing fluorometry.
So when a molecule absorbs light,
it is excited out of the ground state
and into an excited state, as
shown by this diagram over here,
which is Figure 9.6 in your text.
In fluorometric analysis,
we'll then look at the
light that's emitted when
this molecule relaxes
back down to the ground state.
So there will be
some relaxation through
the different vibrational levels
of the excited state,
and then the molecule can relax
either through quenching, where
there's no light emitted, or
through fluorescence.
There's also the possibility for
a crossover to a triplet state when it's here,
and from there it can again be
quenched with no radiation
coming off, or emit phosphorescence,
as shown here. The triplet state refers
to the state of the spin of the electron.
And what we do know is that the wavelength of
the emitted light is
longer than the wavelength of the excitation.
You remember that E is equal
to hc over lambda,
so higher energy means
shorter wavelength, right?
So the fluorescence will have
a longer wavelength and
slightly less energy, because
some of the energy has been lost in other ways.
That shift in wavelength
is known as the Stokes shift.
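To make the energy-wavelength relationship concrete, here's a tiny Python sketch using E = hc/lambda; the 340-nanometer excitation and 450-nanometer emission values are just illustrative numbers showing that the Stokes-shifted emission carries less energy per photon:

h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9)

print(photon_energy_joules(340.0))  # excitation photon, about 5.8e-19 J
print(photon_energy_joules(450.0))  # emitted photon, about 4.4e-19 J (longer wavelength, less energy)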
And another thing that's important to note about
fluorescence is that there is some time
associated with this process,
and that's on the order of ten to
the minus eight to ten
to the minus seven seconds.
So this is definitely a discernible amount
of time when thinking about
spectroscopy of various types,
and it can actually be used
to do some time-resolved
fluorescence measurements.
These types of measurements
give us the advantage of
eliminating background light,
and they can increase the signal to noise,
so giving a better sensitivity.
So fluorescence, like absorbance, has
a linear relationship with
the concentration.
Now, F being the fluorescence intensity,
we have again some kind of constant,
in this case phi, which here is a ratio of
the quanta of light
emitted versus how
many had been absorbed.
We have the initial excitation intensity,
much like absorption.
And then we have,
well actually, no, not like
absorption, as that's one of the differences,
sorry. But we do have here,
and you'll recognize this from Beer's law,
the molar absorptivity,
the path length
or volume element as it's written here,
and then the concentration
in moles per liter.
So it will only really give us
a linear relationship for
dilute solutions where the absorbance
is less than 2%.
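As a hedged sketch of how that relationship is commonly written, F = phi * I0 * (1 - 10**(-epsilon*b*c)), which for dilute solutions reduces to roughly 2.303 * phi * I0 * epsilon * b * c, here it is as a small Python function with invented example values:

def fluorescence_intensity(phi, i0, molar_absorptivity, path_cm, conc_mol_per_l):
    # phi: quanta emitted per quanta absorbed; i0: excitation intensity
    absorbance = molar_absorptivity * path_cm * conc_mol_per_l
    return phi * i0 * (1.0 - 10 ** (-absorbance))

# Invented values, only to show the near-linear behaviour at low absorbance
print(fluorescence_intensity(phi=0.5, i0=1.0,
                             molar_absorptivity=6.0e3, path_cm=1.0,
                             conc_mol_per_l=1.0e-6))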
And this is a result
of the inner filter effect,
which we'll talk about a little
bit more on another slide.
Another thing to note is that fluorescence
is much more sensitive
than absorbance
because of
this factor of I naught.
And here's what I mean:
if you increase
the intensity of the incident
light in absorption,
it doesn't increase the amount
of absorption that's seen.
But if you increase
the
incident light for fluorescence,
you do get an increase
in the fluorescence signal.
So that allows this technique
to be much more sensitive, of course,
and it's going to be
measured in some kind of
relative intensity units.
So polarization is also
something that can be used in fluorescence.
Polarization just refers to
the fact that light is
composed of
both electric and magnetic waves
that are at right angles to each other,
which means that it can also affect
how this light is absorbed,
whether it's in plane or out of plane with
the electronic energy levels.
Okay, so fluorescence itself can be polarized.
And this equation here just shows
you how polarization is
related to the different components
of the fluorescence intensity.
So we're going to say I sub V
is the vertical fluorescence intensity
and I sub H is the horizontal,
and we take the difference of
them divided by the sum.
And this polarization measurement
can be used to quantify
analytes
by changes in
the polarization following,
for example, immunologic reactions.
And we're going to talk more
about that in Chapter 15.
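Here's the same polarization formula written out as a tiny Python helper, just to make it concrete; the intensity values are made up:

def fluorescence_polarization(i_vertical, i_horizontal):
    # P = (Iv - Ih) / (Iv + Ih)
    return (i_vertical - i_horizontal) / (i_vertical + i_horizontal)

print(fluorescence_polarization(i_vertical=800.0, i_horizontal=500.0))  # about 0.23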
Here is a diagram of
a typical spectrofluorometer.
This is also in your textbook, Figure 9.8,
and it's similar to absorption.
We're going to have a lot of the
same components: some kind
of excitation source here,
which is written as Xs; some
kind of monochromator
to filter the light, both
the excitation light and
the fluorescence that comes
off, before the detector.
In this case, we also have
amplifiers shown on the detector,
which is shown here,
and a recording display,
some kind of readout device.
So there are slightly different
labels on all of these components,
but they're essentially the same.
And I'll just point out that part of
the reason for these types
of differences from chapter to chapter
is that each chapter is written by a different group
of authors in the text.
Okay, so detection is typically done at 90 degrees,
where you have right-angle detection in order
to minimize the background signal.
Or it can also be done with
this front-surface method
as shown in Figure 9.9 in your text.
I don't show it here, but basically it's
just reflecting off of
the front face of
the cuvette or wherever
the sample is being held.
And the reason for using the
front-surface orientation is that
you get a longer range of linear response.
Okay, major concerns in
fluorescence measurements are scattering,
again the inner filter effect,
and sample volume.
So in order to ensure that
measurements being taken
from lab to lab
are in accordance with each other,
there are performance verification protocols
that are established by
various regulating bodies, and NIST provides,
again, standard reference materials
in order to do the calibration
and verification, much as
we discussed for absorbance.
For example, listed here is
one of the reference materials,
SRM 936a, which is quinine sulfate dihydrate,
and there's SRM 1932, which is
fluorescein, for reference scales.
This is just not
something that you need to remember,
but just know that there
is going to be a list of
these standard reference materials
that are indexed, and you
can look them up for whatever
it is that you need your reference
standard for. Okay. And as I said,
fluorescence allows you to
measure things with a lot more sensitivity,
and it's often combined with
enzymatic methods or
immunoassays.
And we'll see some examples of these as
we get further into the course.
Okay, a few other types of
fluorometer. Okay, sorry about that.
I just realized it keeps misspelling
fluorometer like that.
Okay. So some other types of
fluorometers include ratio referencing,
where a ratio of
sample to reference is
measured simultaneously,
and this is at a discrete excitation wavelength
and emission wavelength rather than a spectrum.
This can be used for determining
concentrations at a defined
wavelength, and the advantage of this
is in a single-beam format as shown in,
let's see, is it shown here? Actually, no.
Oh yeah, Figure 9.11
shows a ratio-referencing
spectrofluorometer.
Some of the advantages are that it eliminates
short- and long-term
xenon lamp energy fluctuations,
and the excitation spectra are also
corrected for wavelength-dependent
energy fluctuations.
And as I mentioned already,
time-resolved fluorometry
can take advantage of
the decay time associated with fluorescence.
And here the light source
is pulsed and we
measure this exponential decay.
And it's even better if you can use
longer-lived fluorophores like europium,
which has a much longer decay time
than the nanoseconds typical
of ordinary fluorescence.
So the detection limit here can
be down to ten to
the minus 13 moles per liter,
which is essentially the sub-picomolar range,
and that's four orders of magnitude more
sensitive compared to standard fluorometry.
And europium-labeled
nanoparticles, for example,
have been used for the PSA test
for prostate cancer antigen,
down to 0.5 nanogram per liter sensitivity.
The flow cytometer,
and we're going to talk about
it again in a few minutes,
is another type of
instrument
that's used with fluorescence.
So a cytometer is
measuring cells passing through
the light source, and
there are also fluorometers with
single-channel
front-surface designs dedicated to,
for example, zinc
protoporphyrin in whole blood.
Okay, so here's a diagram
of the flow cytometer.
This one is not in your book,
but I just wanted to show you
an example of what this looks like.
So basically, you have
a sample, and then it
passes through a narrow opening
where the cells can be
basically probed one at a
time.
So you have a laser light source,
and then you have a detector
for measuring fluorescence.
And you might also have
another second detector that allows
you to look at scattering as well.
So you can look at size and then
shape and numbers in this way.
So the cells are
flowing through your instrument.
You can also make
use of this to sort them based
on any number of qualities.
So for example, if they're
labeled with different fluorophores
for different aspects,
you could have something
targeting one cell type versus another.
They'll show up in different colors
and you can sort them that way. So you
can measure the size, the shape,
the granularity,
the DNA and
RNA content, nuclear-to-cytoplasmic ratios,
chromatin structure, total protein content,
the types of cell receptors,
calcium, and many more things.
This is a very effective instrument
for studying the types of cells in the sample.
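Just to illustrate the kind of sorting logic a flow cytometer applies, here is a very simplified sketch in Python that classifies hypothetical events by scatter and fluorescence thresholds; the numbers, cutoffs, and labels are invented and not from any real instrument:

# Each event is (forward_scatter, fluorescence) from one cell passing the laser
events = [(120, 15), (480, 900), (510, 20), (130, 850)]

def classify(forward_scatter, fluorescence, scatter_cutoff=300, fluor_cutoff=100):
    size = "large" if forward_scatter > scatter_cutoff else "small"
    label = "labeled" if fluorescence > fluor_cutoff else "unlabeled"
    return f"{size}, {label}"

for fs, fl in events:
    print(classify(fs, fl))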
Ok, so some limitations
of fluorescence include,
as I've mentioned a couple times,
this inner filter effect.
So think about
your cuvette or your sample holder:
if the incident light is
traveling deep into a concentrated solution,
then a lot of the fluorescence
that is coming out to be
detected will also be scattered by things
in solution and possibly reabsorbed.
And so this is why
the measurements are only linear
and quantifiable when the
absorbance is quite low,
so when the concentration of
analyte in solution is quite low.
And as I said before,
that's absorbing less than 2%.
You can, of course, have the loss of
excitation light as well
as the loss of the fluorescence being detected.
Concentration quenching can also
occur: if the fluorophores
are in close proximity,
then sometimes they will
drop back down to
the ground state without emitting light.
And this is especially true in
the flow cytometry setup I just showed you.
And with light scattering, we need to be careful.
So Rayleigh scattering, where
there's no change in wavelength,
especially for small structures, and
Raman scattering, which happens at
longer wavelengths due to the solvent,
are both things you have to keep,
whoops, keep an eye out for.
Again, cuvette
material
and solvent effects can play a role.
So certain cuvettes
contain ultraviolet absorbers,
which might also fluoresce,
and so you need to be sure to pick the right
cuvette for the experiment that you're doing.
Similarly for solvents.
And there's a sample matrix effect
that you need to be aware of, where
there are other interfering fluorophores that
may be just naturally
occurring in your biological sample,
including proteins and bilirubin
in your urine and serum, for example.
But you can overcome these by, let's say,
looking at wavelengths
longer than 300 nanometers so you don't
excite the proteins.
And scattering by proteins, and
especially lipid vesicles
in your solution, will
produce a lot of light scattering.
Fluorophores may also adsorb to
the walls of the cuvette, and
photodecomposition will
occur for a fluorophore if
it's being constantly excited.
So some of
these can be prevented just
by proper vessel selection
and adding wetting agents so they
don't stick to the walls, and by minimizing exposure.
Temperature effects
also need to be considered.
So the quantum efficiency
depends on temperature,
and fluorescence decreases
with increasing temperature
due to collisions and quenching,
so there's a temperature effect
you need to control for.
And photodecomposition,
as I just mentioned, is a problem.
So we want to use the longest wavelength
possible that still gives
a good signal, since that's lower energy,
the shortest duration of exposure,
store samples in the dark,
and remove any dissolved oxygen.
And note that lasers, of course,
are always going to
be a trade-off between sensitivity and
photodecomposition, because they're
high energy and can really
cause damage to the fluorophore.
Okay. And then I think our last,
well, almost our last
topic in this chapter is
luminometry, where we're looking at
other ways of light
being emitted from the molecules of interest.
So we talked about
fluorescence from the molecule itself
when it's excited by a light source,
but we can also have chemiluminescence,
where the molecule may emit
light as a result of a chemical reaction.
So for example, this happens often
with some types of redox reactions.
And I think I mentioned on an earlier
slide the NAD+ being reduced to NADH,
which switched it from non-absorbing to absorbing.
So
now we have scenarios where a molecule
might go from not emitting light to emitting light.
Enzymes are quite commonly used here:
we have alkaline phosphatase,
ALP, or horseradish peroxidase, HRP.
These
are often being used for this, as are
other metal ions or metal complexes
of copper and iron.
Chemiluminescence can be
really very sensitive, ultrasensitive
even, where we're going down to
the attomole or
even zeptomole range, where
we're really looking at
light
emitted from only about 600 molecules.
So this is widely used in
automated immunoassays and DNA probe assays.
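As a quick check on that 600-molecule figure, here's the arithmetic in Python using Avogadro's number; one zeptomole is 10 to the minus 21 moles:

avogadro = 6.022e23          # molecules per mole
zeptomole = 1.0e-21          # moles
print(zeptomole * avogadro)  # about 602 molecules, matching the ~600 quoted above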
And here's a little schematic, not in the textbook,
but it basically shows,
let's say we have an antibody assay
where we have antigen on the surface,
and we are trying to see if there
are antibodies binding to that
through a very specific interaction.
And then we're going to
add a secondary antibody
to measure what's going on here.
In this case, sometimes it'll be direct,
but other times a secondary antibody
is added that has an enzyme conjugate.
And then if we put a substrate into
the solution that gets
acted upon by the enzyme,
changing from substrate to product,
it will emit light in the process.
So chemiluminescence-linked immunoassays
are used for many different assays.
And you'll note here that
they are also being used for
SARS-CoV-2 antibody tests,
and these are not the only ones.
Other examples
of some of the tests we've been
looking at are shown too.
Actually, I can click on this and
bring up this website,
which is really
great because it shows all the tests
that have been approved, and it's
updated every couple of
weeks through Johns Hopkins.
And it'll show you what
each test is
measuring and what technique it's using.
So for example, let's find. Here we go.
Here's a chemiluminescent
immunoassay coming
out of an American company, and,
oh yes, its
sensitivity was found to be
about 87% and specificity around 100%.
And here Abbott Laboratories
is probably a company
you know even more familiarly.
And in fact, I know they're now using
Abbott antibody tests at the UCSD labs.
And here they show a
sensitivity of up
to 100% after 14 days,
but obviously much lower at
earlier days, and a specificity of 99.6%.
Okay, so you guys can check that out too,
if you want to, by following that link.
And there's a whole bunch listed there.
Okay, so now
we've talked about chemiluminescence.
Here's a slide on electrochemiluminescence,
where there's an electrical stimulation,
or really an electrochemical stimulation,
where a precursor molecule is
stimulated to then excite
the molecule of interest.
And when the molecule emits
light upon relaxation to
the ground state, that can be detected.
So a ruthenium tris(bipyridyl) chelate is
often used as a label for
this type of interaction,
and it will react with
tripropylamine in a type of
oxidation reaction similar to
the NADH one I mentioned earlier.
And this will usually take
place at an electrode.
So advantages of this
technique are
that there's
a high stability for the reagent,
simple preparation, and enhanced
sensitivity,
and it has a really wide,
broad range of linear response,
over six orders of magnitude.
Here is a little schematic.
This, again, is not shown in the textbook,
but this is a vitamin B12 assay.
We won't go into more detail, but
you can check out the journal article
here, where they're
showing the electrical signal
as a function of concentration.
Here we have the vitamin B12 being
introduced and then tagged
with a streptavidin-biotin pair.
And then they'll introduce
intrinsic factor labeled with
the electrochemiluminescent label,
and there is a competitive reaction,
and they bring this into
contact with an electrode that will
stimulate the change in
the label, as is shown blown up here.
So you get your redox reaction, and
then this can be
detected as light coming off.
So there you go, luminometry.
Okay, now for
the last examples of optical techniques
from chapter nine, we're going
to look at light scattering,
which is broken down into
turbidimetry and nephelometry.
So, you know, when
light collides with a particle,
it can be scattered.
And if it is scattered,
the scattering
will be
dependent on the wavelength of
the light, being inversely
proportional to the fourth power of the wavelength,
and also on the distance r,
with the intensity being
inversely proportional
to the square of the distance
the scattered light has to travel,
so how far away the detector is.
And this wavelength dependence means
that blue light will scatter more than red.
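Here's a quick sketch of that wavelength dependence in Python, comparing blue and red light through the one-over-lambda-to-the-fourth relationship; the specific wavelengths are just illustrative:

blue_nm, red_nm = 450.0, 650.0
# Rayleigh scattering intensity scales as 1 / wavelength**4
relative_scatter = (red_nm / blue_nm) ** 4
print(f"Blue light scatters about {relative_scatter:.1f} times more than red")  # ~4.4x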
So this also affects where we
measure absorbance, because we want to
avoid having scattering as
an error
introduced into absorption measurements.
Okay, but of more interest
is how it relates to concentration.
For small particles,
we find the scattering is proportional to
the concentration; for larger particles,
we're going to be looking more at how
the scattered light
varies with size and shape.
So because of this one over r squared,
we know the detector must
be close to the sample.
And then turbidimetry:
we're looking at scattering in
a way that's similar to absorption.
So we're going to look at light incident on
the sample and see
what's coming out the other side.
So we're basically measuring the decrease in
the intensity of the light
coming out compared to what went in.
So that's shown here as I equals I naught
times e to the power of minus b t,
where again, b is the path length
and t is
really the measure of the turbidity.
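As a small sketch of that relationship, here it is in Python, rearranged to solve for the turbidity from the transmitted intensity; the numbers are invented:

import math

def turbidity(i_transmitted, i_incident, path_length_cm):
    # I = I0 * exp(-b * t)  =>  t = -ln(I / I0) / b
    return -math.log(i_transmitted / i_incident) / path_length_cm

print(turbidity(i_transmitted=0.85, i_incident=1.0, path_length_cm=1.0))  # about 0.16 per cm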
So a couple of things
we have to look out for
here are antigen excess and matrix effects.
Let me come back to those in a second,
but let me just say that these are
also an issue for nephelometry.
How nephelometry differs from
turbidimetry is that you're basically
looking at the scattering coming off at
an angle rather than
what's passing through the sample.
Here it
shows setting up a detector either at
some angle like 30 degrees
or at right angles, 90 degrees.
So these can actually be
combined with fluorometers,
of course, to measure
both scattering and fluorescence.
But the main difference is
that for nephelometry,
you're going to be measuring at
the wavelength that is the
same as the excitation wavelength,
whereas in fluorescence you're
measuring something that's Stokes-shifted.
So in both of these scattering techniques,
we are often using them to get a measure
of the size and
concentration of particles in solution.
And the components are
essentially the same as we've seen in the
other optical techniques,
where you have your incident light source,
some kind of filter, your sample,
more filters,
and a detector set up somewhere
along one of these angles.
And so
again,
being careful with nephelometry,
we have the same thing,
antigen excess, which we're
going to talk about more in Chapter 15.
But basically, at a certain point, you know,
turbidity will increase with the addition
of antigen to antibodies,
but then after a maximum is reached,
it starts to decrease.
So you have to make sure that you're
not in that range when you're
measuring your sample.
And then matrix effects:
basically, there are
all kinds of things in the matrix that might
also be causing scattering,
other particles or the
solvent, and it's especially a problem with
a lipemic serum sample,
where there are lipoproteins
or just lipids in general.
So you try to minimize
any such lipemic interference.
Ok, so that's the end of chapter nine.
And we're going to
move on to chapter ten in the next lecture.
I will see you or talk to you there.