Topic 3: Validity {by 9/10}

Based on the text readings and lecture recording due this week, consider the following two discussion points: (1) Discuss your understanding of criterion-related validity (also known as: Prediction or Instrument-Criterion).  In your discussion, include why this particular type of validity is common/important for mental health assessments.  (2) First, discuss the difference between convergent evidence validity and discriminant evidence validity.  Second, provide an example of a hypothetical or real assessment where these two types of validity would be important – very common for many mental health assessments (hint: listen to my example in the lecture recording).

 

Your original post should be posted by 9/10.  Post your two replies no later than 9/12.  *Please remember to click the “reply” button when posting a reply.  This makes it easier for the reader to follow the blog postings.

  1. Carly Moris
    Sep 06, 2020 @ 12:32:57

    Criterion-related validity has to do with the degree to which an instrument predicts a criterion of interest. This is important for instruments that were developed to predict or identify something. There are two types of criterion-related validity: concurrent validity and predictive validity. The difference between the two is the amount of time between when the instrument is given and when the criterion info is gathered. In concurrent validity there is no period of time between when the instrument is given and when the criterion info is collected. This type of validity predicts behavior based on the current context and is used to make immediate predictions. In predictive validity there is a period of time between when the instrument is given and when the criterion info is gathered. This period of time depends on what the instrument is designed to predict. For the criterion itself, you need to look at what the criterion is and what its psychometric qualities are. You want to make sure the criterion being used is actually the variable of interest, is reliable, is free from bias, and is immune to criterion contamination. If you don’t have a good criterion you won’t be able to accurately determine validity.
    Criterion-related validity is important for mental health assessments because it tells you how strong the predictive power of the assessment is. In mental health we want to be able to make predictions about a client’s behavior. A diagnosis is a type of prediction that involves concurrent validity because it is made based on the current context. Criterion validity is important for predictive assessments we may give our clients because we want to know the assessment is actually predicting the variable we want it to predict. This matters for things like suicide assessment: we want to know that the assessment actually predicts if a patient is likely to commit suicide. If the instrument doesn’t have high validity, it could flag people who aren’t at risk and miss people who are. This would be very bad. Criterion-related validity is essential for predictive instruments. We cannot make informed decisions about treatment if our assessments aren’t valid. Criterion validity is also important for counselors in a more informal sense. As counselors we often make predictions about a client from sessions, so it is important that we look at what informal assessments we used to make those predictions. We can use the concepts in criterion validity to examine our thought process and make sure we are making an accurate prediction for the client.
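
    The predictive-validity check described above can be sketched in a few lines of Python: administer the instrument, wait, gather the criterion, then correlate the two sets of scores. The resulting Pearson correlation is the validity coefficient. All numbers here are invented for illustration.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical validation study: scores when the instrument is given,
# and criterion info gathered after a period of time (predictive validity).
instrument_scores = [1, 2, 3, 4, 5]
criterion_scores = [1, 2, 3, 5, 4]

validity_coefficient = pearson_r(instrument_scores, criterion_scores)
print(validity_coefficient)  # 0.9
```

    For concurrent validity the same correlation is computed; the only difference is that the criterion info is collected at roughly the same time the instrument is given.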

    Convergent evidence and discriminant evidence matter for validity evidence based on relations to other variables. For this we want to analyze the instrument’s relationship to other variables. To do this we use the correlational method, which examines the relationship between two variables. This tells us the degree to which an instrument is related to other variables. First you want to select a group to use in the validation study. Then you administer the instrument and gather criterion information, which would include information about the variables you want to compare to the instrument. Then you correlate performance on the instrument with the criterion info; the result of this correlation is called the validity coefficient. This is where convergent evidence and discriminant evidence come in. Convergent evidence is when an instrument is positively related to other variables consistent with its use. We want the instrument to have a relationship with similar variables. Discriminant evidence is when the instrument is uncorrelated, or has a low correlation, with variables it should differ from. We don’t want the instrument to have a relationship with variables that aren’t part of what the instrument is testing.
    For example, if we were creating a new depression inventory we would want it to relate to other well-established instruments that test for depression. So we would give our new instrument and a well-established instrument to a group of people, and then correlate their scores from both assessments. This would be an example of convergent evidence because we would want our new instrument to have a strong relationship with the well-established instrument. If there wasn’t a strong relationship, this would tell us there is a problem with our instrument and that it probably isn’t a good way to test for depression. We would also want to make sure our instrument doesn’t relate to factors that aren’t depression. For example, we may not want our instrument to relate to anxiety. So we would give our new instrument and a well-established instrument that tests for anxiety to a group of people, and then correlate their scores on both instruments. This is an example of discriminant evidence: we would want the correlation to show no relationship or a low one. You want the instrument to be able to discriminate, or differentiate, between depression and anxiety.

    Reply

    • Tayler W
      Sep 09, 2020 @ 12:02:23

      Carly, I don’t know that I agree with your statement: “Criterion-related validity is important for mental health assessments because it tells you how valid the predictive power of the assessment is.” Instead, I think criterion-related validity tells you how strong the predictive power of the assessment is. A measure could predict someone’s suicidality, to use your example, about 50% of the time, and that doesn’t make it valid/invalid; it just means it is only right about half of the time. In that case, I think the counselor would have to determine how strong of a predictor they really need – for suicidality, like you mention, it’s really important to be confident in your prediction! Especially if an assessment might be used as evidence for something like getting a court order for involuntary treatment. I also don’t think that criterion validity is something done informally, not in the way we’re talking about it. My understanding of criterion validity is that it requires a great deal of data to confidently predict something – and we would never have that in the clinician’s role for a given client. That’s why we use measures: to make sure that we aren’t making predictions based on ourselves, which is to your point. In a clinical psychology class I took, one thing that we discussed at length was the necessity of not falling into the trap of assuming you are better at assessing your clients than the measures, because each counselor is prone to their own bias. I think you’re right that it’s important to realize that measures, while they may seem statistically overwhelming (“oh my gosh, how are we even supposed to feel confident in them?”), have more data behind them than we will ever have, and so we have to rely on them in that way!

      Reply

      • Carly Moris
        Sep 09, 2020 @ 21:24:32

        Hi Tayler! I agree with you; I should have been clearer in describing the importance of criterion validity, since you don’t want to use a word you are defining in its own definition. I think you did a good job clarifying that an assessment isn’t simply valid or invalid, but that we need to look at the degree of validity.

        Also, when I was talking about using validity informally, I wasn’t talking about it in the sense that we would be better at assessing our clients than the measures. We do need to rely on measures, but there will be times during counseling when we will be using other techniques. Page 72 of our textbook, under the conclusion on validation evidence, talks about applying validity to all types of client assessment, including: “Counselors often make predictions about clients behavior and they should examine what informal assessments they are basing those predictions on and whether there is an accumulation of evidence that supports those hypotheses” and “Research tends to support the contention that practitioners who rely only on intuition and experience are biased in their decisions. Therefore, it may be helpful for practitioners to focus on the concept of construct validity in their informal assessment of clients”.

        Reply

  2. Elias Pinto-Hernandez
    Sep 08, 2020 @ 10:00:56

    (1) Discuss your understanding of criterion-related validity (also known as: Prediction or Instrument-Criterion). In your discussion, include why this particular type of validity is common/important for mental health assessments.
    We already understand the distinction between reliability and validity. Historically, to validate a test, developers and practitioners have gauged validity in three different ways: content, criterion, and construct validity. Criterion-related validity refers to the degree of efficiency with which a variable of interest (the criterion) can be predicted. In other words, it is the degree to which the instrument can predict certain behaviors. Concurrent validity and predictive validity are the two types of criterion-related validity. The difference between them is the period separating the administration of the instrument and the gathering of the criterion information. Concurrent validity analyzes data from the same day the assessment is administered to see if it correlates with the predicted behavior. Predictive validity, on the other hand, examines the tested individual’s scores at a later time to see if the observed behavior corresponds to the gathered data. I have heard that psychology’s primary goals are to describe, explain, predict, and control (change or modify) behavior. Frequently, the clinician prefers a test that predicts future behavior. In the mental health field, criterion-related validity is indispensable. In order to work on modifying behavior with a client, most of the time the behavior should first be predicted. We can use the text’s attempted suicide example. However, some professionals believe that when it comes to tests used by counselors, the most important validation is construct-related, due to its parallelism to the techniques used in the scientific method, in which predictions are empirically tested. We shall see!

    (2) First, discuss the difference between convergent evidence validity and discriminant evidence validity. Second, provide an example of a hypothetical or real assessment where these two types of validity would be important – very common for many mental health assessments (hint: listen to my example in the lecture recording).

    I see convergent evidence validity as an indicator of positive correlations between two or more tests that attempt to measure the same construct (a construct refers to an unobservable theoretical concept, e.g., aptitude or intelligence). Discriminant evidence validity, on the other hand, indicates null or weak correlations between instruments measuring different things. It is reflected when two instruments referring to two constructs that should be different produce uncorrelated results.

    A hypothetical scenario: we administer a neuropsychological maturity assessment of 100 items to a group of 1,000 five-year-old children (quite a task!). A similar, well-established questionnaire is also given; then we compare the answers from 10% of the sample, expecting a strong correlation, which would support convergent evidence validity since the two tests measure the same construct.
    Now, to establish discriminant evidence validity, we would administer to our sample something like a test measuring emotional and behavioral disorders in children. Our hypothesis would be that the correlation is weak or null. Then we contrast the answers on our neuropsychological maturity assessment with those on the emotional and behavioral disorders test, and we should be able to support our hypothesis.

    Reply

    • Tayler W
      Sep 09, 2020 @ 12:02:01

      Elias, I really liked your assessment of psychology’s goals, but I wonder if the control/change/modify element is overstated. I don’t know that the predictive power of any of our tests really gives us the data to control a client’s suicidality, for instance. It merely gives us the ability to assess what a good treatment might be, and what the most urgent problem is. I guess in a way that is controlling it, but I don’t think that treatment is reliant on predictability of a behavior. Most clients come to therapy to fix something that they already know (or someone has told them) to be a problem, so prediction isn’t the issue at stake, except to assess severity. Further, I wonder how this example would go towards something like personality assessments – is prediction the most important thing there, too? I feel like it isn’t! I think the powerful element of measures isn’t their predictive power, but their descriptive power, of which predictive power is a part. For example, if a measure of suicidality doesn’t actually predict if someone will make an attempt, it isn’t describing their active suicidality, instead it’s looking at ideations or some other concept. So the real issue is description, not prediction.

      Reply

    • Christina DeMalia
      Sep 11, 2020 @ 14:01:09

      Hi Elias,

      I like that you talked about some of the aims of psychology. Assessments with criterion-related validity are important for predicting behaviors or outcomes. However, this is often the first step to making changes or modifications. Most clients will come seeking services with one or more presenting complaints. These will be things the client may be concerned about or want to change in their life. Especially with CBT, clinicians/therapists will look to identify unhealthy cognitive processes and correct them in order to also correct the undesired behaviors or symptoms they create. An assessment that is able to predict behaviors correctly and accurately will assist clinicians in coming up with treatment plans. They will know what issues might be of concern and be able to work on improving those with the client.

      I also appreciate your example of an assessment that might use convergent and discriminant evidence. I myself found it hard to come up with something much different from the anxiety and depression examples used. A neuropsychological maturity assessment being compared with an emotional or behavioral disorders assessment as discriminant evidence is something I wouldn’t have thought of myself, so I appreciate the example as a different way of looking at this.

      Reply

  3. Tayler W
    Sep 08, 2020 @ 15:16:24

    Criterion-related validity seems to me to be basically “can the instrument scores actually be used to predict the criterion?” This means, for example, does a high score on the suicidality scale (to use the example from the video) actually indicate that a person is suicidal? Or does a high score on the BDI equal a very depressed person? Without criterion-related validity, you can’t really use the measures in a clinical setting because they don’t tell you anything about the individual’s mental state/ability/etc. To use an academic testing example, if a person takes an “Algebra test” but it doesn’t actually measure their algebra knowledge (maybe it’s just multiplication), then it can’t be used to test out of Algebra I. To me, it’s a little confusing why a manual would need to go into criterion-related validity, because if an instrument is hailed as a measure of something, shouldn’t it be a base assumption that it measures that thing? But I do understand the need to test the criterion-related validity while developing a measure – otherwise, your measure can’t be used!
    Convergent evidence validity indicates whether the instrument is actually correlated with other measures that theoretically measure the same thing. So, if a measure is supposed to indicate a person’s level of depression, logically it should be correlated with other measures that indicate a person’s level of depression. Conversely, discriminant evidence validity indicates whether the instrument is actually not correlated with other measures that theoretically measure different things. So, a depression measure shouldn’t be correlated in theory with a measure for obsessive-compulsive disorder, because they aren’t the same trait or concept. Like criterion-related validity, convergent evidence validity and discriminant evidence validity give an indication of whether the measure actually measures what it’s supposed to. These types of validity are especially important for measures that might have some overlap or comorbidity. For example, people often have both depression and anxiety, but a depression measure shouldn’t correlate with an anxiety measure, because otherwise you can’t tell if an individual has both depression AND anxiety, or just depression.

    Reply

    • Beth Martin
      Sep 09, 2020 @ 19:16:59

      Hi Tayler,

      I can empathize with the confusion as to why a clinician who isn’t doing research themselves would need to be concerned about criterion related validity! Like you said, it’s a given that, by the time you have a manual/materials for a particular assessment, it’s been proven to actually accurately predict future behavior (and as someone who’s worked with assessment before, it’s not something I’ve ever considered!). After the lecture and reading this week, though, I’m starting to get the impression that it’s to allow you to have as much information at your disposal as possible, allowing the clinician to make an informed decision whilst being aware of potential pitfalls. I’m not entirely sure why we’d ever come across a measure that only accurately predicts behavior 25% of the time (at least I hope we don’t), but now at least we’re aware of it and know to look for those values. I’m not sure if any assessment is 100% accurate, or even close to it, so knowing that you do have a margin of error in xyz area may make us more mindful to not completely rely on one measure alone when you’ve got the potential for someone to fall through the cracks. I hope that makes sense!

      I’ve really enjoyed reading your takes on the discussion questions so far, thank you for sharing!

      Reply

  4. bibi
    Sep 09, 2020 @ 12:36:22

    1. Criterion-related validity refers to the extent to which the instrument you are using is related to an outcome criterion. Another way to look at this is: how well can the instrument be used to predict a specific outcome or criterion? There are two specific types of criterion-related validity: concurrent validity and predictive validity. Concurrent validity means there is no time lag between when the instrument is given and when the criterion information is gathered, which allows it to predict behavior given the current context. This would be helpful in a given session if you were looking at crisis management with a suicidal patient. You would want to know, at that given moment, how likely your client is to commit suicide. The other type of criterion-related validity is predictive validity. Predictive validity has a lag between the instrument administration and gathering the criterion content. An example of this is how likely a relatively stable depressed patient is to commit suicide in the future. You would want to know this in a mental health assessment because it might help you adjust your treatment plan down the line. It is important to have relatively high criterion validity because if a test only accurately predicts suicidality 50% of the time, you could end up making the wrong judgement and not protecting a patient when you should have, or going into crisis management with a patient who didn’t need it.
    2. Convergent and discriminant evidence validity come into play when you are gathering validity evidence based on your instrument’s relation to other variables. For example, if you are developing your own inventory of depression, you would want to know how it relates to other inventories of depression, as well as making sure it doesn’t relate to a measure of something completely different (for example, substance use disorder). Convergent evidence means that your instrument is related to other variables with which it should theoretically be related. If I am creating my own measure of depression, I want it to be highly correlated with the Beck Depression Inventory, meaning that I am accurately measuring depression. Discriminant evidence means your instrument is not correlated with instruments from which it should differ. In the case of my example, I would not want my depression inventory to correlate highly with the substance use inventory because that is not what I am trying to measure (in this case I would want a really low correlation). That means my instrument does not relate to a factor that it theoretically should not be related to. These two types of validity are going to be common for mental health assessments because they allow you to judge what exactly your assessment is measuring. If my instrument correlates highly with other depression instruments, I can assume I am actually measuring depression. However, if my instrument (for depression) were highly correlated with substance use inventories, I might actually be measuring something else.

    Reply

    • Abby Robinson
      Sep 09, 2020 @ 19:14:16

      Bibi, I liked how you included the patient treatment plan when describing the different timing involved in criterion validity. I think it is important to include this, because knowing the possible outcomes after administering the test matters. If a patient is more suicidal and likely to have suicidal thoughts, there would need to be immediate attention or a change in the treatment plan. I’m glad you added this because a test with low validity that says it’s going to measure suicidality could obviously be detrimental if its outcome isn’t appropriate for the client. But if the test was accurate and said the client had strong suicidal or self-harming thoughts, you could take action and change the treatment plan immediately.

      Reply

    • Lilly Brochu
      Sep 10, 2020 @ 09:23:35

      Hi Bibi,
      Within the mental health field, it is extremely important to gather information about your client to ensure their progress is going in a positive direction. Given the examples you have posted, concurrent validity seems to be the most helpful in understanding, diagnosing, and treating a client quickly. In a situation where the client was expressing suicidal thoughts, feelings, or behaviors, as the therapist, it would be the most efficient and safest way to administer an instrument and gather information through concurrent validity. However, given the same situation, if the therapist were to use predictive validity, it could have negative effects on the client’s progress due to the time lag. It is important that these measurements are indeed accurate because a therapist could make an incorrect decision, or take another approach, that can ultimately change the life of the client.

      Reply

      • Bibi
        Sep 10, 2020 @ 19:55:26

        Hey Lilly,
        When I was talking about using predictive validity with the suicidal patient, I was thinking more about seeing how likely they were to commit suicide down the line, but with a patient who wasn’t likely to commit suicide right away. Maybe predictive validity is the wrong type to use here? I might be using it wrong.

        Reply

    • Brianna Walls
      Sep 11, 2020 @ 16:47:14

      Hi Bibi! I liked how you used examples of when concurrent validity and predictive validity may be used in the mental health field with a client. The point you made about determining whether or not the client is suicidal in that given moment is very important under those circumstances, because it could be a matter of life or death in this case. Your examples really helped me differentiate concurrent validity and predictive validity, thank you!

      Reply

  5. Connor Belland
    Sep 09, 2020 @ 14:14:29

    Criterion-related validity is an important type of validity in that it is used when trying to measure how well a certain instrument predicts certain outcomes. Criterion validity is seeing how well a test can predict what it is trying to predict. There are two types of criterion-related validity: predictive and concurrent validity. Predictive validity is when a period of time passes between when the test is administered and when the criterion information is gathered. The only difference with concurrent validity is that no time passes between test administration and criterion gathering. Criterion-related validity is often used to predict behavioral outcomes, which makes it very important to the mental health field. Most instruments used in the mental health field are used to measure patients’ behaviors, mental status, mental abilities, and so on, which is why it is so important that these instruments have good criterion validity: it could affect diagnoses. Validity itself is a test’s accuracy, or its ability to measure what it’s supposed to be measuring, and criterion-related validity is a big part of that.

    Convergent evidence and discriminant evidence are both used as part of the correlational method when comparing one instrument to another. It is convergent evidence when an instrument is related to variables that it should in theory have a positive relationship with. It is discriminant evidence when the instrument has no correlation with variables that it should in theory differ from. For example, if I wanted to create a new instrument that tested for anxiety levels in high-school-aged students, I would want it to accurately predict anxiety for those students. For it to have convergent evidence, it should positively correlate with other measures of anxiety for adolescents. For it to have discriminant evidence, I wouldn’t want the instrument to correlate with tests that measure trauma or OCD, because those are different from what I am trying to measure.

    Reply

    • Abby Robinson
      Sep 09, 2020 @ 19:00:52

      Connor, I hadn’t thought about how strong criterion validity (or the lack of it) could affect the final diagnosis. But I liked that you brought that up, because without a strongly valid test for a certain disorder, someone may be wrongly diagnosed. If a test ‘says’ it’s going to measure depression but does not have convergent evidence, it could be interpreted that a client has depression when in reality they have PTSD (or something else). Having strong criterion validity may be helpful because when a test DOES have convergent evidence, that may be a determining factor in someone’s diagnosis between two similar disorders.

      Reply

    • Bibi
      Sep 10, 2020 @ 19:58:55

      Hey Connor,
      I really liked your description of how criterion-related validity is used in the mental health field. I felt like it really hit on the importance of why we use mental health assessments in patient treatment plans. Additionally, your example of convergent and discriminant validity was super easy to follow.

      Reply

  6. Abby Robinson
    Sep 09, 2020 @ 18:52:00

    My understanding of criterion-related validity is that it shows how well an instrument is related to an outcome criterion. The important detail is that criterion-related validity tests how well the instrument can predict one’s behavior. When measuring this particular domain, we as counselors wish to predict the future behavior of a client. One way to do this is concurrent validity, which is when there is no time between when the instrument or test is given and when the criterion is gathered; this would be an immediate prediction. Another way to gather criterion information would be through predictive validity. This is very similar to concurrent validity, but rather than gathering the information immediately after the instrument is given, there is a “waiting period” before the information is gathered. For example, if a counselor were to give a test to measure or predict suicidal thoughts in a client with depression, and I administered the test and then immediately dove into questions about suicidal thoughts and whether he/she was feeling suicidal right after they finished the test, that would be concurrent validity. If I were to give the same test to a depressed client but waited five to six months to ask them how they were feeling or if they had had any suicidal thoughts since the test was administered, that would be predictive validity. Another way to help describe criterion-related validity would be to include regression. Regression is important to criterion-related validity because it shows the relationship between variables. This can be helpful when gathering criterion information because the regression can determine the usefulness of a variable. A strong regression line usually means one variable has a strong relation to the other variable. This could be used in prediction, especially of typical behaviors that have strong relationships, i.e. clients who have depression have a strong likelihood of also having anxiety because there is a strong relationship between the two. Criterion-related validity is common in the mental health field because it is important to be able to predict types of behaviors for clients. It helps predict how the client’s behaviors are currently and/or will be in the future. This is important because there are certain behaviors that can be dangerous to clients (suicidal thoughts or self-harm) that need to be seen or predicted so that appropriate intervention can take place before they happen.
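
    The regression idea above can be shown with a tiny least-squares sketch: fit a line predicting one score from another, then use it to predict for a new client. The numbers are invented and chosen to be perfectly linear, which real scores never are.

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y ≈ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical: predict an anxiety score from a depression score.
depression = [10, 14, 18, 22]
anxiety = [21, 29, 37, 45]

slope, intercept = fit_line(depression, anxiety)
predicted = slope * 16 + intercept  # predicted anxiety for a client scoring 16
print(predicted)  # 33.0
```

    The steeper and tighter the fitted relationship, the more useful the instrument score is as a predictor of the criterion.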
    The difference between convergent evidence and discriminant evidence is that convergent evidence is when two instruments compared to each other have a positive relation between them. Discriminant evidence is when there is no correlation between the two instruments because they measure things that differ greatly. In theory, you would want a high validity coefficient with convergent evidence and a low validity coefficient with discriminant evidence. An example of convergent evidence validity would be two compared instruments measuring the same thing. So if I were to come up with a test to measure anxiety, it should have a high correlation with another well-established test that also measures anxiety. They are similar in that they test the same things appropriately. An example of discriminant evidence validity would be if I came up with a test for anxiety and compared it to a well-established test that measures psychotic disorders; there would not be a correlation because they both appropriately test what they are supposed to test, which isn’t the same thing. Here there is a low validity coefficient, which is what we would want with discriminant evidence validity.

    Reply

    • Beth Martin
      Sep 09, 2020 @ 19:27:21

      Hi Abby,

      I really like your take on using regression to gather evidence alongside concurrent and predictive validity. For some reason, I tend to see that as an abstract concept (x variable is related to y, not actually something I think to apply to real-world situations), and the textbook didn’t really help me in that regard. Your explanation did, though! So thank you for that! Intervention is something that’s incredibly important, but can be really damaging if done prematurely/unnecessarily. I worked in crisis intervention, and seeing people jump the gun on what they were expecting to see in the future, intervening before it was necessary, almost always hurt the relationship between the client and the team. I think you’ve highlighted another reason why it’s so important/common to have high criterion-related validity. It helps build trust between the client and clinician whilst stopping the clinician from relying on their own assumptions/beliefs and jumping the gun, since there are concrete measures to use (and trust) instead.

      Thanks for posting!

      Reply

    • Brianna Walls
      Sep 11, 2020 @ 16:35:09

      Hi Abby! I think it is important to mention that with concurrent validity the criterion information does not have to be collected directly after the instrument is administered. In class yesterday Dr. V implied that it is more of a judgment about how much time has passed between administering the instrument and collecting the criterion. For instance, even if a week has passed between the two, this can still be considered concurrent validity if, for the same instrument, the predictive-validity information was gathered months later. When the assessment is developed, the developer should provide the time frame so you can understand the difference between the two.

      Reply

  7. Beth Martin
    Sep 09, 2020 @ 19:10:03

    Criterion-related validity is a measure of validity that looks at an entire instrument (or assessment/tool), and determines whether it is able to predict future, relevant behavior. There are two aspects to this form of validity: concurrent and predictive. Concurrent and predictive are highly similar, but differ in one key way: the time in between giving the instrument and gathering the information on how well it predicts future behavior. With concurrent validity, the instrument is given and then immediately (or as close to immediately as possible) assessed for criterion-related validity (whether it predicts behavior). Giving someone an instrument that assesses whether a person is suicidal, and then immediately asking them if they are after they take the test, would be an example of this. The instrument is used, and the clinician can immediately assess whether or not it has predicted potential future behavior. Predictive validity, on the other hand, has a larger period of time between administering the instrument and assessing whether it has predicted future behavior. A good example of this is any form of college/graduate school testing, such as the GRE. The test is administered, and predicts how well an individual will do in a program based off of their scores. This prediction cannot be assessed until well into the future, when an individual is actually in, or has completed, the program. The reason it’s so important to have high criterion-related validity in the mental health field is that clinicians do have to predict future behavior in order to assess risk in some of their clients. If you have a client that you suspect may have suicidal ideation, you want to make sure that the measures you’re using accurately predict future behavior so you can provide support if they are truly at high risk of suicide. Diagnosis uses both concurrent and predictive validity to predict future behavior based on what we’re currently seeing in a client. 
As diagnosis directs treatment, it’s important to know that the measures we’re using have high criterion-related validity overall in order to accurately provide support and care tailored to behaviors we expect to see from said measures.
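As a rough numerical sketch of what checking predictive validity looks like (invented admissions-test scores and later GPAs, not real data): the validity coefficient is just the correlation between the test and the criterion collected later, and a regression line is what turns a score into a prediction.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 150

# Hypothetical admissions-test scores, and first-year GPAs gathered much later
scores = rng.normal(300, 15, n)
gpa = 0.01 * scores + rng.normal(0, 0.3, n)  # criterion, collected after a time lag

# The validity coefficient: correlation between predictor and criterion
r = np.corrcoef(scores, gpa)[0, 1]

# Least-squares regression line used to predict the criterion from a new score
slope, intercept = np.polyfit(scores, gpa, 1)
predicted = slope * 320 + intercept

print(f"predictive validity coefficient: {r:.2f}")
print(f"predicted GPA for a score of 320: {predicted:.2f}")
```

The point of the sketch is the workflow: administer the test, wait, gather the criterion, then see how strongly the two line up before trusting the instrument’s predictions.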

    Though they are both forms of construct validity, the difference between convergent evidence validity and discriminant evidence validity lies in how they relate a measure to other measures. Convergent evidence validity measures how closely a measure correlates with other instruments that measure the same construct, e.g. if a measure is measuring depression, it should measure it as accurately as the BDI. Discriminant evidence validity measures the extent to which an instrument is linked to instruments that don’t measure the same thing, e.g. a depression instrument should not measure anxiety. An example of when these forms of validity are really important is cases where comorbidity of disorders is common, such as depression and anxiety. If you are trying to assess for depression and depression alone, you want to make sure that you have high discriminant evidence validity to make sure you are not accidentally assessing anxiety at the same time. Additionally, you want to make sure that your measure has high convergent evidence validity so that it is actually measuring depression, similar to other measures that have been proven to do the same thing. This gives you further confidence that your instruments are accurately reporting what’s going on with your client.

    Reply

    • Pawel Zawistowski
      Sep 10, 2020 @ 14:00:56

      Hello Beth,
      I appreciate your example and explanation of convergent evidence validity and discriminant evidence validity. I believe assessing anxiety and depression indexes can be tricky since some symptoms may overlap. For example, behaviors such as panic attacks are not limited to anxiety disorders and can be seen in other mental disorders as well. It is important that there is high discriminant evidence between the two indexes even though items regarding panic attacks may overlap.

      Reply

    • Christina DeMalia
      Sep 11, 2020 @ 14:18:33

      Hi Beth,

      Your example of the GREs for predictive validity and their ability to predict how well an individual will do in a program made me think more about what types of validity the GRE might have. From my experience, the GRE seems to lack face validity. If a test is aiming to measure how well someone will do in a graduate program, it seems odd to measure their skills from math that is typically taught between 7th and 10th grade. There are also many graduate programs that would have very little to do with English and yet half of the test is on those skills. I am curious if maybe the reason the GRE still has high enough predictive validity to be widely used is not because of what it tests, but what a person needs to do in order to score well. In order to score high on the GRE, most people will agree that you need to spend a long amount of time studying things like vocabulary words you wouldn’t normally use or equations such as the surface area of a cylinder. This knowledge may not correlate to the work a person would do in a graduate program. However, the skills of studying, working hard, preparing, and putting in effort to get the questions correct could all be skills that are also useful in graduate programs to be successful.

      Reply

    • Lina Boothby-Zapata
      Sep 12, 2020 @ 16:06:59

      Hi Beth,
      Your concurrent validity example, about giving someone an instrument so the counselor can assess whether the person is suicidal or not, reminded me of something. I was surprised in class when Doctor V stated that sometimes it is better to implement the Beck Hopelessness Scale instead of a suicidal ideation instrument. One of the reasons Doctor V gave was that if the client has depression, one of the symptoms we as counselors can look at is the client’s motivation in his/her life. This makes me think about how well we should know the instruments in the counseling field, and also how important it is to develop our clinical skills in order to make decisions such as providing a diagnosis. Another thought I have on diagnosis and the application of instruments with high reliability and validity (in this case, concurrent validity) is that, as the word says, tests are “instruments,” meaning it is the counselor who has the responsibility to select the appropriate instrument based on the client’s needs, administer and score the test, communicate the results to the client, and make decisions. Hence, a reliable instrument that predicts behaviors provides the counselor with valid results that he/she can utilize in clinical practice. In other words, it is not the instrument that provides the client’s diagnosis; it is the counselor, with the information the test results have provided and the knowledge and clinical skills the counselor has developed through experience.

      Reply

      • Lina Boothby-Zapata
        Sep 12, 2020 @ 21:24:16

        Hey Beth, just to give a heads up that I was doing my reply in a Word document and I copied and pasted part of your post into my reply by accident.

        Reply

  8. Tanya Nair
    Sep 09, 2020 @ 20:43:42

    Criterion validity is one of the four types of validity; it uses one measure to predict the outcome of another. For instance, if someone takes a performance test during a job interview and the test accurately predicts how the person does on the job, the test is said to have criterion validity. There are two types of criterion-related validity: concurrent validity and predictive validity. Concurrent validity is how well a newer test compares to an older test and whether they will give the same answers. Predictive validity tells you how well a certain measure can predict future behavior. All types of validity are important to mental health assessments; however, criterion validity is important because it allows the clinician to have an accurate sense of what is going on. For instance, criterion validity can inform a diagnosis. Doctors use their experience to determine how your set of symptoms fits into what we know about mental health. A diagnosis is an important tool for you and your doctor, and this is exactly the way in which it is important for mental health assessments. It is always good to have high criterion validity because if a test predicts suicidality only half of the time, practitioners could make the wrong decision, which can cause a lot of harm. Clinicians are always forced to think of the future, and it is easier when they have strong criterion validity.

    Convergent evidence validity and discriminant evidence validity are both types of construct validity. They both look at the way in which two measures are related to each other. Convergent evidence validity measures how closely a measure correlates with other instruments that measure the same construct. For example, if a measure is measuring anxiety, it should measure it as accurately as the HADS-A. Discriminant evidence validity measures the extent to which two instruments that should not be related to each other are, in fact, observed not to be related. For example, an anxiety measurement should not measure PTSD. Convergent validity and discriminant validity are common for mental health assessments because they allow you to see what exactly your assessment is measuring. When an assessment correlates with others that are similar, it shows that you are measuring what you claim to be measuring. However, if the assessments do not correlate, then as a clinician you know that you might be measuring something else.

    Reply

    • Pawel Zawistowski
      Sep 10, 2020 @ 11:57:29

      Hi Tanya, I think you may have gotten concurrent validity and convergent evidence confused. Concurrent validity refers to the timing, not how a new test compares to an old test. Concurrent validity is used for an immediate prediction (relatively speaking); it tells us about the outcome of the instrument’s prediction just after taking the instrument. Again, relatively speaking, it can be a week if, say, the predictive-validity follow-up is 3 or 6 months out.

      Reply

    • Connor Belland
      Sep 12, 2020 @ 22:54:56

      Hi Tanya, I like your explanation of criterion related validity. Especially the example you used of using a predictive test in a job interview to help measure job performance. It reminds me of a more practical application of how the SAT or GRE tries to predict college performance. None of those tests are perfect but they do try really hard to have criterion validity.

      Reply

  9. Lilly Brochu
    Sep 09, 2020 @ 21:23:44

    Criterion-related validity measures how well the instrument is related to an outcome criterion. In other words, is the instrument an accurate predictor of a specific criterion, or does the instrument accurately predict the client’s behavior in the future? There are two different types of criterion-related validity, known as concurrent and predictive validity. In concurrent validity, there is no time lag between when the instrument or test is given and when the criterion information is gathered; it is used to make an immediate prediction of behavior, such as a diagnosis. An example of concurrent validity would be a client being given a depression assessment and the therapist obtaining a clear prediction of their level of depression right away, following the completion of the assessment. In contrast, with predictive validity there is a time lag between when the instrument is given and when the criterion information is gathered. An example of predictive validity would be a student taking the SATs for admission into college and seeing if the SAT scores had a strong relationship with the student’s grades in college. If there was a strong correlation between the two variables, there would be evidence that the SATs were predicting performance accurately. Criterion-related validity is common and important within the mental health field, education, and other domains because it can be used to predict one’s mental health, progress, or academic performance. It is especially important in the mental health field because it can be used to make an immediate prediction of behavior, or a diagnosis of a client. Without it, there would be no way of accurately predicting future behavior, or of knowing whether the treatment the client was being given was the best, most supportive, and most effective option.

    Convergent evidence means that an instrument is related to other variables to which it should be related. For example, if the measurement is assessing one’s anxiety levels, then it should share a relationship or correlate with other instruments that measure anxiety levels, such as the BDI. A high validity coefficient is anticipated with convergent evidence because a low validity coefficient would convey that there is an issue with the instrument. On the other hand, discriminant evidence means that an instrument is not related to variables from which it should differ. An example of discriminant evidence would be an instrument created to measure one’s anxiety levels not correlating with an instrument that measures depression.

    Reply

    • Maya Lopez
      Sep 10, 2020 @ 18:14:10

      Hey Lilly!

      Although I agree with your logic in describing convergent validity testing, and with comparing a new measure to an established one to check for the high validity coefficient we would expect, I do disagree that the BDI would be the best choice. Even though depression and anxiety have high comorbidity rates, the BDI measures depression, which could corrupt the comparison and not give a true convergent check. Just another thing to think about: comorbid disorders when assessing convergence, and validity in general! I’ve definitely learned that so many factors go into determining validity for assessments; it’s not as simple as the weight-scale example of weighing 90 pounds at noon and making sure one still weighs 90 at 1:30 pm. Crazy!

      Reply

  10. Cassie Miller
    Sep 09, 2020 @ 22:08:16

    Criterion-related validity can be immediately associated with prediction. In other words, how well does the instrument you are using predict a certain outcome? For example, criterion-related validity has been examined in the GRE instrument since it is used to predict an individual’s academic success in graduate school. The strength of its criterion-related validity is very important since many colleges/universities use an individual’s GRE score to determine whether or not they are an appropriate candidate for their program. Criterion-related validity consists of both concurrent validity and predictive validity. The main difference between these two types of validity is the length of time that passes between when the instrument is given and when the criterion information is collected. When concurrent validity is used, for the most part, no time passes between when the instrument is provided to the individual and when the criterion information used to test the strength of the instrument is collected. This form of validity allows for a prediction that occurs right away. For example, a counselor might provide an individual with an anxiety assessment instrument and then immediately use the results to judge whether a high score pointing to an anxiety disorder is actually predictive of that individual’s behavioral tendencies. When assessing an individual using predictive validity, a certain amount of time passes between when the individual takes the instrument and when the criterion data is collected. For example, a counselor might provide a mother who is four months pregnant with an assessment judging the likelihood that she will experience symptoms of post-partum depression, and then follow up with the mother after she has given birth to see if the instrument was valid in its predictions. 
This length of time can often allow the researcher to confirm or refute the initial instrument’s results based on whether or not the criterion information corresponds to the initial prediction. Criterion-related validity is very common/important when conducting mental health assessments because it allows us to come up with a baseline for our clients, as well as track their progress throughout the period of time that they are receiving counseling. It is important to use this baseline prediction to assess whether there has been progress for the client or any changes to their initial goals/diagnosis. With this said, it is most important to make sure that the instruments you are using have high criterion validity so that you know they have a greater likelihood of measuring what they intend to measure (allowing you to give your client the best treatment). If the criterion validity were low, you would not be able to trust the results as much and could mistreat or misdiagnose your client.
    When looking at convergent evidence and discriminant evidence, the main difference is how each instrument relates to other variables. With convergent evidence it is important that the instrument relates to other variables positively. Furthermore, with this form of evidence you want a high validity coefficient. An example could revolve around an instrument you developed to help diagnose individuals with Schizophrenia. You would want this instrument to be positively related to another well-developed/supported instrument that is currently used to assess Schizophrenia. If it is not positively related to this instrument, that would point to an error in your instrument’s validity. The opposite occurs with discriminant evidence, because here the instrument should not be correlated with other instruments from which it should differ. Using the same example, we would want my Schizophrenia assessment instrument to not correlate with an Eating Disorder assessment (I know this is a dramatic example, but bear with me). These two mental disorder instruments should share a very low validity coefficient, since their assessments should not correlate. In general, both of these forms of evidence should help point you in the direction of whether your instrument is measuring what it is supposed to.

    Reply

    • Elias Pinto-Hernandez
      Sep 11, 2020 @ 10:35:57

      Hi Cassie,
      I totally agree with your example of post-partum depression. I am inclined to believe that if clinicians detect a series of symptoms, they should administer an instrument that allows them to predict not only behavior but a possible disorder, and then follow up to confirm, diagnose, and treat.

      Reply

    • Lina Boothby-Zapata
      Sep 12, 2020 @ 16:03:53

      Hi Cassie,
      Reading your part about discriminant evidence validity made me reflect a little more on your example. Currently, I am thinking about Bipolar Disorder and Major Depressive Disorder. The two disorders have symptoms that overlap, and Bipolar Disorder can be misdiagnosed, especially if the counselor doesn’t give the therapeutic process enough time to begin to observe manic behaviors. Assessments for these two disorders are supposed to show high discriminant evidence validity, so a Bipolar Disorder assessment should make an effort to differentiate itself from, for example, the Beck Depression Inventory II. I am guessing that in this situation the criterion domain of manic behaviors would be added, because depressive symptomatology could overlap in both tests. Now, if I am not mistaken, in your example between a Schizophrenia instrument and an Eating Disorder assessment there has to be high discriminant evidence validity and a low correlation. Your example is clear because it uses two opposite criterion domains, so the discriminant evidence validity is high; I am just not sure how the discriminant evidence validity would look for instruments measuring Depression and Bipolar Disorder, knowing that depressive symptoms overlap in both disorders.

      Reply

  11. Anne Marie Lemieux
    Sep 09, 2020 @ 23:42:19

    The reason prediction or instrument-criterion validity is so important to mental health is that it can potentially be used to apply appropriate, timely interventions. If an instrument-criterion assessment has high validity, it can predict possible future behaviors, which can lead to more accurate treatment. This can be especially useful in predicting and preventing suicide attempts. Thinking about this question led me to wonder about the social justice issues our country is facing. I wondered if police officers were given these types of assessments to rule out potential aggressive behavior. It was interesting to learn that in Massachusetts, where police brutality is low, officers are given MMPI-2 assessments as well as CPI personality tests to help indicate issues. However, in Minnesota, where George Floyd was killed, trainees are only required to have an interview with a licensed psychologist.

    The difference between convergent evidence validity and discriminant evidence validity is that convergent validity shows a clear correlation between two or more assessments testing the same thing. For example, if my child is tested using the WISC-V and then the Stanford-Binet intelligence scales and the tests are congruent, that is convergent evidence. Discriminant evidence validity shows that measures that should not be related aren’t. However, there is no set cutoff for how high or low the intercorrelations need to be. I would like more information about discriminant evidence, as I find it confusing.

    Reply

    • Lilly Brochu
      Sep 10, 2020 @ 09:22:10

      Anne,
      Thank you for bringing a different example to this blog post. I think it is important to incorporate social justice into this conversation because of the state of our country today. Your comments about whether or not police officers (or even other high-authority figures in our society) are assessed to predict or prevent any further aggressive, inappropriate, or fatal actions or behaviors do generate some important questions and concerns about our justice system. The psychological assessments that police officers undergo should be revised and changed in the states that experience high police brutality, but the probability of that happening is most likely low. ☹ Furthermore, these assessments should be administered throughout their careers to gather information and predict future behaviors. For example, there could be an event that triggers one to be more aggressive, and without frequent assessments this could be potentially dangerous to themselves and those around them. Great points, Anne!

      Reply

    • Elias Pinto-Hernandez
      Sep 11, 2020 @ 12:38:57

      Anne,
      I agree with Lilly’s reply. It was a good idea to incorporate the criminal justice system into the discussion. A high-validity assessment can be used to screen out unsuitable candidates. I have been in the States for only a few years, and I am not too familiar with the selection of law enforcement agents here; however, in Puerto Rico it is a requirement that every applicant take a psychological test. Nevertheless, when there is a shortage of candidates, the government administers the psychological test after the cadets complete the academy. I wonder if something similar happens in Minnesota. I had a professor in PR who was in charge of administering the test for correctional officers, and he complained about the practice. I guess that the end justifies the means. However, I believe a high-validity assessment is a must for any law enforcement agent, not only at the beginning but through their entire career, to detect any possible disorder.

      Reply

  12. Pawel Zawistowski
    Sep 10, 2020 @ 01:03:10

    Criterion-related validity refers to how well the overall instrument can measure and predict an outcome. It tells us if the instrument is good at predicting a certain behavior, outcome, or criterion. An example of this is whether SATs predict future academic performance. Criterion-related validity is important in counseling because mental health counselors use assessments to better understand their clients, what the clients’ needs are, or which form of treatment may be best suited for an individual. For example, we may use an assessment which predicts if a patient is suicidal. Criterion-related validity will give us insight into how well an assessment measures the likelihood that a patient may in fact be having suicidal thoughts or may even follow through with suicide. Concurrent validity and predictive validity refer to the timing of an instrument’s prediction. Concurrent means there is no time lag and the prediction is immediate, whereas predictive validity refers to a time lag between now and when the counselor follows up with a patient to see how they are doing. For example, if a patient were to take an anxiety inventory and a counselor asked what triggers their anxiety right after the assessment, that speaks to concurrent validity (how well the instrument predicts just following the assessment), whereas predictive validity refers to the counselor following up with his/her patient after an extended amount of time (e.g., 3 months).

    Convergent evidence validity is used to measure the similarities and correlation between instruments. Discriminant evidence validity does the opposite: it measures whether two instruments are testing different variables. For example, if I were to take a bipolar inventory and an OCD inventory and the validity coefficient between them was high, it would mean that they are too similar and are essentially measuring the same thing, not discriminating between the two constructs. This would be a bad thing because they are supposed to be designed to diagnose different disorders. However, if I were to compare the two and the validity coefficient was low, it would mean that the tests are doing what they are supposed to be doing, which is measuring different variables. If I were to create a new schizophrenia inventory, one way to check whether it is measuring what it is supposed to measure is to put it up against other schizophrenia inventories and collect convergent evidence. In this case, convergent evidence with a high validity coefficient tells us that the two are in fact measuring the same variables, which is evidence for the instrument being effective.

    Reply

    • Cassie Miller
      Sep 10, 2020 @ 18:09:26

      Hi Pawel,
      I think your example of criterion-related validity and an assessment instrument measuring suicidality brings forth a very important point. Even though you used this example to further explain concurrent validity and predictive validity, it also allows us to consider why measuring an instrument’s validity is so important in the first place. For behaviors that are high risk, such as suicidality, it is essential that the instruments we are using to predict an individual’s future behavior have high criterion-related validity. Even though we cannot completely eliminate the standard error of estimate, we need to make sure that it is very small when using these instruments to assess an individual’s potential risk/possibility of harm to themselves. If an instrument that was used to predict this behavior had low criterion-related validity, we would not be able to trust the results, and relying on them anyway would be neglectful and extremely unprofessional.
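As a quick illustration of that standard error of estimate (with made-up numbers, not from any real instrument): for a validity coefficient r and a criterion standard deviation s_y, the standard error of estimate is s_y × sqrt(1 − r²), so the higher the validity coefficient, the tighter the predictions.

```python
import math

def standard_error_of_estimate(s_y: float, r: float) -> float:
    """SEE = s_y * sqrt(1 - r^2): the typical size of prediction errors."""
    return s_y * math.sqrt(1 - r ** 2)

s_y = 10.0  # hypothetical standard deviation of the criterion scores

# As the validity coefficient rises, the prediction error shrinks
for r in (0.3, 0.6, 0.9):
    print(f"r = {r}: SEE = {standard_error_of_estimate(s_y, r):.2f}")
```

Notice that even a validity coefficient of 0.6 still leaves prediction errors at 80% of the criterion’s spread, which is why a “moderately valid” instrument is not enough on its own for high-risk decisions.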

      Furthermore, like you mentioned in your description of convergent evidence and discriminant evidence, we could also compare our results to other recent and scientifically supported instruments; this is another way to examine our own instrument’s validity.

      Reply

    • Connor Belland
      Sep 12, 2020 @ 23:33:11

      Hi Pawel, I really like your example of how criterion-related validity is used in the clinical setting, using predictive tests with patients to get a better understanding of them, because really, understanding the patient is one of the most important parts of being an effective therapist. I also feel like predictive tests like depression inventories can almost move therapy along faster, because if you already know a patient is depressed based on a test, then you don’t have to spend the time figuring that out just through talking with them.

      Reply

  13. Karlena Henry
    Sep 10, 2020 @ 12:48:37

    1: ​Criterion-related validity examines how well an instrument successfully predicts the outcome of the behavior being measured. This is important, but doesn’t immediately void the effectiveness of the measurement.
    The example the book uses is examining the relationship between SAT scores and GPA for admission into higher education schools. As you mentioned in the lecture, for schools with a high application rate, there needs to be set minimum criteria to eliminate applicants from consideration. It reminds me of a conversation with Sid Dalby, the head recruiter for the Ada Comstock program at Smith. She explained for the vast majority of applicants, their applications would go through a scenario I described above, but for applicants for the Ada Comstock (non-traditional) program, taking SAT scores into consideration would (in most cases) be impossible. When I was applying to Smith, it had been over 20 years since I’d taken the SAT, and my knowledge base was completely different. If they required a current SAT score, I probably wouldn’t have applied at all. So, for the non-traditional aged applicants, the admissions committee weighed community college GPA and their essay as primary criteria for admission. When I was accepted, I had the same expectations as the traditional students for coursework. In these two scenarios, if they used the same evaluation, I wouldn’t have been considered at all, since I didn’t have the requirements for application, and the same would hold true for the traditional-aged students. Expecting them to have a wealth of life experience would be (in most cases) impractical, as at 17 years old, they didn’t have the same time as a 35-year-old to develop their experience base.
    Criterion-related validity is especially important in psychological testing because in many cases, the subject of interest can be very serious in nature. Say, for instance, a client is presenting with suicidal ideation. If we give them a test designed to measure depression instead of suicide risk, it would lead us to believe their presentation was something it wasn't, and the delay might give them the time to follow through and make an attempt.
    2: Convergent evidence considers the relationship between instruments that are evaluating similar behaviors as a test of validity. For example, say we as a class are creating our own instrument to measure depression. One of the tests we could perform is to have our subjects take the Beck Depression Inventory. When we are conducting the experiment, our subjects would take both instruments, and if we put the two scores on a scatterplot, there should be a visual relationship between the two tests, complete with a regression line. The Beck has enough evidence of validity behind it that if there were a problem, presumably the issue would be with our class design, which would give us the opportunity to revisit our instrument and make adjustments.
    Discriminant evidence is the flip side of convergent. Take the scenario I just discussed, but instead of having the subjects take the Beck Depression Inventory, they take the Beck Anxiety Inventory. If there is a strong correlation, it would demonstrate our instrument is measuring anxiety, not depression. This would either direct us to change the focus to anxiety instead of depression, or tell us we need to adjust the design. If, when they take both, there is little to no correlation, that would give validity to our design as a depression inventory. Combined, these two measures of validity let us feel more confident in our instrument.
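    In correlational terms, this two-part check boils down to a pair of coefficients: a high r with the BDI (convergent) and a near-zero r with the BAI (discriminant). A minimal sketch with simulated scores, where all numbers and the assumed relationships are purely illustrative, using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "true" depression severity for 100 subjects
depression = rng.normal(50, 10, 100)

# Our hypothetical class-built inventory tracks depression, plus noise
new_inventory = depression + rng.normal(0, 5, 100)
# The Beck Depression Inventory also tracks depression (convergent check)
bdi = depression + rng.normal(0, 5, 100)
# The Beck Anxiety Inventory tracks an unrelated construct here (discriminant check)
bai = rng.normal(50, 10, 100)

# Convergent evidence: our inventory should correlate highly with the BDI
r_convergent = np.corrcoef(new_inventory, bdi)[0, 1]
# Discriminant evidence: it should correlate weakly with the BAI
r_discriminant = np.corrcoef(new_inventory, bai)[0, 1]

print(f"convergent r = {r_convergent:.2f}")      # expect a high positive value
print(f"discriminant r = {r_discriminant:.2f}")  # expect a value near zero
```

    On a scatterplot these show up exactly as described above: the convergent pair clusters around a rising regression line, while the discriminant pair shows no pattern.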

    Reply

    • Carly moris
      Sep 12, 2020 @ 21:03:28

      Hi Karlena! You brought up a good point with your school's Ada Comstock program. While SATs are a traditional way to assess success in college, it wouldn't have been a fair way to assess you or other people taking a non-traditional college route. This is an example of why it is so important to look at the population assessments are meant for and see if they apply to the person taking them. If they don't, then the assessment results might not be an accurate representation of the person, just like SAT scores wouldn't be an accurate reflection of how you did in college. An assessment can have good validity for one population but not another.

      Reply

  14. Cailee Norton
    Sep 10, 2020 @ 13:51:56

    1. It is important to understand the degree to which a specific instrument has criterion-related validity. This examines the extent to which an instrument relates to an outcome criterion at a systematic level. Essentially it tells us the degree to which an instrument provides a solid prediction of certain criteria: we can expect it to predict a specific outcome depending on the instrument and what it is you are seeking. A common example of this would be the SAT, in that the SAT is believed to be a good predictor of academic performance in a collegiate setting. It's equally important to understand the different types of criterion-related validity: concurrent validity and predictive validity. Concurrent validity involves no time lag between when an instrument is administered to a client and when the criterion information is gathered. This provides an immediate prediction. This can be applicable to the mental health field if we are looking for an immediate diagnosis of a client who we suspect could be experiencing depression. Predictive validity, on the other hand, has a time lag between administration and collection of the criterion information. As a whole, criterion-related validity is commonly seen in mental health assessments because instruments with it allow us to predict a client's behaviors or current mental status. The application of such knowledge can directly affect the treatment that client receives; thus instruments with good criterion-related validity are necessary to make such predictions and choose courses of action. For example, using an instrument with strong criterion-related validity might allow a counselor to predict whether their client will perform well in a specific career field based on their scores.

    2. Convergent evidence shows an instrument is related to other variables to which it should theoretically be positively related. Discriminant evidence, on the other hand, shows there is little to no correlation between an instrument and measures of constructs from which it should differ. The evidence presented is vastly different. Both types of evidence are important in evaluating the validity of an instrument: you want convergent evidence (showing it is positively related to other validated instruments measuring the same construct), and you want discriminant evidence (showing low correlation with instruments measuring different constructs). An example of this would be if I had a client whose depression I wanted to test. I would create an instrument to measure that and use it in comparison to other established depression tests such as the BDI. If they are positively related, then I know there is convergent evidence and that my instrument has good convergent validity, and I would then feel confident in its use. If I took that same instrument I created for depression and compared it to an established instrument that looks at substance abuse, and found that my instrument isn't discriminant enough (in that it has a high correlation with that substance abuse instrument), I would know there is something wrong with the instrument I've created, since I would expect that correlation to be low.

    Reply

  15. Destria Dawkins
    Sep 10, 2020 @ 15:16:41

    1.Criterion-related validity measures how well one measure predicts an outcome for another measure. A good example would be the SAT. SAT scores are based on criterion-related validity because colleges are looking at these scores to see how well a student is expected to perform.
    2.Convergent validity takes 2 measures that are supposed to be measuring the same construct & shows that they are related. It shows how closely the new scale is related to other variables & other measures of the same construct. Discriminant validity shows that measures that shouldn’t be related, are not. When there is discriminant validity, the relationship between measures of different constructs should be really low. For example, you could take a new assessment instrument for generalized-anxiety disorder & compare it to other well established instruments of the same disorder, in order to see how well the two instruments correlate. Obviously, we wouldn’t want to take an anxiety assessment instrument & compare it to a schizophrenia assessment disorder.

    Reply

    • Destria Dawkins
      Sep 10, 2020 @ 15:20:56

      *schizophrenia assessment instrument

      Reply

    • Maya Lopez
      Sep 10, 2020 @ 18:54:36

      Hey Destria,
      I agree with your last statement that we wouldn't want to compare an anxiety assessment to a schizophrenia one IF we were looking to test convergent validity; however, if we were looking to test discriminant validity then we WOULD want to compare the two and hope that the correlation is low! I think in your first example with the SATs you may have been thinking of what we learned last week with criterion-referenced tests, where scores are all compared to a certain criterion to be met, such as scoring 70% correct, instead of each score being compared to other people's scores? Perhaps I'm wrong, but there are so many vocab words that all sound so similar! I think question 1 was talking about a type of validity that measures how likely an assessment is to give us an accurate measurement to predict a behavior or intelligence. Again, I might have just read things wrong and if so, I'm sorry! Just trying to clear things up for myself haha!

      Reply

      • Destria Dawkins
        Sep 11, 2020 @ 10:29:09

        I agree! There are so many terms that sound so close to one another! But I am definitely going to recheck my response to make sure. Thanks! (:

        Reply

  16. Elizabeth Baker
    Sep 10, 2020 @ 15:32:35

    Criterion-related validity is used to identify how well scores on an instrument can predict performance (scores) on another measure of the same criterion. It's used to determine which individuals who have taken the instrument will perform well in the area of that instrument's interest. For example, if interviewees were given a test to assess how well they will perform their work responsibilities, the scores of that test would or SHOULD determine how well or poorly the interviewees will do. This can also be used to see which interviewees would be chosen to continue onto the next wave of interviews or would be accepted for the job/position. This type of validity is very important in the mental health field because assessments can be used to determine whether someone fits the criteria for depression, anxiety, etc. It's difficult to make a diagnosis by only having a verbal consultation with a client, especially if the client doesn't know why they're experiencing intense and/or harmful emotions or thoughts; so having assessments readily available to help us determine what clients may be struggling with is very helpful. Even if the client knows that they might be experiencing depressive-like symptoms, assessments with good criterion-related validity will help confirm the diagnosis and identify any other disorders that the client may be experiencing as well. In summary, assessments with good criterion-related validity help counselors come to an affirmative diagnosis when working with clients.

    Convergent validity measures whether assessments are positively related to each other. To simplify, a new assessment would have good convergent validity if its scores positively correlate with scores on other assessments of the same criterion. For example, if scores on an English assessment are high and positively correlate with scores on other English assessments, we can conclude that this English assessment accurately measures English skills.
    Discriminant validity concerns assessments that should NOT be positively correlated with each other. This validity tells us whether items in an assessment of one criterion positively correlate with an assessment of a different criterion. Normally, we don't want this correlation between non-similar assessments because we don't want an English assessment testing skills in Mathematics, for example. We want an assessment to measure constructs in that specific criterion, not assess constructs that have nothing to do with that criterion.
    When giving assessments that measure depression severity to clients, for example, we want to make sure that assessment has good convergent and discriminant validity. The assessment should measure sadness (emotional regulation) and life stressors, for example, and should NOT measure how much time they spend singing in a day. Having items that measure depression (e.g., emotional regulation, life stressors, etc.) will better ensure that scores will positively correlate with other assessments that measure depression severity. Having items that measure things outside of depression (e.g., time spent singing, time spent sitting outside, types of games one plays, etc.) will probably hinder the scores from the assessment and may bring the counselor to come to an inaccurate diagnosis or no diagnosis at all.

    Reply

    • Lina Boothby-Zapata
      Sep 10, 2020 @ 17:37:26

      POST

      Dr. Whiston defines criterion-related validity as the degree to which an instrument can predict behavior; for example, the GRE is intended to predict the performance of students during their program, and the Armed Services Vocational Aptitude Battery (ASVAB) is designed to predict performance in training and future jobs; in other words, what your skills are and what you can perform. There are two types of criterion-related validity: concurrent validity and predictive validity. First, concurrent validity is characterized by the capacity to gather criterion information immediately, which means that the results can be provided right after the test is answered. Hence, this kind of instrument offers an instant prediction. It is also important to keep in mind that counselors use concurrent validity when they want to use these predictions to support a diagnosis. In contrast, predictive validity has the same function of predicting behaviors or other domains, but the difference is a lag before the results. There is a lag between the administration of the test and the collection of the criterion information; to illustrate, investigators might want to know whether women who received DV counseling have since terminated the domestic violence relationship they were involved in. This example shows us that there has to be a time lag between when the instrument is administered and the moment of collecting the criterion information. One of the observations made by Dr. Whiston is that if we want to predict, it is necessary to have a reliable criterion that gives consistency, with a minimum of unsystematic error.

      Evidence based on relationships with other variables is common and important for mental health assessment. This assumption makes me think about possible situations that, as counselors, we can walk into. The DSM-5 states that "The common feature of depressive disorders is the presence of sad, empty, or irritable mood, accompanied by somatic and cognitive changes that significantly affect the individual capacity to function." Now, it is commonly known that one of the signs of depression is also suicidal ideation. Let's play out the scenario that the counselor has already diagnosed the client with Major Depressive Disorder, and the client has not disclosed information about suicidal ideation. However, the counselor has observed that the client's wrist has several superficial cuts. Hence, the counselor at this point has a red flag, accompanied by the client's history of cutting and suicidal thoughts reported by his/her PCP. Now, the Beck Scale for Suicidal Ideation, which is designed as a predictive instrument, will assess and provide the counselor with information about how far his/her client has gone with these ideas: first, providing a screen for suicidal ideation; second, providing information about whether the client indicates avoidance of death if presented with a life-threatening situation; and third, whether the client has attempted suicide before. Having this information from a test with high validity and strong prediction like the Beck Scale for Suicidal Ideation, the counselor has the confidence and faculty to decide what type of intervention is necessary at the moment. Furthermore, the prediction of these behaviors will help us to reduce situations such as having a client die by suicide while in counseling with you.

      Another common method for validating instruments is evidence based on relations to other variables. This type of validation looks to analyze the relationship between the instrument and other variables within the same instrument or between other instruments (this is not clear to me yet). Dr. V, in his recorded lectures, used the example of depression and stated that indicators of depression could be feeling blue or lack of interest, also pessimism or sadness. As counselors, what we really need to look at is whether the instrument we are selecting contains variables that are in a relationship with the criterion domain we want to assess, in this case depression. Furthermore, if these are the variables or criteria that we want to assess in our client, then the variables we are selecting need to be relevant and pertinent, as the author highlights. Among these relations of variables, there are two methods that the author presents to us: first, a relationship between variables that are highly related to the criterion domain of the instrument, which is named convergent evidence validity; second, where there is no relationship between the purpose of the instrument and other variables, which is named discriminant evidence validity. As a consequence, counselors find variables that are consistent and inconsistent with the criterion domain.

      If we go back to the example of depression, as a counselor, what I am really looking for between the Beck Depression Inventory II and the Beck Scale for Suicidal Ideation is convergent evidence validity. The Beck Depression Inventory II contains the variable of Suicidal Thoughts or Wishes, with four statements around this criterion. These are the following: I don't criticize or blame myself more than usual; I am more critical of myself than I used to be; I criticize myself for all of my faults; I blame myself for everything that has happened. Suppose my client has a middle or high score on the criterion of Suicidal Thoughts or Wishes; I may well then administer the Beck Scale for Suicidal Ideation, which will explore suicidal ideation across a wider range. Hence, having two or more instruments that present convergent evidence validity will allow the counselor to explore in a deeper way the red flags that the client presents but has not yet verbalized, in this case suicidal ideation. Another thought is that tests can identify symptoms and support the counselor in the process of diagnosis, and additionally provide information that the client was probably not ready to talk about, but that was an urgent matter to communicate because the client was acting out. Conversely, discriminant evidence validity will help us to differentiate between instruments' variables, and I think it will help in the selection process by showing what should not relate to what I am really looking to assess in my client.

      Reply

    • Anne Marie Lemieux
      Sep 12, 2020 @ 11:33:48

      Elizabeth, I agree that using a criterion-related validated instrument can take the guesswork out of diagnosing a client. It can also provide evidence to validate a hypothesis a therapist may have, or challenge the diagnosis. Thank you for clarifying that we want an assessment to measure what it is intended to and rule out that other criteria are being measured; that helped me to better understand discriminant evidence.

      Reply

    • Tanya Nair
      Sep 12, 2020 @ 15:23:23

      Hi Elizabeth, thank you for your post. It is extremely important to have assessments in place to assess an individual, compared to just relying on verbal consultations. There is often comorbidity with mental illness, and it is a good thing to have accurate measurements that are able to screen out various things instead of grouping them together. Yes, I think that having a good alpha coefficient across all the questions and criteria in the scale will help make sure that there is both convergent and discriminant validity.

      Reply

  17. Zoe DiPinto
    Sep 10, 2020 @ 16:38:56

    Criterion-related validity, more so than other types of validity, is important in the mental health field because it is based around determining future action. If a test has strong criterion-related validity, this means that the test is a good predictor of information, diagnosis, symptoms, or behavior. For example, if depressed individuals took a test with strong criterion-related validity that measured depression, the test would give a score that accurately predicted that they are, indeed, depressed. Conversely, if depressed individuals used an instrument labeled as a depression test that gave them a score that predicted they all had symptoms of anxiety, this would have a low criterion-related validity score. The accuracy of the prediction is very functional because mental health workers can use these assessments to strengthen diagnoses, look for warning signs that warrant future action (such as likelihood to commit suicide), or prove improvement.
    Both convergent and discriminant evidence fall under the category of “evidence based on relations to other variables.” Broadly, this means that the validity of one scale is tested by comparing its relationship to other variables that it should/shouldn’t be correlated to. In convergent evidence, the instrument is deemed valid if it is positively related to another variable. For example, an instrument that measures insomnia should be positively correlated with mood disturbance and irritability. The opposite is true for discriminant evidence. The instrument is valid if it is negatively related to another variable. For example, the instrument that measures insomnia should be negatively related to feeling well-rested or energized. If the instrument in either case does not relate as predicted to the second variable, we know the instrument is lacking validity.

    Reply

    • Nicole Giannetto
      Sep 12, 2020 @ 14:12:47

      Hi Zoe! I liked how you explained why criterion-related validity is quite important to the field. You said that the "accuracy of the prediction is very functional because mental health workers can use these assessments to strengthen diagnoses, look for warning signs that warrant future action (such as likelihood to commit suicide), or prove improvement". This captures my feeling that assessments are so important to the field, because they measure the things that keep the study of psychology progressing. Having an instrument with high criterion-related validity is encouraging, I believe, because it opens up more space to connect the tools we are able to share with people with what we know, in the hopes of benefitting mental health overall.

      Reply

  18. Anna Lindgren
    Sep 10, 2020 @ 17:22:51

    My understanding of criterion-related validity is that it is how well an instrument can predict an outcome or relate to the criterion it is claiming to measure. For example, the BDI has good criterion-related validity because it is a very strong predictor of depression in a client. This type of validity is crucial to mental health assessments because we need to be able to depend on these instruments to actually measure what we’re trying to with our clients. If we were to use an instrument that we thought was measuring anxiety but it turned out to predict depression, we would probably not be any more informed on how to best treat the client who is struggling with anxiety.

    Convergent evidence validity is a way of comparing two assessments that are measuring the same criterion to see how strong their correlations are. Discriminant evidence validity, on the other hand, compares two instruments that measure completely different criteria, with the hope being that they have a low correlation, therefore proving that they are measuring different things. To give an example of this, let's say that you wanted to create a new inventory measuring depression. Since it is a new instrument, you need to make sure it is really measuring what you want it to, so you compare it to a tried and true depression inventory like the BDI. The stronger the correlation between the two, the better your convergent evidence validity is. Now you want to make sure that your new depression inventory isn't also measuring anxiety, something that has a high rate of comorbidity with depression. So you do another analysis, this time comparing your new depression inventory to a generalized anxiety inventory. You get a low correlation between the two, and that low correlation provides discriminant evidence that your new instrument is in fact measuring depression and not anxiety.

    Reply

    • Zoe DiPinto
      Sep 10, 2020 @ 18:56:18

      Hey Anna! I liked your description of criterion-related validity. I thought it was very clear and I was impressed with how you knew a specific test that measured depression. You’ve done your research! However, I had a different definition of convergent evidence than you, and I’d like to express my thoughts. I believe you are right in your definition of discriminant evidence in which the instrument is measured against another variable and to be valid, it must have a negative relationship. And what you describe as convergent evidence may fall under the category, but from my understanding, the instrument being tested (in your example, depression) does not have to be compared with an instrument also measuring depression. I believe it can be measured against an instrument that is supposed to have a positive relationship with depression. So, an instrument measuring depression should have a positive relationship with an instrument measuring lack of interest in activities or lethargy. I hope this makes sense, thanks for your post!

      Reply

    • Destria Dawkins
      Sep 11, 2020 @ 10:34:58

      Hey Anna! I agree that it is very important in the mental health field, for professionals to make sure that the instruments that we use are going to give us the accurate results that we need to move forward, otherwise the instruments would just be useless.

      Reply

  19. Nicole Giannetto
    Sep 10, 2020 @ 17:35:48

    1. Criterion-related validity is designed to measure how well a specific instrument is able to predict an outcome criterion. This particular type of validity is common and important for mental health assessments because it indicates whether an instrument has the potential to create a long-lasting and positive impact on helping clients and clinicians. It also has the ability to inspire and motivate new and current research that tests instruments used in the field.

    2. The main difference between convergent and discriminant validity is that convergent evidence refers to when an instrument is related to other variables to which it should theoretically be positively related, while discriminant evidence refers to when the instrument is not correlated with variables from which it should typically differ. An example of convergent evidence would be an instrument's depression measure sharing a relationship with similar constructs. In this case, the instrument designed to measure depression would correlate highly with a secondary instrument that is also designed to measure depression. The importance of convergent evidence is that it confirms for the clinician that the specific problem they are working on with their client is one that is known and fairly established in the field of psychology and medicine. An example of a discriminant validity problem would be when an instrument designed to measure depression correlated highly with constructs found in an instrument designed to measure anxiety. Discriminant evidence is important to the world of mental health because it helps ensure an instrument is not accidentally capturing a related but distinct disorder, which could negatively impact the client's experience and treatment plan.

    Reply

    • Viviana
      Sep 10, 2020 @ 23:19:33

      Nicole,

      I like how you indicated that criterion-related validity instruments have the potential to create a long-lasting, positive effect on clients and clinicians. Based on the scores of these instruments, clinicians can create treatment plans and choose therapeutic interventions depending on the needs of the individual client. Therefore, clinicians should have sound information about the client's problematic areas and a basic knowledge of the types of validity. As the textbook indicates, one correlation coefficient does not confirm or prove validity. Therefore, it's important that professionals compare instruments against other instruments and examine the relationship between convergent and discriminant evidence before trusting the client's scores. The motivation in clinicians you mentioned is important to the humanization of the field. Also, clinicians should consider that diagnosis is important for treatment to be effective, but treatment should involve more than a diagnostic code; it should take into account biological and environmental factors, strengths, prognosis, etc.

      Reply

    • Destria Dawkins
      Sep 11, 2020 @ 10:49:01

      Hi Nicole! I agree that when it comes to convergent validity, the two instruments should be positively correlated with one another, and when it comes to discriminant validity, the two instruments should show little to no correlation with one another.

      Reply

  20. Christina DeMalia
    Sep 10, 2020 @ 17:45:50

    (1)
    The definition of criterion-related validity given in our textbook is "the degree to which the evidence indicated that the items, questions, or tasks adequately represent the intended behavior domain." My understanding of this definition is that criterion-related validity looks at how well the assessment is measuring the criteria it is supposed to be. A test could have high reliability, in the sense that the same people get the same score every time. However, if someone consistently gets a low score on an assessment that measures depression, but is actually severely depressed, the results are not valid because the score is not accurately measuring the criteria in question. There are two types of criterion-related validity, which is also referred to as prediction or instrument-criterion validity. The first type is concurrent validity, which is when there is no time between the instrument being given and the criterion information being gathered. Predictive validity is the other kind, and is when there is a lag in time from when the instrument is given to when information on the criteria is gathered. An example of this could be if an assessment was given out to high school freshmen that was meant to measure whether or not the individual would graduate by their fourth year. To check if this assessment accurately measured the criteria, they would have to wait until four years later to compare.
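    The graduation example can be checked numerically once the four-year wait is over: correlate the freshman-year scores with the later pass/fail outcome (the point-biserial r, which is just Pearson's r with a 0/1 criterion). A sketch with simulated data, where the score-to-graduation link is assumed and built in purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical screening scores collected freshman year (time 1)
scores = rng.normal(100, 15, n)

# Criterion gathered four years later (time 2): graduated or not.
# We simulate an underlying relationship for illustration only.
prob_grad = 1 / (1 + np.exp(-(scores - 100) / 10))
graduated = (rng.random(n) < prob_grad).astype(float)

# Predictive validity coefficient: correlation between the time-1
# score and the time-2 criterion (the point-biserial r)
r_predictive = np.corrcoef(scores, graduated)[0, 1]
print(f"predictive validity r = {r_predictive:.2f}")
```

    The only difference from a concurrent validity check is the four-year gap between collecting `scores` and collecting `graduated`; the arithmetic is identical.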

    Since many assessments in mental health aim to measure a specific criterion that may be difficult to measure otherwise, the criterion-related validity is important to have for mental health assessments. If an entire depression assessment was focused on the person administering the test answering questions about how the individual looked, this would not be measuring the criteria for depression in a valid way. Having depression could result in changes in appearance such as looking tired, disheveled, weight gain or loss, etc. However, someone could be depressed and not show any of those physical traits. Similarly, someone could look tired, detached, and disheveled but not be depressed at all, and rather just be suffering from a night of no sleep due to studying. This is why it is extremely important that the criterion-related validity be high for mental health assessments. Someone scoring low on a depression assessment when they are actually depressed could prevent them from getting the help they need. Similarly, if someone scores high on a depression assessment, but that assessment is actually measuring the criteria for anxiety, they could receive the wrong diagnosis and therefore the wrong treatment.

    (2)
    Convergent evidence means an instrument is related to other variables that it should theoretically be positively related to. Discriminant evidence is the opposite: the instrument should not be correlated with variables from which it should differ. An example of an assessment where both types of validity would be important is a test of personality. If someone was suspected of having histrionic personality disorder, a personality disorder closely associated with extroversion, an assessment might be given to test criteria for extroversion. If that assessment had validity based on convergent evidence, it would have a high correlation with scores from already existing, well-established assessments of extroversion. This would show that the criterion that is supposed to be measured correlates with other assessments aiming to measure the same thing. If the assessment was valid based on discriminant evidence, its scores could be compared to well-established tests for other traits such as introversion or neuroticism. Since you want to be sure the test is only measuring extroversion, you would not want the results to be too closely correlated with a test for a different trait.

    Reply

    • Elizabeth Baker
      Sep 11, 2020 @ 23:05:26

      Hello Christina, I really enjoyed reading your explanation of criterion-related validity. It was clear and simplified! At first I had a little trouble trying to re-word my own explanation, and I still didn’t know how to fit in concurrent and predictive validity into it, but your explanation made it more clear for me. Thank you for that! As you said, these tests are very important to have so we can properly assess and diagnose client(s). Yes there are physical and internal features to each disorder/disability, but everyone can be affected differently. An individual who’s depressed could look healthy and walk around with a bright smile, but feel completely hopeless and have suicidal ideations when they’re by themselves. Even if someone’s walking around with a smile, they might have conflicting emotions that they’ve been trying to understand. Having this understanding in mind, it’s important to have assessments that help both the client and helper figure out what’s really going on.

      Reply

    • Nicole Giannetto
      Sep 12, 2020 @ 14:18:20

      I agree that being cognizant of both convergent and discriminant evidence is key to developing a deeper knowledge of an instrument's intended topic of measure and the facets that intersect with it, such as symptoms and etiology. As clinicians, we would want to be well educated and informed when it comes to understanding instruments and extracting ideas from them to develop a treatment for an individual or group of people.

      Reply

  21. Laura Wheeler
    Sep 10, 2020 @ 17:46:43

    1. My understanding of criterion-related validity is that it measures how well an instrument relates to the identified criterion; criterion-related validity indicates whether an instrument is a good predictor. There are two types of criterion-related validity: concurrent and predictive. The noteworthy difference between them is that concurrent validity can be assessed almost immediately, whereas predictive validity involves a time lapse. For example, if an assessment is given, concurrent criterion-related validity would allow for a result immediately or very soon after, whereas predictive validity would involve a time lapse related to the prediction. As an example, concurrent criterion-related validity would be extremely important in the mental health field in regard to crisis assessments. If a patient is in crisis and shares suicidal or homicidal thoughts, it is important to be able to implement an assessment instrument that allows for immediate results regarding whether this person is at risk. Predictive criterion-related validity is important in the mental health field as well, as it could be critical for an instrument that measures the likelihood of a recovering addict having a relapse based on a measurable factor. The example in the text regarding marriage illustrated the process of predictive testing very simply: if you have an instrument designed to identify couples that will remain married for x amount of time, you administer the test just prior to marriage and gather the criterion information after x amount of time.

    2. Convergent and discriminant evidence are both exceptionally important for understanding whether an instrument relates appropriately to the variable being tested. Convergent evidence shows whether an instrument is related to variables it should be positively related to. For example, a depression assessment should be positively related to other reliable, valid depression assessments. If a new depression assessment is created and it does not relate to established and successful assessments testing the same criterion, that would be concerning. Discriminant evidence shows whether an instrument is uncorrelated with variables from which it should differ. For example, a depression scale should not have a high correlation coefficient with an anxiety scale; the goal is for the depression scale to discriminate the depression criterion from other things, such as anxiety, otherwise the data will not reflect depression exclusively. Discriminant evidence is particularly important in the field of mental health because there are countless factors that could impact instrument results and effectiveness, and incorrect assessment results could greatly impact clients, diagnoses, treatment, etc.

    Reply

    • Tanya Nair
      Sep 12, 2020 @ 15:01:38

      Hi Laura, thank you for your post. Yes, I almost forgot about the time factor when describing the differences between concurrent validity and predictive validity. I think it is important that you bring this up as I did not in mine. Your examples clearly help define the meaning in a mental health setting and show how these two types of validity are important in the field. I think you have a good understanding of this material which was good for me to learn from. Thank you!

      Reply

    • Timothy Cody
      Sep 12, 2020 @ 19:53:57

      Hi Laura. Great summation of the two different types of criterion-related validity. In your opinion, which one do you think is more important and substantial in the mental health field? I would suggest that predictive is more important because, as a therapist, it would be best to see long-term results play out with a patient rather than immediate results from a patient who could potentially relapse over time.

      Reply

    • Anna Lindgren
      Sep 13, 2020 @ 13:00:23

      Hi Laura!
      Thanks for these definitions and examples. I especially liked the example of needing a concurrent criterion-related validity for clients in crisis situations. It’s a practical use for sure, that if your client is in an acute state of distress you would need an assessment that would give you a behavior predictor right away, and not in several days or even months. I imagine, though, that a predictive criterion-related validity assessment would also be important for follow-up care after the initial crisis has passed.

      Reply

  22. Maya Lopez
    Sep 10, 2020 @ 17:53:03

    (1) My understanding of criterion-related validity (or prediction/instrument-criterion validity) is that it is a type of validity that aims to measure how accurately an instrument or test predicts an outcome, typically, in our case, behavior. Basically: is the instrument going to be valid and give us scores that are accurate and dependable? I also learned about concurrent and predictive validity, which differ in the amount of time between the assessment and when the criterion information is gathered. Concurrent validity, much like its name suggests, involves little to no time between the test being administered and the gathering of the criterion information. Predictive validity, by contrast, involves more time between administration and seeing the outcome the instrument predicts. It is very important for mental health tests to be reliable and accurate. In some circumstances during therapy it can be more beneficial to have immediate answers to inform a diagnosis, such as knowing whether the client engaged in, or was thinking about engaging in, self-harm. An example of predictive validity would be the CAS, where one would take the test at the beginning of the program versus at graduation to predict how well one will do in the counseling profession.
    (2) Convergent evidence validity is when an instrument is related to another assessment in terms of variables or content, such as the Hamilton Anxiety Rating Scale (HARS) and the GADQ, which both assess anxiety. It would be good to find a high validity coefficient when comparing a new anxiety test to one of the instruments mentioned, because it would mean the new assessment correlates with empirically based tests like the HARS and GADQ. Conversely, discriminant evidence validity would be shown if we compared the new anxiety test mentioned above to the DTS, which measures symptoms of PTSD; we would not expect the correlation to be high because very different things are being compared, and it would be good to receive a low coefficient. It is important to know whether an assessment scores high or low on convergent and discriminant comparisons to know if your test will be a good predictor of exactly what you are trying to measure. One wouldn't want an assessment intended to measure OCD to turn out to be highly correlated with the BDI (poor discriminant evidence), because unless we were trying to examine comorbidity between the two disorders, it wouldn't help us truly know whether that person has OCD alone.

    Reply

    • Cailee Norton
      Sep 12, 2020 @ 10:52:31

      Maya,
      I really like how you've worded your definitions. I'm glad you stressed the importance of both predictive and concurrent validity within the mental health field. I think your example of concurrent validity is good, but I think it's important to apply the predictive relationship to our field as well. In a therapy setting, predictive validity could be useful in predicting the likelihood that someone would use the positive coping skills taught to them during sessions at a future time. Let's say we wanted to check back with that client after six months: we would administer our assessment after we as counselors have taught them various coping mechanisms, and then we would test them again after six months. I think you have a very clear understanding of this material, and like I said, I'm glad you gave your definitions in such terms, because sometimes these definitions really look alike. Great job!

      Reply

  23. Cassie Miller
    Sep 10, 2020 @ 18:23:13

    Hi Maya,
    I really liked how you added the importance of reliability in your response, since it is vital for an instrument to have this before even considering its validity. I also liked how you brought up the importance of concurrent validity, because we tend to focus on how predictive validity can provide a more accurate way to assess an individual's progress in comparison to their initial instrument response. However, like you brought up, we often need instruments with concurrent validity because we do not always have the luxury of waiting to compare our client's initial instrument scores to their future criterion information. We need to be able to trust the validity of the instruments that we are using in the moment so that we can provide our client with immediate treatment, especially if their behavioral tendencies put them or others in harm's way.

    Reply

  24. Timothy Cody
    Sep 10, 2020 @ 18:38:24

    (1) From my understanding, criterion-related validity is a type of validity that tests whether a specific instrument acts as a predictor of a certain measure. A good real-life use of this type of validity is examining GRE or SAT test scores in order to predict academic performance in college. This could be used to predict a person's progression with a certain mental health disorder. For example, for someone battling depression, an instrument could be used to predict whether or not depressive episodes will continue in the future; if its predictions hold, the instrument is valid. I think this is an important test of validity because, as practitioners, we should not be administering instruments that give no indication of the future course of a certain mental disorder. We should be tracking gradual progression to indicate whether the instrument measures what it is supposed to measure. One would not administer an anxiety measure and believe it will be a predictor of depression levels, for that would not measure what it is supposed to.

    (2) Convergent evidence validity is when an instrument measuring a certain variable proves to have a positive correlation with an instrument measuring a similar variable. Discriminant evidence validity is when an instrument measuring a certain variable proves to have little or no correlation with an instrument measuring a different variable. Both are types of validity evidence, except there is a positive relation in convergent evidence and an absence of correlation in discriminant evidence. Since validity tests whether an instrument measures what it is supposed to measure, discriminant evidence shows that the instrument does not overlap with constructs it should differ from. For example, an instrument that measures anxiety should not have a strong correlation with a separate instrument that measures Major Depressive Disorder, since it would then not be measuring what it is supposed to. An example of convergent evidence validity would be a positive correlation between my instrument measuring anxiety and an instrument that measures Generalized Anxiety Disorder. In the mental health field, it is important to determine whether a specific instrument correlates with a certain measure, and when comorbidity is a concern, these two types of evidence indicate whether a certain instrument should be administered.

    Reply

    • Zoe DiPinto
      Sep 10, 2020 @ 19:55:16

      Hey again Tim! Thank you for your statement on comorbidity. It helped me conceptualize the idea of convergent evidence. I was second guessing my understanding of the term because I was finding it difficult to accept that the positive relationship between two separate variables indicated validity after understanding that correlation does not equal causation. The concept of convergent evidence relies on external significant evidence that the two variables are positively related, but also distinguish the variables as separate from each other.

      Reply

    • Cailee Norton
      Sep 12, 2020 @ 11:05:49

      Tim,
      I’m so glad that you bring up that a medication prescribed for anxiety would not be a good predictor of lowered depression. The same logic applies to the instruments we administer, and it shows the importance of understanding the validity of our instruments rather than hoping they will work simply because we’ve administered something to the client. You also mention a great example of instruments that shouldn’t correlate, which is discriminant evidence. It’s important that we check this validity; as with your example, a measure of anxiety shouldn’t correlate with a measure that tests for Major Depressive Disorder. The real-world impact of this could be severe, as we could misdiagnose someone with anxiety when they’re being tested for something else entirely.

      Reply

    • Anne Marie Lemieux
      Sep 12, 2020 @ 11:17:19

      Tim, I liked your point that instruments validated against a criterion can be utilized to track gradual progression. I think that as clinicians we may not be objective about the level of progress that is happening. It may appear that improvement is not happening, but utilizing an assessment to clarify a client’s progress can give us an unbiased view of reality. It may just be that progress is not occurring at the rate we hoped but is in fact there, just gradual. I also appreciate your simplification that discriminant evidence shows no correlation between variables that it should not be related to. I had a hard time grasping the concept of no correlation showing validity, but through reading blog posts it became more understandable.

      Reply

      • Timothy Cody
        Sep 12, 2020 @ 19:19:22

        Thanks for your comment Anne Marie! I think another good way to look at discriminant evidence validity is to consider cases where a misdiagnosis is given. Take the example I gave: someone is given an instrument for anxiety, and a correlation is found with an instrument for Major Depressive Disorder. This could lead to a psychologist diagnosing a patient with depression when they do not have it. As we learned in class, this is called a false positive. We should be comparing instruments on the same variables in order to test for validity.

        Reply

  25. Brianna Walls
    Sep 10, 2020 @ 18:39:41

    1. Criterion-related validity, also known as prediction or instrument-criterion validity, is the extent to which a measure predicts a behavior, either in the future or right now. An example of criterion-related validity would be the SAT: if this instrument had high criterion-related validity, it would predict the individual's academic performance in college. There are two types of criterion-related validity, concurrent validity and predictive validity. Both measure the same thing; the main difference between the two is the time lapse between when the instrument is given and when additional criterion information is gathered. In concurrent validity there is no time lapse, and it is used when we want to make an immediate prediction, such as a diagnosis of depression. Concurrent validation predicts behavior based on the current context. An example of this would be administering an instrument that measures whether or not an individual is suicidal and then asking the individual afterward if he or she is suicidal. On the other hand, predictive validation is used to predict future behaviors. An example used in the book was that if we wanted to develop an instrument that could identify couples who will stay married for ten years or more, the test would be administered before the couple married, and then we would wait ten years to gather additional criterion evidence (whether they stayed married or not). Criterion-related validity is common and important for mental health assessments because, as mental health professionals, we are sometimes looking for an instrument that predicts future behavior. This is helpful in the mental health field because as clinicians we want to provide the best care possible to our clients. For instance, if a client is having suicidal thoughts, you want to make sure you are aware of the severity in order to provide proper care. If the instrument you administer only predicts suicidality half the time, this will not end in the client's favor.
    2. Convergent evidence means an instrument is positively correlated with other variables it should theoretically be related to. On the other hand, discriminant evidence means the instrument does not correlate with variables from which it should differ; in other words, you should be able to discriminate between dissimilar constructs. An example of discriminant evidence would be an instrument that measures depression: this instrument should not, in theory, be correlated with a measure of IQ, because you want to be measuring depression, not IQ. An example of convergent evidence would be if I wanted to develop a new instrument to measure depression; I would want it to correlate highly with other depression instruments so I could be sure I am actually measuring depression and not something else. If my instrument did not correlate or have a strong relationship with them, that would let me know there is a problem with my instrument and that I may be measuring a different construct, not depression as I intended.

    Reply

    • Viviana
      Sep 10, 2020 @ 22:22:48

      I believe there are circumstances in which mental health providers need to have an immediate clinical reaction based on the client's presentation and statements, and, as you mentioned in your post, clinicians want to provide the best care possible to clients; criterion-related validity serves that purpose. I wonder what tools clinicians in the mental health field can use to determine which instruments are adequate for assessing a suspected diagnosis. I would not think that clinicians wander around reading every instrument to check its reliability and validity, but rather that they have a list of tests their organization presents as appropriate. Many of the standard instruments that are well known in the field were created decades ago, and even when they are reviewed, it does not happen often. That creates another glitch: the world is evolving and the population is more diverse and inclusive, so diagnoses established that long ago may need to be adjusted to current human behavior. And what about other instruments that have been created and are significantly established, but that professionals in the mental health field lack access to, even though they could help counselors formally or informally assess a client's behavior?

      Reply

    • Elizabeth Baker
      Sep 11, 2020 @ 23:05:49

      Hello Brianna, I enjoyed your SAT example. I think this is a good example of an assessment that doesn’t have true criterion-related validity. It has been brought up in class that the SATs or ACTs don’t do a very good job of predicting performance in college. I understand that some colleges have a cut-off score to make the admission process easier, but I don’t think those types of exams predict how well students perform in college. There are many outside variables that could’ve affected a test-taker’s score (e.g., room temperature, test anxiety, feeling sick, lack of sleep, etc.). Regardless of their score, they could’ve performed much better than expected (if they had been admitted despite their SAT/ACT score).
      This validity is important in the mental health field because we need assessments that can accurately clarify current diagnoses, and assessments that can accurately predict future diagnoses. If we have assessments that can’t help us come to an understanding or definitive diagnosis, then we won’t be able to properly help our clients. In the end, we won’t be able to understand their current emotions and actions, or how they will/might affect them in the future. So yes, having assessments that have good criterion-related validity is VERY important in the mental health field.

      Reply

    • Anna Lindgren
      Sep 13, 2020 @ 12:51:35

      Hi Brianna!
      Your definition of criterion-related validity was really thorough and clear to understand. I liked the examples you included, and how you emphasized how important it is for assessments in the mental health field to be able to accurately predict behaviors as best they can. Without that, we would be flying blind, so to speak, and not as readily able to help our clients.
      I like your examples of convergent and discriminant evidence, but your definition makes it sound like you are comparing variables in the same instrument rather than comparing one instrument with another, as your examples state.

      Reply

  26. Viviana
    Sep 10, 2020 @ 20:36:56

    Criterion-related validity is another way to measure a test's validity, checking whether an individual's scores on a specific test closely match his or her scores on another test designed to measure the same thing. The comparison test should be a standard, well-established one, because this is how tests are demonstrated to be valid. If the scores on the test match the results of the standard, well-established test, then there is high criterion validity. Criterion-related validity is also known as prediction or instrument-criterion validity, and it is the extent to which scores predict a future behavior. The class textbook indicates that the SAT is meant to be a good predictor of how well a high school student will do in college: if the SAT had high criterion-related validity, it would predict the individual's future academic performance. There are two main types of criterion-related validity: concurrent and predictive validity. The difference between these two methods is the lapse of time between when the test is given and when the criterion information is collected. Concurrent validity is marked by a short time lag in gathering criterion information. A good manual will explain what the time frame is, as it could be hours, days, weeks, months, or years. This type of validity allows us to make quick predictions of future behaviors based on the current context. For the predicted behavior, we should ensure that the specific criterion is being taken into consideration, as well as its psychometric qualities, since we want the instrument to provide useful information that can help the counselor and, consequently, the client. Predictive validity functions to predict future behaviors as well, but it differs in the time between the administration of the test and when the results are gathered.
    Criterion-related validity is commonly used in mental health assessment, as counselors would expect the test to relate to other indicators of mental health need and consequently identify the client's need for mental health services. Additionally, a test with high criterion validity allows counselors to predict future behaviors and quickly create a treatment plan that addresses the diagnosis.

    Convergent and discriminant validity are the most common types of validity evidence for assessment instruments. They differ in that one wants a high validity coefficient for convergent validity, whereas discriminant validity should show a low validity coefficient. Convergent evidence makes sure the instrument is measuring what it claims to measure, meaning that measurements that should be related are in fact related; hence a high validity coefficient is what the scores should show, because it demonstrates the instrument measures what it is supposed to measure compared to similar instruments. If an instrument is being developed, it is compared to another well-established instrument, and we would want a high validity coefficient; if it is low, it means something is wrong with the new instrument. On the other hand, discriminant validity is the ability to discriminate between two different things: it verifies that what should not be related is not. For example, suppose a child is being assessed for aggressive behavior and is observed on a playground to see how he interacts with other children and whether aggressive behavior such as kicking, hitting, or biting is demonstrated. If all that is observed is an energetic child who runs and screams with excitement, then the child scores low on aggressiveness; but if the written instrument showed a high score for aggression, then we might need to look in a different direction, such as ADHD instead of aggression.

    Reply

  27. Carly Moris
    Sep 12, 2020 @ 21:24:37

    Hi Viviana! You sound like you have a strong understanding of criterion-related validity, and you have a good definition of convergent and discriminant validity. But I'm not sure your example would be considered discriminant validity. I agree that if a child scored high on aggression on a written instrument and low during behavioral observation, we would have to question the results of the assessment. Like you said, he may not be aggressive, or there could have been something wrong with the behavioral assessment; maybe the child just doesn't act aggressively on the playground. One important part of convergent and discriminant validity is that you are comparing the measure you want to assess to a well-validated measure. If both measures aren't well validated, you won't be able to draw any conclusions about validity. I think you would also want to use a large sample size so you can see if there is a correlation between the two measures.

    Reply

Adam M. Volungis, PhD, LMHC
