Location: CBC Campus - SWL 208
Time: Mondays from 5:30-8:15
Week 08: 3/2/20
Topic and Content Area: Group Designs and Methods
Reading Assignment: Kapp and Anderson, Chapter 10
Assignments Due:
A-02: Reading Quiz, due 03/02/20
A-04a: Weekly Journal 04, due 03/08/20
Other Important Information: N/A
Checking in on the group work plan
Key components of evaluation methods
Threats to validity
Types of group designs
Follow up on how people are doing and see if there are questions about the group work plan.
In later chapters we will be talking about:
Qualitative designs and applications
Consumer satisfaction
Read ahead if these are models you plan to follow.
Any method for evaluation needs to include:
Sample selection
Data collection
Analysis
Reporting
What kinds of sampling methods are there? [Whole Class Activity] Discussion of which sampling methods groups are planning to use.
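For a group leaning toward a simple random sample, here is a minimal sketch of drawing one from a participant roster; the roster names and sample size are made up for illustration:

```python
import random

# Hypothetical roster of everyone eligible for the evaluation
roster = ["participant_%02d" % i for i in range(1, 41)]

# Simple random sample: every person on the roster has an equal chance
# of being selected for data collection
random.seed(20)                      # fixed seed so the draw can be reproduced
sample = random.sample(roster, k=10)
print(sample)
```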
History: Events that happen outside of the evaluation, or contextually during it, that affect the outcome (e.g., the coronavirus outbreak, people being laid off).
Maturation and the passage of time: General growth that happens on its own. Especially true for children, but can be true for anybody.
Testing: The pre-test affects the outcome of the post-test.
Instrumentation: A change in the tools used to collect data during the period of data collection (e.g., changing questions on the pre-test/post-test).
Statistical regression: Significant changes (improvement or deterioration) that are based on participants' extreme behavior or position beforehand (think "nowhere to go but up/down").
Selection bias: Problems related to the selection of participants (a more random and larger sample is better).
Experimental mortality and attrition: Participants not completing the intervention or process.
Ambiguity about the direction of causal influence: The direction of impacts and influencing conditions is not clear (does depression cause lack of sleep, or does lack of sleep cause depression?).
Design contamination: Participants change behaviors or actions because they are being evaluated.
Diffusion or imitation of treatments: Qualities assumed to be unique to the intervention may be used by other professionals (many professionals use strengths-based practice, not only those who work in a "strengths-based" program).
Interaction Effects: Threats to internal validity interact with each other.
Defining and describing the intervention or program elements to be evaluated
Establishing the time order of the independent variable
Manipulating the independent variable
Establishing the relationship between the independent and dependent variables
Controlling for rival hypotheses
Using at least one control group
Assigning the persons who are subjects in a random manner (see the random-assignment sketch below)
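A minimal sketch of what random assignment to an experimental and a control group could look like; the participant IDs and group sizes are hypothetical:

```python
import random

# Hypothetical list of people who have consented to participate
participants = ["P%02d" % i for i in range(1, 21)]

# Random assignment: shuffle the list, then split it in half
random.seed(8)                       # fixed seed so the assignment can be reproduced
random.shuffle(participants)
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]   # receives the intervention
control_group = participants[midpoint:]        # does not receive the intervention

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```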
Work in small groups to discuss a potential evaluation, or an aspect of your group, that you could test with a pre-test/post-test (even if you aren't going to do this or wouldn't be able to), and create a simple example pre/post-test.
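As a rough illustration of how pre-test and post-test scores might be compared once a group has collected them, here is a minimal sketch; the scores are invented for the example:

```python
from statistics import mean

# Hypothetical pre-test and post-test scores for the same five participants
# (e.g., a 10-item knowledge quiz scored 0-10)
pre_scores  = [4, 5, 3, 6, 5]
post_scores = [7, 6, 5, 8, 7]

# Change score for each participant: post-test minus pre-test
changes = [post - pre for pre, post in zip(pre_scores, post_scores)]

print("Individual change scores:", changes)
print("Average change:", mean(changes))
```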
Case study approach
One-group post-test design
One-group pre-test and post-test
Post-test only with nonequivalent groups
Experimental design
Matched comparison groups
Are you going to use a group design for your program evaluation, or what method will you be using?
What type of group design method are you going to use?
What challenges do you think you will encounter?
Description: The group in which an intervention has been introduced is the focus of the study, which chronicles the progress and process of the group, describing the changes (or lack of change) after the introduction of the intervention.
Strengths:
Detailed exploration
Ability to understand complexity
Rich narrative
Limitations:
No comparison group
The case may not have the same qualities as the sample
Difficult to weigh elements of the narrative
Description: This design involves the implementation of an intervention with a group of people for whom that intervention was designed, and then the administration of a simple test or other measurement to ascertain the results of that intervention. This can be described as an A-B design, with A being the pre-intervention status and B representing the post-intervention status.
Strengths:
The design is simple and practical
The intervention is intended to increase a positive outcome
The intervention is delivered and measured
Limitations:
There are concerns about the validity of the findings, the validity of the measurement instrument, and consequently the inability to present the effectiveness of the intervention with a high degree of confidence.
Description: A target group is assessed prior to the intervention, and after the intervention it is assessed again using the same measurement tool. The design is intended to measure the change that was presumably caused by the intervention.
Strengths:
Can show a comparison between before and after the intervention
Progress is likely attributable in part to the intervention
Limitations:
Threats to internal validity
Historical considerations
Maturation
Testing and instrumentation
Description: The post-test-only aspect of this design means that the measurement of the intervention's impact is administered only after the intervention. The experience and success of other clients also served by the agency, who have not received the intervention, are also measured.
Strengths: The simplicity of the post-test-only design combined with a simple, accessible method for comparison.
Limitations: Concerns about the ability to compare nonequivalent groups and the lack of randomization mean that strong questions about validity persist.
Description: The persons to be studied are randomly assigned to two groups. One group is administered the intervention and the other group is not. The condition and status of both groups (i.e., the experimental group and the control group) are measured.
Strengths:
Allows the ability to control for threats to internal validity
Presents a higher degree of confidence in the results of the evaluation and the effectiveness of the intervention
Limitations:
The cost and effort to create this type of experimental design are higher than for other designs
Ethical concerns associated with withholding treatment
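Once both groups have been measured, the comparison itself can be very simple; here is a minimal sketch using invented post-intervention scores:

```python
from statistics import mean

# Hypothetical post-intervention outcome scores (higher = better)
experimental_scores = [7, 8, 6, 9, 7, 8]   # group that received the intervention
control_scores      = [5, 6, 5, 7, 6, 5]   # group that did not

# With random assignment, a difference in group means is easier to attribute
# to the intervention rather than to pre-existing differences between groups
difference = mean(experimental_scores) - mean(control_scores)
print("Experimental mean:", round(mean(experimental_scores), 2))
print("Control mean:     ", round(mean(control_scores), 2))
print("Difference:       ", round(difference, 2))
```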
Description: The comparison group is not created by randomly withholding the intervention; instead, a similar (matched) group that has not received the intervention is used.
Strengths:
May not present the dilemmas posed by an experimental design
Is more compatible with ongoing service delivery
Offers some degree of rigor, as it attempts to answer questions about the effect of experiencing the benefits of the intervention
Limitations: Potentially challenging to identify comparison groups