Outcomes Analysis for Evaluation

This tipsheet by Paul Chandler discusses outcomes analysis for evaluation with brief sections on impact, demographics vs. outcomes, storing data, measurement tools, intervals, tracking, and limitations.

Original Publication Date: October 3, 2013
Last Updated: February 19, 2023
Estimated Read Time: 5 minutes

Impact

Your programs have a positive impact on your clients, improving their lives in both the short term and the long term. Outcomes analysis helps you demonstrate the impact your program has on your clients. “Outcomes” are the measurable changes over time in the lives of your clients. Demonstrating outcomes is generally part of program evaluation, but it can also be conducted as a distinct process. The results can be used in almost any report or presentation.

To demonstrate outcomes for our clients in the short term, we can measure immediate changes in a client’s knowledge, understanding, or behavior. We can also measure long-term changes, called “situational outcomes” – for example, how a client’s symptoms have decreased or how their level of functioning has increased. We can also show outcomes for our overall organization, such as a change in the number or type of services we deliver, or the number of trainings we provide to our community of practice.

Demographics vs. Outcomes

Demographic fields are useful for describing your client base. You can also describe your activities, such as the number and types of services you provide. These measures of activities are generally called “outputs”. Once you collect data on outputs, you can explore correlations between them. Demographics, correlations, and outputs, however, are not “outcomes”. The key quality of outcomes is change over time. Over the course of time, your intervention has an impact on the lives of your clients. By measuring this impact, you can refine the support you provide to clients, and you can present the success of your program in a concrete, quantifiable way.

For example, if you summarize Harvard Trauma Questionnaire (HTQ-30) scores indicating PTSD symptoms in your clients, those scores provide a useful descriptive statistic. If you go one step further and record HTQ-30 scores at two or more points in time, you can use this data to show change over time. You can show a client’s HTQ-30 score at the time of their intake and at a follow-up point, a process referred to as “pre and post intervention.” This process allows us to show the change our clients experience over time.
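As a rough illustration of the idea, the following Python sketch computes change over time from intake and follow-up scores. The file name and column names (client_id, intake_htq, followup_htq) are hypothetical placeholders, not part of the original tipsheet.

import pandas as pd

# Load pre/post HTQ-30 scores from a simple spreadsheet.
# The file name and columns (client_id, intake_htq, followup_htq) are
# hypothetical placeholders.
scores = pd.read_excel("htq_scores.xlsx")

# Change over time: follow-up score minus intake score.
# Lower HTQ-30 scores indicate fewer reported symptoms, so negative
# change values represent improvement.
scores["htq_change"] = scores["followup_htq"] - scores["intake_htq"]

print(scores[["client_id", "intake_htq", "followup_htq", "htq_change"]])
print("Mean change across clients:", round(scores["htq_change"].mean(), 2))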

Storing Data

In his webinar Demonstrating Client Improvement to Yourself and Others: Setting up an Evaluation System to Succeed, Greg Vinson shows how to set up a database to record measures. Ideally someone within your own organization will set up this database, which can be as simple as an Excel spreadsheet. The system may be very modest, but by creating it internally you are able to maintain it internally. Some organizations bring in a consultant or volunteer to set up a system, but once the developer leaves, the people in the office do not feel comfortable maintaining (or even using) it. It is best to start with something very simple that you can develop and manage yourselves, and expand the system as you gain experience. Data recorded in the spreadsheet can be analyzed within Excel or imported into other statistical packages.
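A minimal internal system could be a single spreadsheet with one row per administration of a measure. The sketch below creates such a file with Python and pandas; the layout, column names, and the two example rows are illustrative assumptions only, and the same structure can just as easily be maintained directly in Excel.

import pandas as pd

# One row per administration of a measure, so the file grows over time
# and change can be calculated later. Column names and the two example
# rows below are illustrative placeholders only.
columns = ["client_id", "measure", "administration_date", "score", "time_point"]

records = pd.DataFrame(
    [
        ["C001", "HTQ-30", "2023-01-10", 2.8, "intake"],
        ["C001", "HTQ-30", "2023-07-12", 2.1, "follow-up"],
    ],
    columns=columns,
)

# Save to an Excel file that staff can keep updating by hand.
records.to_excel("outcome_measures.xlsx", index=False)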

Start with one construct you want to measure.  Many offices focus on PTSD, for example.  You can measure psychological constructs, such as levels of depression or PTSD, or categorical changes, such as improvement in functioning, stability in housing, level of employment, and activity in social settings.

Choosing a Measurement Tool

Once you choose a construct, choose a measurement tool that has been validated for your population, or a tool that is well established within the community of practice. For example, the Harvard Trauma Questionnaire (HTQ-30) has been validated for use with torture survivors and refugees who have experienced trauma.

In his webinar Outcome Evaluation for Torture Treatment Centers: Concepts and Strategies, Ken Miller examines a wide range of measurement tools for therapeutic settings. He discusses how to proceed when a measurement tool has not been validated for your own population. If you need to develop a questionnaire to fit the specific experience of your population, you can begin by collecting individual narratives and holding focus groups. The indicators described by participants in these informal settings can then be adapted into a questionnaire. Ken Miller shows how this methodology was used to assess mental health in post-war Afghanistan, in the development of the Afghan Symptom Checklist (ASCL).

If you use a new scale, you should consider simultaneously using established scales (such as the HSCL, the HTQ, or the Beck Depression Inventory) to collect data on the same construct with two or more scales. In this way you can show that the measures you collect with your new scale and the measures you collect with established scales yield comparable results.
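One simple way to check this agreement is to look at the correlation between the two sets of scores collected from the same clients. The sketch below assumes a hypothetical spreadsheet with one column per scale; the file and column names are placeholders.

import pandas as pd

# Assumed columns: one score from the new scale and one from an
# established scale (e.g. the HTQ-30), collected from the same clients.
df = pd.read_excel("scale_comparison.xlsx")

# A strong positive correlation suggests the new scale measures
# something comparable to the established one.
correlation = df["new_scale_score"].corr(df["htq_score"])
print(f"Correlation between new scale and HTQ-30: {correlation:.2f}")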

Setting Intervals

Once you choose a construct and a measurement tool, plan a way to record two measurements: for example, one measure at the start of a client’s services and a second measure at some later point in time. Any interval can work – three months, six months, or another period – but choose an interval that fits your normal course of treatment and the existing processes of your center. Once you choose an interval, adhere to it consistently when collecting data, so you can develop consistent, comparable records for all clients.

Tracking

You will also need a system to track the administration of forms and measures to clients. Depending on the size of your client base, this tracking may also be a spreadsheet or a small database. Developing a tracking system is not a trivial step – to collect data at regular intervals, you will need a systematic approach, and it must be reviewed on a regular basis. Consider using a calendar program with scheduled reminders. You can also use a spreadsheet to list clients, their start dates, and the anticipated dates to administer each measure. Your system should identify when a measure is overdue and prompt you to initiate contact with both the clinician and the client.
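A tracking spreadsheet like this can be reviewed with a short script. The sketch below assumes a hypothetical tracking.xlsx with each client’s start date, a yes/no column for whether the follow-up has been completed, and a six-month interval; adjust the names and interval to match your own system.

import pandas as pd

# Assumed columns: client_id, start_date, followup_done (True/False).
tracking = pd.read_excel("tracking.xlsx", parse_dates=["start_date"])

# Six-month follow-up interval; substitute whatever interval your
# center has chosen.
tracking["followup_due"] = tracking["start_date"] + pd.DateOffset(months=6)

# Flag clients whose follow-up date has passed without a recorded measure.
today = pd.Timestamp.today().normalize()
overdue = tracking[(tracking["followup_due"] < today) & (~tracking["followup_done"])]

print(overdue[["client_id", "start_date", "followup_due"]])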

Limitations

We need to recognize and acknowledge the limitations of our methods and measures. When you summarize your findings, include a discussion of the limitations of the measurement tool and the measure itself. For example, if you report Global Assessment of Functioning (GAF) scores, your findings should note that the GAF is highly subjective. In addition, we need to account for the effects of the person recording the scores – any possible bias introduced by the “rater”. If you do not see significant differences in measurements taken over time, you may need to increase your sample size or check for other factors affecting your results.
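To check whether a pre/post difference is statistically meaningful, a paired t-test can be run on the intake and follow-up scores. This is a minimal sketch using the same hypothetical file and columns as the earlier example, not a method prescribed by the tipsheet.

import pandas as pd
from scipy import stats

# Assumed columns: intake_htq and followup_htq for the same clients.
scores = pd.read_excel("htq_scores.xlsx").dropna(subset=["intake_htq", "followup_htq"])

# Paired t-test: are follow-up scores different from intake scores?
t_stat, p_value = stats.ttest_rel(scores["intake_htq"], scores["followup_htq"])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, n = {len(scores)}")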

For each outcomes evaluation, we need to review the validity and reliability of our data. Many resources explain why this process is needed and how to conduct it. Simple methods within Excel and SPSS allow you to evaluate your data’s validity and reliability. See examples within Greg Vinson’s webinar Demonstrating Client Improvement to Yourself and Others: Making Sure Client Numbers Reflect Client Reality, as well as the resources within the list of Tools and Links.
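As one concrete example of a reliability check (a sketch, not tied to the webinar’s exact steps), internal-consistency reliability of a scale can be estimated with Cronbach’s alpha from item-level responses. The file name and layout below are assumptions: one column per questionnaire item, one row per client.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    Each column is one questionnaire item; each row is one client.
    """
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Assumed file: one column per item of the scale being checked.
responses = pd.read_excel("item_responses.xlsx")
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")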

As you conduct these steps, you will soon have clear information about the impact your program has on your clients. This information will inspire your staff and help you communicate your results to your colleagues.

Download: Outcomes_Analysis_for_Evaluation_TipSheet

Additional Resources