| |
Tailoring Evaluations |
|
Topics |
|
Who does evaluations? |
|
Who is your audience? |
|
Working with clients |
|
What is tailored? |
|
Formative vs. summative evaluations |
|
Scientific vs. practical considerations |
|
Managing evaluation projects |
|
Detecting cheaters and other unintended consequences |
|
|
Readings & Handouts |
|
Rossi, et al., Chapters 2 & 3 |
|
Wholey, et al., Chapters 1, 2, 4, 9, 10, 23-28 |
|
Reserve #2, #9, #10 |
|
Report - Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research Needs - GAO-11-176 |
|
Program evaluation can be a useful tool for policy analysts and program managers because it can help shed light on the relative effectiveness of different policy options. For example, education reform proposals (2002) often focus on smaller class sizes, single-sex classrooms, and uniforms. But how effective are these interventions? Evaluations can often provide this type of information. |
|
Read this essay about why science matters and why it is a way of life from the NY Times (6/1/08). In many ways, a goal of this class is to get you to employ more of a scientific perspective as a manager and analyst.
|
|
At the same time, you have to be a critical consumer of "science" and information. See this interesting commentary from the Washington Times about claims made during the health reform debate (2009). This set of handouts also provides useful insight into the importance of having the right perspective when it comes to evaluating risk, data, and things purported to be fact. |
| Policy debates often involve "experts". But what is an expert? See this interesting article from USA Today (7/18/05). So how do you know whom to trust and what information to believe? Take Dr. Dwight Lundell. He has argued for years that fat and cholesterol are not the main causes of heart disease and that inflammation in the artery wall is the real culprit. More important, he claims that the expensive statin medications used to lower cholesterol may actually be exacerbating arterial disease. He is a heart surgeon with 25 years of experience and 5,000 open heart surgeries. Is he an expert? He has certainly sold a lot of books, and a quick Google search produces many instances where he is referred to as such. At the same time, he lost his medical license in 2008, and there are plenty of other reasons to suggest he is a quack. |
|
It is also important for evaluators to use numbers whenever possible. The problem, of course, is that people often have trouble understanding what numbers mean. Watch this short clip from David Letterman for some examples of the problems people often have in interpreting numbers.
|
| Another example of the influence that phony "experts" can have on society is the claim that the MMR vaccine can cause autism. Is autism caused by the MMR vaccine? While the answer is no, that has not stopped "experts" from claiming otherwise. The evidence first suggesting the link was published by Dr. Wakefield and his colleagues (1998) in The Lancet; the paper was partially retracted in 2004 and fully retracted in 2010.
|
| Feeding the controversy are events like a 2010 court award to Hannah Poling of more than $1.5 million in a vaccine-autism claim, which continues to foster beliefs that there is a government cover-up of a link between MMR and autism (CBS News 9/9/10). |
| The federal government's vaccine injury compensation program has also made numerous awards for vaccine-induced brain injuries that mimic symptoms of autism, feeding fears. See this controversial evaluation of the VICP by Holland, et al., published in the Pace Environmental Law Review in 2011. However, these payments are not for vaccines causing autism, and two of the authors represent clients with claims in the VICP on behalf of family members. See this interesting critique of the study on wired.com pointing out some of the flaws in their analysis.
|
| However, all of the best scientific evidence suggests that there is no link at all. Ironically, some of the evidence against a link comes from the large number of people who stopped using the MMR vaccine after the media reports surrounding Wakefield's 1998 Lancet study. If the vaccine caused autism, this drop in vaccination should have been followed by a measurable decrease in autism rates, but there is no evidence that any such decrease occurred. |
| Moreover, despite the claims paid out by the VICP, in March 2010 a federal court ruled, after 600 pages of findings, that the families of children diagnosed with autism are not entitled to compensation because there is insufficient evidence of a link between MMR and autism. |
| So why the increase in autism rates? In all likelihood it is due to changes in the definition of autism and better diagnostic tools that find more cases. For example, if there were really an increase, you would see higher rates in children than in adults; but as a recent UK study by Brugha, et al., published in the Archives of General Psychiatry (5/12), indicates, that isn't the case. It may also be due to government services and other policies that create incentives for more cases to be diagnosed. One new theory is even that it may be environmental and related to chemical exposure (CNN 6/11). |
| The one thing we do know is that if you weigh the relative risks according to the CDC, it is safe to vaccinate your children, and failing to do so places them and others at greater risk of illness or even death. |
|
|
As a result of this fear over the erroneous reports of a link between the MMR vaccine and autism, large numbers of people have chosen not to vaccinate their children. This has led to measles outbreaks in the U.S. and the U.K.
|
|
Being an effective program evaluator requires not only thinking about programs and their associated outputs and outcomes in a critical manner, but also being open to the sometimes counterintuitive findings that present themselves. For example, see this article in the MailOnline that describes the results of a study on the importance of eating more vegetables, something we all simply assume is true. But is it? See this rebuttal of the study's conclusions from January '11. Is it actually safer to be modestly overweight than underweight? See this report in JAMA on Obesity and Death (2005) and a related article about the CDC linking risk of death to obesity (2005). |
|
When in doubt, trust your gut instincts. Sometimes that is the best strategy for making decisions. See this interesting article from the U.K. Mail Online (8/11). |
|
While program evaluation is a useful analytical tool that can help shape policies and improve program administration, it can be very resource intensive. In some cases, it might even cost more to evaluate a program than is currently being spent on the program itself. Thus, when tailoring an evaluation, some consideration should be given to whether the potential value of the information obtained is worth the cost. For example, was it really worth the cost of the LA Nutrition Study Evaluation (Newsweek - 8/09) to find out that kids like the taste of fruit? |
|
It is also important to remember that evaluation and program monitoring can create incentives to cheat. This is particularly evident in education reform efforts. See this USA Today investigative report, Testing the System (Mar. '11), this set of articles on a widespread scandal in Atlanta (7/11), and this story on problems in South Carolina (2008). |
|
It is also important to remember that an evaluation is just one type of information used by policymakers; sometimes programs are effective yet are rejected anyway. For example, see this interesting article about the politics surrounding red light cameras at intersections (MSNBC 6/24/11). In other instances, evaluations produce information that suggests programs are relatively ineffective, but politicians choose to keep funding them. For example, D.A.R.E. is a relatively popular program with the public and politicians despite the fact that there is little evidence the program achieves its desired outcomes (see RES #9 and RES #10). |
|
|
Lecture Notes |
|
|
Web Resources |
Professional Societies for Evaluators |
|
|
Organizations and Think Tanks Involved in Evaluation Research |
|
Mathematica Policy Research, Inc. |
| Rand Corporation |
| Urban Institute |
| Brookings Institution |
| American Enterprise Institute |
| Heritage Foundation |
| Hudson Institute |
| Hoover Institution |
| Cato Institute |
| Resources for the Future |
| John Locke Foundation |
| National League of Cities |
| National Association of Counties (NACo) |
| National Governors Association |
| Abt Associates |
| MDRC (nonprofit education and social policy research organization) |
| Pew Charitable Trusts |
| United Way of America, Outcome Measurement Resource Network |
| U.S. Government Accountability Office |
| U.S. Department of Health and Human Services, Agency for Healthcare Research and Quality |
| DOE, Office of Energy Efficiency and Renewable Energy, Program Evaluation |
| DOJ, Bureau of Justice Assistance, Center for Program Evaluation |
| Centers for Disease Control and Prevention, Evaluation Working Group |
| UCLA Center for Health Policy Research |
|
|
Evaluation Journals |
| American Journal of Evaluation |
| New Directions for Program Evaluation |
| Evaluation Review (formerly Evaluation Quarterly) |
| Evaluation Studies Review Annual |
| Evaluation Practice |
| Evaluation and Program Planning |
| Journal of Policy Analysis and Management |
| Evaluation in Education |
| Evaluation and Human Services |
| Evaluation and the Health Professions |
|
|
General Sources for Evaluation Studies |
|
|