Core Services

Program Evaluation

At Insight, we help our clients evaluate the effects and implications of their programs and policies. Our researchers analyze options that offer improved outcomes for programs, their stakeholders, and the public. Our evaluations help set priorities, avoid unintended consequences, and solve challenges; they are highly regarded for their analytic base, independence, and rigor. We have developed a wide range of reports to support clients in the fields of health, education, and social welfare.

  • Logic Models

    At the start of each project, we work closely with our clients to create or update a logic model—specifying how a program aims to achieve outcomes and defining the assumptions, goals, inputs, outputs, and expected outcomes.

  • Power Analysis

    When designing impact evaluations, we calculate the minimum number of cases needed to detect particular effect sizes, then work with clients to select an appropriate sample.

  • Formative Evaluations

    We often assess the feasibility of program components and even the feasibility of different types of evaluation. Clients use our feedback to monitor and adapt ongoing programs.

  • Process Evaluations

    We work with clients to understand how a program is being implemented across sites and populations, clarifying what is working, for whom, and why.

  • Implementation Fidelity

    Implementation fidelity is the degree to which an intervention is delivered as planned. We develop customized measures that describe variation in delivery and help explain differences in outcomes.

  • Randomized Controlled Trials

    Insight is experienced in designing, recruiting for, and implementing randomized controlled trials, or impact evaluations. Considered the gold standard of evaluation methodologies, random assignment of subjects to treatment or control groups limits bias and yields valid estimates of program or policy impact.

  • Quasi-Experimental Design

    Randomly assigning treatment and control groups is not always feasible. In those cases, we work with clients to design alternatives—such as propensity score matching—to create treatment and comparison groups that still provide valid estimates.

  • Regression Discontinuity Design

    Our clients often need an impact evaluation performed after a program or policy has been implemented. When detailed pretest and posttest data are available—along with a clear cutoff point for assigning treatment (who received the program)—we use regression discontinuity designs.
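The power analysis described above can be illustrated with a minimal sketch. The function below uses the standard normal-approximation formula for a two-sided, two-sample comparison of means; the function name and default values are ours, chosen for illustration, and do not represent Insight's actual tooling.

```python
import math
from statistics import NormalDist

def min_n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Minimum cases per group needed to detect a standardized effect size
    (Cohen's d) in a two-sided, two-sample comparison of means, using the
    normal approximation: n = 2 * ((z_alpha + z_beta) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n)

# A "medium" effect (d = 0.5) at 80% power and alpha = .05
# requires roughly 63 cases per group under this approximation.
print(min_n_per_group(0.5))
```

Smaller effect sizes drive the required sample up quickly, which is why the calculation is done before sample selection rather than after.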

Featured Projects


Insight conducted qualitative research to evaluate a pilot of the mobile health program Text4baby, assessing the program’s effect on enrollment in Medicaid and the Children’s Health Insurance Program (CHIP) as well as on health knowledge, health behaviors, and health care engagement. Insight conducted in-depth interviews and focus groups with key stakeholders and pilot participants to determine the effectiveness of the pilot and the potential to implement mobile health messaging programs with similar populations.

Formative Evaluation of the Northeast Tennessee College and Career Readiness Consortium for the Niswonger Foundation

While implementing a 5-year Investing in Innovation validation grant, the consortium needed near-real-time feedback on the programs it was developing and piloting across schools. Insight analyzed extant data on student course enrollments and completions to provide feedback on the growth of, and student success in, Advanced Placement courses and developmental college math courses taken before high school graduation.

Regression Discontinuity Design Evaluation of the National Oceanic and Atmospheric Administration’s (NOAA) Office of Education Scholarship Programs

NOAA’s Office of Education wanted to know how scholarship activities have translated to measurable outcomes among participants. Insight designed an impact analysis comparing scholars’ relevant academic and career outcomes with those of similar students. Because comparing alumni outcomes to national statistics would not have answered NOAA’s questions, we used a regression discontinuity design to estimate program outcomes and impacts. This approach enabled us to compare outcomes for scholarship recipients with those for individuals who applied for—but did not receive—these awards.
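The logic of a regression discontinuity comparison can be sketched with simulated data; the cutoff, effect size, and variable names below are illustrative only and do not reflect NOAA's actual data or Insight's analysis. The idea is to fit a separate regression line on each side of the award cutoff and take the gap between the two fitted lines at the cutoff as the impact estimate.

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for a simple linear regression y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Simulate applicants rated on a score; awards go to scores >= 0 (the cutoff).
# The true program effect built into the simulation is 2.0.
random.seed(0)
scores = [random.uniform(-1, 1) for _ in range(400)]
outcomes = [1.0 * s + (2.0 if s >= 0 else 0.0) + random.gauss(0, 0.5) for s in scores]

left = [(s, y) for s, y in zip(scores, outcomes) if s < 0]    # non-recipients
right = [(s, y) for s, y in zip(scores, outcomes) if s >= 0]  # recipients

a_left, _ = fit_line(*zip(*left))    # fitted non-recipient level at the cutoff
a_right, _ = fit_line(*zip(*right))  # fitted recipient level at the cutoff

effect = a_right - a_left  # estimated impact at the cutoff
```

Because applicants just above and just below the cutoff are similar in every respect except receipt of the award, the jump at the cutoff is a credible estimate of the program's impact.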
