Evaluating outcomes demonstrates value of education


By Barbara A. Brunt, MA, MN, RN-BC, NE-BC

Editor’s note: Brunt is director of nursing education and staff development at Summa Health System, Akron, OH.

After reading this article, you will be able to:

  • Describe strategies to evaluate program outcomes

Nursing professional development is a specialized nursing practice that promotes the professional development of nurses by facilitating their participation in lifelong learning activities. Such activities are designed to enhance professional competence and role performance, with the ultimate outcomes of protecting the public and providing safe, quality care (American Nurses Association [ANA], 2010).

Outcome evaluation is essential in today's environment: consumers and regulatory agencies alike demand accountability and documentation of the results of educational endeavors.

In evaluating our work, staff development personnel frequently use Kirkpatrick's (1996) first two levels of evaluation: reaction (the "happiness index") and learning. Kirkpatrick's third and fourth levels, behavior change and results (program impact), are more difficult and time-consuming to implement, but they are worth the effort. Return on investment (ROI) is often considered a fifth level of evaluation. This article focuses on strategies for the higher levels of evaluation, including outcomes.

There are two standards in the newly revised Nursing Professional Development Standards of Practice (ANA, 2010) that deal directly with outcomes. Standard 3, outcomes identification, states: “The nursing professional development specialist identifies desired outcomes” (p. 25), and standard 6, evaluation, states: “The nursing professional development specialist evaluates progress toward attainment of outcomes” (p. 31).

We must show that our educational programs make a difference in practice, but how do we do that?

Tie into the organization’s strategic initiatives and goals

Since you cannot use level 3 and 4 evaluation strategies with every program, look at what your organization is working on and plan educational activities to help meet those goals. As you are developing programs, plan for some follow-up to see whether there is a change in behavior or outcomes. You want to show the value of staff development in helping achieve the organization’s goals. You will get more bang for your buck if you focus your outcome evaluation on activities that support the strategic plan.

Involve learners and key stakeholders in formulating desired outcomes

Involving learners and stakeholders helps promote buy-in, and you can measure your success against expectations. Learners and stakeholders can brainstorm potential organizational outcomes of the learning activity, which clearly links the education with the outcome the learners are trying to achieve. The staff development specialist can focus on outcomes that are relevant for the learners and relatively easy to measure.

For example, if one of the organization’s goals is for managers to conduct more effective meetings, the first step in the process would be to conduct a 15- to 20-minute interview with a key executive, often the administrator who is financially accountable for the project, prior to the beginning of training. Based on that person’s expectations (e.g., that employees “waste less time and have fewer meetings”), specific learning objectives and outcomes are established.

Outcomes could include reducing meeting length or frequency, providing better follow-up from meetings, ensuring that appropriate participants attend, and using a decision matrix to ensure better decisions and stronger commitment. Content would include strategies for running effective meetings, as well as practice preparing and evaluating results-oriented and measurable objectives for a meeting. Once the training is complete, the same person would be interviewed again and asked to quantify the results of the training. This can then be used as reasonable evidence of success, in addition to hard data quantifying the length and frequency of meetings and projected cost savings.
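Quantifying the interview results can be as simple as costing out meeting time before and after training. The sketch below uses entirely hypothetical figures (meeting counts, attendee numbers, and hourly rate are illustrative assumptions, not data from the article); substitute your organization's actual interview and tracking data.

```python
# All figures below are hypothetical, for illustration only.
def monthly_meeting_cost(meetings_per_month, avg_hours, attendees, hourly_rate):
    """Cost of meeting time per month for one management group."""
    return meetings_per_month * avg_hours * attendees * hourly_rate

# Before training: 20 meetings/month, 1.5 hours each, 8 attendees at $50/hour
before = monthly_meeting_cost(meetings_per_month=20, avg_hours=1.5,
                              attendees=8, hourly_rate=50)
# After training: fewer, shorter meetings with the same group
after = monthly_meeting_cost(meetings_per_month=14, avg_hours=1.0,
                             attendees=8, hourly_rate=50)

monthly_savings = before - after
print(f"Projected monthly savings: ${monthly_savings:,.0f}")
```

Pairing a calculation like this with the executive's post-training interview gives both the hard data and the "reasonable evidence of success" described above.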

If another group is already collecting data, take advantage

Falls and falls with injury are indicators in the National Database of Nursing Quality Indicators (NDNQI), so many organizations already collect these data. If you provide a series of continuing education programs on mobility and fall-prevention strategies, you can compare fall data before and after the classes to see whether there is a reduction. The organization can also estimate the additional costs incurred from falls, allowing any reduction to be quantified as cost savings.
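The before-and-after comparison is typically expressed as falls per 1,000 patient days, the rate NDNQI reports. A minimal sketch, using made-up unit numbers and an assumed cost per fall (ask your finance department for your organization's actual estimate):

```python
# Illustrative figures only; substitute your unit's actual fall counts,
# patient days, and your organization's cost-per-fall estimate.
def falls_per_1000_patient_days(falls, patient_days):
    """Standard fall rate: falls per 1,000 patient days."""
    return falls / patient_days * 1000

pre_rate = falls_per_1000_patient_days(falls=18, patient_days=4500)   # before classes
post_rate = falls_per_1000_patient_days(falls=11, patient_days=4600)  # after classes
reduction_pct = (pre_rate - post_rate) / pre_rate * 100

cost_per_fall = 6000                            # assumed average added cost per fall
projected_savings = (18 - 11) * cost_per_fall   # falls avoided over the period
```

Reporting the rate change alongside projected savings ties the educational series directly to a quality indicator leadership already tracks.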

Look to research or clinical ladder programs to help collect outcome data

Clinical ladder programs may include a care outcomes project for a specific unit. Nurses would assess or evaluate a clinical issue or activity based on department goals and develop an action plan for improvement. They would then use the plan-do-check-act methodology to determine success in the implementation of the action steps. Clinical ladder programs can also include research projects, or individuals may conduct research studies in an area of interest. If there are staff development educators interested in research, they can develop and implement outcome evaluation studies.

One research study by Wong, Chan, and Chair (2010) examined the effectiveness of a pain management educational intervention on pain, anxiety, and self-efficacy among patients with musculoskeletal trauma undergoing subsequent orthopedic surgery. A pre- and posttest quasi-experimental design was used, with 125 patients assigned to a control group (usual care) or an experimental group (usual care plus the educational intervention). The 30-minute intervention covered information on pain, coping strategies, and breathing relaxation exercises. Outcome measures (pain, anxiety, self-efficacy, analgesic use, and length of hospital stay) were assessed before surgery; on days two, four, and seven; and at one and three months after surgery. The experimental group reported significantly lower pain, less anxiety, and greater self-efficacy during hospitalization, and the effect on anxiety remained statistically significant at the three-month evaluation.

Ask participants what they will do differently as a result of the program

On the evaluation completed immediately after the session, ask participants to name one thing they will do differently as a result of the class. Although these are self-reported data, you can then send a survey to the participants 60 to 90 days later asking what they have incorporated into their practice. Comparing the immediate evaluations with the follow-up responses shows whether participants actually applied the information.
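The comparison itself is straightforward to tabulate. A toy sketch with invented participant IDs and responses (all names and answers here are hypothetical):

```python
# Toy data for illustration: each participant's stated change immediately
# after class, and what they reported on the 60-90 day follow-up survey.
immediate = {"RN_A": "use teach-back", "RN_B": "hourly rounding", "RN_C": "use teach-back"}
follow_up = {"RN_A": "use teach-back", "RN_B": "no change",       "RN_C": "use teach-back"}

# Participants whose follow-up report matches their stated intention
adopted = [p for p in immediate if follow_up.get(p) == immediate[p]]
adoption_rate = len(adopted) / len(immediate) * 100  # percent who followed through
```

Even a simple adoption rate like this moves the evaluation from level 1 (reaction) toward level 3 (behavior change).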

Use multiple data sources to document transfer of learning to practice and program impact

In addition to a post-program questionnaire, you can use 360-degree surveys or evaluations, interviews, documentation audits, tracking measures, and observation to document transfer of learning. For example, feedback could be solicited from managers about behavior changes seen in staff on the unit. Increases in quality, productivity, and patient satisfaction, as well as decreased turnover and length of stay, may be used to document program impact. In one case, projected savings from a series of initiatives included in an educational program on delirium were calculated from evidence of earlier detection and treatment of delirium, which reduced length of stay.

By documenting outcomes, including those that demonstrate learning and program impact, we can show our value to the organization and impact on quality of care.


American Nurses Association. (2010). Nursing professional development: Scope and standards of practice. Silver Spring, MD: Author.

Kirkpatrick, D.L. (1996). Revisiting Kirkpatrick's four-level model. Training and Development Journal, 32(9), 6–9.

Wong, E.M., Chan, S.W., & Chair, S.Y. (2010). Effectiveness of an educational intervention on levels of pain, anxiety and self-efficacy for patients with musculoskeletal trauma. Journal of Advanced Nursing, 66(5), 20–31.