An eLearning Program Evaluation Plan. Sounds like normal edu-speak, right? I know I’ve heard similar phrases for years. However, I’ve never been tasked with contributing to an evaluation plan, much less creating one. Well, that is, until yesterday. As part of our agency initiative, we promised the TAMUS Chancellor that we’d have an evaluation plan developed by 02/01/08. Oops! Our big boss was made aware of the oversight and notified my boss. And of course, yours truly spent the day avoiding a panic attack and researching. My favorite conversation of the day…

Me: Ok, so I’m looking over the objective and performance measure we have for the evaluation plan and I want to see if I’m even close to the ballpark on this one.
Mgmt: Well, the first measure is specific data.
Me: So, the post-course reports I’ll generate that contain participant feedback?
Mgmt: No. That’s too detailed.
Me: Ok, so broaden the scope and back up.
Mgmt: Yes, work from the bottom up: What will the annual report look like? What process will be used to analyze the data? Where is the data coming from? What data are we going to look at?
Me: That’s where I’m looking for guidance. What is it you need to see?
Mgmt: That’s for you to determine. First identify the what.
Me walking away and thinking to myself: Duh, I got that part. But my psychic powers are apparently failing me. I’m not the stakeholder in this. I don’t know what you want to see if you don’t tell me.

And so my afternoon of typing, deleting, typing, deleting, and typing again began. I would get two or three pages into an outline of thoughts and questions and think, “no, I’m not even in the parking lot of the ballpark, much less left field.” *sigh* It was a tad frustrating. I started out on elearnspace’s Program Evaluation page. I gotta admit, the resources they provide are what got me through the day with a little sanity intact. The very last link appears to be a quite outdated page by Dr. Thomas Cyrs. However, it proved to be most useful in getting the ball rolling for me. His questions are what really helped me understand where I was and where I was trying to go…

  1. What is the purpose of the evaluation?
    1. Who are the stakeholders that need to know the outcomes of the program?
    2. What needs to be known? What is the purpose of the evaluative data?
    3. Why do these stakeholders need to know?
    4. When do the stakeholders need to know?
    5. How should the data be presented?
    6. Is the evaluation design empirical or anecdotal?
    7. How often do the stakeholders need the data?
    8. How will the data be used?
  2. What needs to be evaluated?
    1. Agency capability to develop and distribute quality eLearning
    2. Division eLearning strategies
    3. Agency resources to support eLearning curriculum development and delivery
      1. Personnel resources
      2. IT infrastructure
    4. Curriculum Development Model
      1. Course development time
      2. Course revision time
    5. eLearning authoring tools available to divisions
    6. Number of eLearning courses distributed by divisions
    7. Number of participants taking eLearning courses distributed by divisions
      1. Participant satisfaction
      2. Participant pass/fail rate
    8. Division marketing of eLearning programs and curriculum
    9. HR use of agency LMS
  3. What sources of information will be used for the evaluation?
    1. Interviews of stakeholders
    2. Project timelines
    3. LMS reports (see the sketch after this outline)
      1. Number of courses
      2. Number of participants
      3. Exam results
      4. Participant feedback surveys
  4. What form of evaluation will be used?
    1. A summative evaluation will examine:
      1. Stakeholder attitudes
      2. Empirical and anecdotal data from timelines and reports
      3. Expected outcomes
        1. Strengths and weaknesses
        2. Proposed changes
        3. Benefits
  5. What will the Annual Evaluation Report look like?
    1. Four major sections
      1. Background
        1. Executive summary
        2. Purpose of the report
        3. Program background
        4. Major milestones
      2. Evaluation Analysis
        1. Strengths
        2. Weaknesses
        3. Opportunities
        4. Threats
      3. Recommendations
        1. Priorities for improvement
        2. Necessary resources
        3. Necessary actions
      4. Appendices
        1. Interviews
        2. Reports
        3. Timelines
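
Since the LMS reports are where most of the hard numbers will come from, I roughed out what crunching one might look like, mostly to convince myself the data questions above are answerable. This is only a sketch against a hypothetical export; the file name, the columns, and the 1–5 satisfaction scale are all made up, and our actual LMS almost certainly exports something different:

```python
import csv

# Hypothetical LMS export: one row per course completion.
# Made-up columns for illustration: course_id, participant_id,
# exam_result ("pass" or "fail"), satisfaction (1-5 survey score,
# possibly blank if the participant skipped the survey).
def summarize(path):
    courses = set()
    participants = set()
    passes = fails = 0
    satisfaction_total = 0.0
    satisfaction_count = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            courses.add(row["course_id"])
            participants.add(row["participant_id"])
            if row["exam_result"] == "pass":
                passes += 1
            else:
                fails += 1
            if row["satisfaction"]:
                satisfaction_total += float(row["satisfaction"])
                satisfaction_count += 1
    total = passes + fails
    return {
        "courses": len(courses),
        "participants": len(participants),
        "pass_rate": passes / total if total else 0.0,
        "avg_satisfaction": (satisfaction_total / satisfaction_count
                             if satisfaction_count else None),
    }

print(summarize("lms_export.csv"))
```

If nothing else, writing it out confirmed that the LMS report items in the outline (course counts, participant counts, exam results, and satisfaction surveys) boil down to a handful of simple aggregates for the annual report.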

Of course, that was all yesterday. And here it is, 23 hours after submitting the plan for review, and I’ve heard nothing. Don’t you love putting out what appear to be imaginary fires? I should’ve brought some marshmallows and graham crackers…