Institutional Research and Assessment


Annual Assessment Process

Academic programs are expected to document their assessment activities each year by participating in an annual assessment process.

An online template was developed in 2013 to assist programs in documenting their assessment in a way that meets institutional expectations.


The online template allows programs to document:

    1. Basic program information
        • Name of the department where the program is housed
        • Name of the major or degree program
        • Name of the Chair of the Department or Program Director
        • Name of an individual within the program who is willing to serve as the assessment contact
        • Date of the program's next EPC program review
        • Name of the program's external accrediting body, if applicable

    2. Program assessment plan
        • Student learning outcomes
        • Assessment tools and methods used to assess each outcome
        • Methods used to ensure the quality of those assessment tools and methods
        • Identification of who will be assessed using each tool or method
        • Logistics
        • A schedule of when each assessment tool will be administered next
        • (Optional) Criteria for determining whether assessment results met faculty expectations

    3. Program curriculum map

    4. Results from program assessment activities

Evaluation of the Annual Assessment Process

The Assessment & Evaluation Committee reviews annual assessment forms throughout the academic year and provides feedback to faculty. To guide this review, the committee developed a rubric that documents institutional expectations for assessment in the following areas:

    1. The assessment model
    2. Student learning outcomes
    3. Number and type of assessment tools or methods used
    4. Quality of assessment tools and measures used
    5. The schedule of assessment
    6. Documented results of assessment activities

By the end of the academic year, the Assessment & Evaluation Committee summarizes its evaluations of the annual assessment forms and provides a "state of assessment report" to the Vice President of Academic & Student Affairs.

Assessment Expectations

Every degree or major program at St. Ambrose is expected to participate fully in the annual assessment process. This expectation is supported and enforced by the Educational Policies Committee during each program's annual review process.

While each academic program is free to choose the most appropriate, useful, and effective methods for assessing its student learning outcomes, the following expectations make it possible to evaluate assessment activities consistently across programs.

Expectations for Assessment Models

All academic programs are expected to document assessment models that are logical, feasible, and will yield useful information. Assessment models should assess not only the level of mastery attained by students nearing the end of the program, but also the growth in student performance throughout the program.

Assessment models should also assess the degree to which program activities (courses, faculty, student opportunities) contribute to student learning. One way of documenting this contribution is through the creation of a curriculum map. The minimum expectation is that programs display how each course in the program contributes to each student learning outcome in the program. Some programs develop more detailed curriculum maps that also show how courses contribute to the progression of student performance in each outcome. 

Assessment models are also expected to demonstrate how all faculty contribute to the assessment process.

Expectations for Student Learning Outcomes

For many years, all academic departments at St. Ambrose have been expected to have documented student learning outcomes. Departments were supported in meeting this expectation by the University Assessment Coordinator, through individual consultations and workshops (such as the 2006 workshop on developing high-quality outcomes).

In reviewing these outcomes, it became apparent that while departments had outcomes, not all academic programs had documented student learning outcomes (SLOs). Many departments documented a single set of outcomes even though the department housed multiple major or degree programs.

Beginning in 2013-14, the annual assessment process was updated to require high-quality SLOs for all major and degree programs. Student learning outcomes are high quality if they are:

    1. Clearly stated (understandable to more than just experts in the discipline)
    2. Student-focused (not stated in terms of what the course instructor attempts to do)
    3. Specific (not vague)
    4. Statements of the knowledge, skills, and/or attitudes expected of students (not statements about processes)
    5. Appropriate for the level of the program (not too simple or complex for the undergraduate or graduate program)

Programs are encouraged to review SLOs developed by professional organizations or similar programs at other universities. To assist in determining if outcomes are appropriate for the level of the program, faculty are encouraged to consult the Degree Qualifications Profile developed by the Lumina Foundation and the Task-Oriented Question Construction Wheel Based on Bloom's Taxonomy.

Expectations for the Quantity, Quality, Type, and Frequency of Assessment

Because assessment instruments differ in quality and scope, a strict number of instruments needed to adequately assess program SLOs cannot be mandated across all academic programs. Programs are encouraged to assess each SLO using as many instruments as they need to confidently (reliably) make inferences about student achievement. At a minimum, programs are expected to assess each outcome using results from at least two instruments.

To ensure that inferences made from assessment data are valid, programs are expected to document and evaluate the quality of the instruments they use to assess each SLO. Because this evaluation requires considerable time and resources, information from test developers or external researchers should be used as evidence of instrument quality whenever it is available. When such information is not available (as with internally developed assessments), programs should develop plans to collect evidence of the quality of their chosen instruments. When using internally developed measures, programs are expected to take some basic steps to ensure that inferences made from these assessments are valid:

    1. Consult with other faculty within the program to ensure instruments align with the intended outcomes (each measure actually assesses something relevant to the outcome).
    2. When student performance is evaluated across different courses or instructors, faculty should work to locate or develop a common rubric to ensure consistency in ratings.
    3. When feasible, programs should use multiple faculty to evaluate (at least a sample of) student performance.
    4. When possible, programs should use an externally-benchmarked instrument.

Assessments are often classified into many different dichotomies (direct/indirect; formative/summative; objective/subjective; criterion-/norm-referenced; formal/informal; performance/written; standardized/classroom; selected-/constructed-response; internal/external), with claims made that certain types of assessment are inherently superior to other types. Programs are encouraged to remain flexible in choosing assessment procedures/instruments.

The following guidelines are intended to assist programs in choosing the types of assessment that best measure student performance:

    1. Assessment instruments with documented evidence of quality are preferred to instruments with little/no available evidence of quality.
    2. Externally-benchmarked assessments should be used whenever possible to allow comparisons of student performance to external norms or criteria.
    3. Programs are expected to assess each SLO using information from at least one direct measure of student performance. This information may be supplemented by indirect measures.

While indirect measures do not provide valid evidence that SLOs have been achieved, they do provide useful information regarding student perceptions, satisfaction, and engagement. This information is important to collect, analyze, and use, especially with regard to institutional student engagement goals.

Course grades typically represent many factors outside any one particular SLO. Because of this, course grades and student GPAs are not recommended as measures of student performance on programmatic SLOs. Programs may use course grades if they can document evidence that course grades do represent student performance on a particular SLO (and do not reflect many other irrelevant factors). This could be the case if a course uses standards-based assessment and grading.

Most academic program SLOs are statements of expectations for students who complete the program. Therefore, assessing student learning outcomes once -- near the end of the program -- could be used to determine the level at which students attained each outcome.

Even though students may not be able to meet intended outcomes until graduation, it is important to continually monitor student progress. Therefore, programs are encouraged to assess student learning outcomes multiple times throughout a student's career. Programs could assess students at a baseline level (close to the start of the program), developmental level (at a midpoint of the program), and mastery level (close to program completion) to help gauge program effectiveness. Additionally, programs should strive to assess the satisfaction, performance, and status of their alumni.

Expectations for the Documentation of Assessment Results

Programs are encouraged to document and report assessment results in a format that best serves the needs of the program. At a minimum, programs are expected to report participation rates alongside the results. Programs should also provide a brief explanation of how assessment results compare to expectations of faculty in the program.

Programs are expected to report results from the assessment of at least one SLO every year. Over the course of five years, programs are expected to report results from the assessment of all their SLOs.