Assessment plans, defined by each academic department and administrative unit of the institution, include the elements described in the sections below.

Assessment plans may vary significantly in content and scope; however, to ensure that they can be easily shared among all stakeholders, in January 2015 we adopted a standard template (restricted access) for reporting assessment plans as well as assessment implementations.

Mission, Objectives, and Learning Outcomes 

Assessment of institutional effectiveness and student learning at all levels of the University is guided by clearly identified objectives and/or learning outcomes (LOs) that aim to implement the mission of the unit. This page explains how these are defined at different levels of the institution and how they interact.

The mission statement of a unit/department or program concisely describes its purpose and values and its relation to the institutional mission.

The figure below shows how objectives and LOs are related at different levels.


Institutional-level objectives and LOs are derived directly from the University Mission and Strategic Plan and form the basis for their assessment. At the unit/program level, objectives and LOs are derived from the unit/program mission and from institutional-level objectives and LOs. This process continues similarly at lower levels. In the figure, arrows pointing downward show that objectives and LOs at higher levels are used to define objectives and LOs at lower levels, i.e. assessment planning. Arrows pointing upward show the path of assessment execution. Planning and assessment, however, can be seen and implemented as a mixture of top-down and bottom-up processes.[1]

Note that for most institutions, assessment plans do not contain a “learning unit” level. A learning unit is seen here as a learning activity that may or may not be integrated into a course. At AUP, learning units are often shared by several courses and may include, for example, study trips, class visits, seminars, and special projects.

Our current assessment process does not directly assess learning units except as part of a specific course. Because these types of activities are so fundamental to the AUP curriculum, one of our objectives for the coming years is to define an appropriate assessment methodology for different types of learning units.
 

  • Institutional Objectives and Learning Outcomes

The definition of Objectives and Learning Outcomes at the institutional level is part of the Strategic Planning process undertaken regularly by the President, the Leadership Team and the Board in an interactive exchange with Faculty and Staff. This process is currently based on a set of key progress indicators followed at regular Board meetings.  It will be strengthened in the future through the definition and regular measurement of indicators directly related to the achievement of Institutional Objectives and Institutional Learning Outcomes.

The Strategic Plan 2015-2020 defines a new set of institutional-level priorities:

  • 5 year Institutional Level Priorities
  • Institutional Learning Outcomes
  • 5 year Institutional Mission

  • Objectives and Learning Outcomes

Each administrative unit defines its own objectives, and each academic program defines its own LOs. Note that the choice of using “Objectives” for administrative units and “Learning Outcomes” for academic programs stemmed from the need to align with previous practices at the University while keeping the assessment process simple; in the future we would like to make the contribution of all types of units to learning outcomes more explicit. Objectives and LOs must be measurable, aligned with those defined at the institutional level (meaning that lower-level objectives and LOs should contribute to the achievement of higher-level ones), and agreed upon by all direct stakeholders. As part of the assessment-planning-implementation cycle, objectives and LOs may be reviewed (see our recommendations on the definition of objectives and learning outcomes).

Measuring Achievement of Objectives and Learning Outcomes

In assessment plans, two types of measures, which we call immediate[2] measures and contiguity measures, evaluate the achievement of objectives and LOs. Immediate measures consider the achievement of an objective or LO as an atomic item. For example, a French language program may expect its senior students to have a certain level of oral competency and establish an exit test to measure whether that level has been acquired. Contiguity measures, instead, are based on the assumption that the achievement of an objective or LO depends on the achievement of all the lower-level objectives or LOs contributing to its realization; in the case of the French language program, for example, the achievement of oral competency would be demonstrated by showing that the LOs of the courses designed to build that competency are achieved. In general, both types of measures should be used: while immediate measures are a better “proof” of achievement, contiguity measures are better suited to explaining why a failure may occur. The distinction is similar to that between summative and formative evaluations; however, the summative/formative dichotomy emphasizes the timing of the evaluation (after performance and during performance, respectively), whereas the immediate/contiguity distinction emphasizes assessment of the whole versus assessment of the parts. Similarly, the direct/indirect distinction emphasizes actual versus perceived achievement rather than whole versus parts.
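As an illustration only, the minimal Python sketch below models the distinction between the two kinds of measures; the LearningOutcome structure, the pass threshold, and the scores are hypothetical and are not part of the AUP assessment templates.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LearningOutcome:
    name: str
    # Immediate measure: a single score for the outcome taken as a whole,
    # e.g. the result of an exit test of oral competency (0.0 - 1.0).
    immediate_score: Optional[float] = None
    # Lower-level outcomes (e.g. course LOs) that contribute to this one.
    contributing: List["LearningOutcome"] = field(default_factory=list)

THRESHOLD = 0.7  # hypothetical target; each unit would set its own

def immediate_measure(lo: LearningOutcome) -> Optional[bool]:
    """Treat the outcome as an atomic item: achieved if its own score meets the target."""
    if lo.immediate_score is None:
        return None
    return lo.immediate_score >= THRESHOLD

def contiguity_measure(lo: LearningOutcome) -> bool:
    """Achieved only if every contributing lower-level outcome is achieved."""
    if not lo.contributing:
        # Leaf outcome: fall back to its immediate measure.
        return bool(immediate_measure(lo))
    return all(contiguity_measure(child) for child in lo.contributing)

# Hypothetical French-program example: program-level oral competency
# supported by two course-level outcomes.
oral = LearningOutcome(
    "Oral competency (program level)",
    immediate_score=0.82,  # exit-test result
    contributing=[
        LearningOutcome("FR101 oral participation", immediate_score=0.75),
        LearningOutcome("FR201 oral presentation", immediate_score=0.55),
    ],
)

print(immediate_measure(oral))   # True  -> the exit test shows the competency is achieved
print(contiguity_measure(oral))  # False -> a contributing course LO shows where a gap may lie
```

The two results can disagree, which is precisely why both types of measures are recommended above: the immediate measure is the stronger proof of overall achievement, while the contiguity measure points to the contributing outcome where a failure may originate.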

The assessment methodology for each objective of an administrative unit and for each LO of an academic program is specified, respectively, in the “Objectives” and “Learning Outcomes” tables of the Assessment Plan Template (restricted access).

A further set of measures, currently not fully integrated into the assessment process, relates to environmental factors. Environmental factors impacting student learning include, for example, the changing expectations of students, their evolving level of knowledge of certain subjects, and their access to new modes of knowledge and communication. Environmental factors impacting institutional effectiveness include, for example, the number and quality of institutions offering degrees similar to ours and the evolving availability of digital tools to support administrative tasks. One of our objectives for the next five years is the gradual integration of environmental measures into the assessment process.

The Office of Assessment, Learning and Institutional Research provides support for assessment-related measurements. The objective is to move from an ad-hoc service to units and departments towards a more structured service based on customized online dashboards reporting real-time (or regularly updated) information about the activities and status of units, departments, and the faculty, staff, and students taking part in them. The identification of relevant indicators is, and will continue to be, done in collaboration with the information stakeholders.

Institutional Performance Indicators

Several indicators are currently used to assess the achievement of institutional-level objectives. These include, for example: ten-year comparative tables measuring admission of new students and enrollment, graduate and undergraduate acceptance and yield rates (total and by applicant category), entering students by applicant category (degree-seeking, visitors, graduate/undergraduate, transfer), enrollment by category, retention rates, FIT analysis, and student nationality analysis (including representation of US citizens). Several surveys also contribute to the analysis of institutional performance. These include student surveys (e.g. satisfaction survey, incoming class survey, advising survey, exit survey), alumni destination surveys, periodic faculty and staff satisfaction surveys, and periodic Board surveys. All of these are available (mostly to AUP constituencies) on the Institutional Research pages of this site.
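For reference, the acceptance and yield rates mentioned above are shown here using their standard definitions; the applicant counts in this sketch are hypothetical and do not reflect AUP data or its exact reporting categories.

```python
# Hypothetical applicant counts, for illustration only.
applicants = 1200   # completed applications received
admitted = 780      # applicants offered admission
enrolled = 310      # admitted applicants who enrolled

acceptance_rate = admitted / applicants   # share of applicants offered admission
yield_rate = enrolled / admitted          # share of admitted applicants who enroll

print(f"Acceptance rate: {acceptance_rate:.1%}")  # 65.0%
print(f"Yield rate: {yield_rate:.1%}")            # 39.7%
```

In practice, the comparative tables described above break these rates down by applicant category (graduate/undergraduate, transfer, visitors) rather than reporting a single institution-wide figure.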

One of the main objectives for future development is to strengthen the assessment input into the Strategic Planning process through the identification and regular provision of key performance indicators enabling both immediate and contiguity measurement of the achievement of Institutional Objectives and Learning Outcomes.

Measuring the Achievement of Objectives and Learning Outcomes

Each university unit defines the most appropriate measures and target results for its own objectives and LOs. Direct evidence, i.e. evidence that looks at products such as student work or services provided, should be prioritized; however, indirect evidence, i.e. evidence of how something is perceived or received, may also be included and may provide valuable information about how students (or other stakeholders) perceive the level of success of the unit or program. Both immediate and contiguity measures contribute to the analysis. Any instrument supporting the measurement methodologies used by the units or programs, such as scoring rubrics, instructions for portfolio creation, and qualifying or comprehensive examinations, should be described or included in the assessment plan. See more details and recommendations on measurements of objectives and learning outcomes.

[1] For contingent reasons, for example, a department (in agreement with the Provost) may decide to offer a course that is not aligned with a program’s LOs, and the subsequent success of the course may induce a change in the program LOs. Further, interaction also happens across levels; administrative units, for example, often derive their objectives directly from the specific needs of the academic programs they support (horizontal arrows).

[2] The word “direct” would probably have been more appropriate here, but “immediate” is used to avoid confusion with direct/indirect evidence as defined above and widely used in the assessment literature.