Evaluation

*Please view our new series of blogs, which walk through the stages of evaluation set out in the OfS Evaluation Framework*

Principles of Evaluation

Evaluative principles can be applied to many areas of higher education – consider the evidence needed for the continuous improvement of teaching quality, Teaching Excellence Framework submissions and Research Excellence Framework impact case studies.


Parsons (2017) identified five key steps for effective evaluation research design, which STEER promote for all evaluative work. These are:

  • Evaluation research is not incremental. It requires clarity of expectations and needs before all else
  • One size does not fit all. Good evaluation research design is always customised to some extent
  • Effective design is not just about methods. It needs to combine technical choices (contextualised, fit for purpose, robust) with political context (so it is understood, credible and practical)
  • Method choices are led by a primary dichotomy: measurement (how much) versus understanding (how and why). A chosen method can do both, if required
  • Good design is always proportionate to needs, circumstance and resources

Evaluation should always begin at the point of activity design and run through the life of a project:

Activity/Intervention (Programme) Design: evidence-informed approaches to activity design include a clear rationale; measurement; definitions of success; logic chains and outcomes-focused evidence; and critical review (a hypothetical logic chain is sketched after these four stages).

Evaluation Design: the design of the evaluation should be based on how you will evidence impact – consider the OfS standards of evidence below to see what type of evaluation is most appropriate for your context.

Evaluation Implementation: the practicalities of data collection include a critical appraisal of the types of methods used, resourcing, and risk management; together these ensure that any evaluation is designed proportionately and effectively.

Evaluation Learning: interpreting results and making the best use of evaluation findings, including writing for different audiences and weighing the benefits and challenges of sharing results internally with institutional colleagues and leaders, and externally with the wider higher education sector.
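To make the design stage concrete, the sketch below shows one way a logic chain could be written down before delivery begins. It is a purely hypothetical illustration, not a STEER template: the scheme, measures and data sources are invented. The point is that each intended outcome is paired with a definition of success and an evidence source at the design stage.

```python
# Hypothetical logic chain for an invented peer-mentoring scheme.
# Not a STEER template; all names and measures are illustrative only.
logic_chain = {
    "rationale": "Attainment gap observed for commuter students",
    "activity": "Peer mentoring for first-year commuter students",
    "outputs": ["Number of mentoring sessions delivered"],
    "outcomes": [
        {
            "outcome": "Improved sense of belonging",
            "success": "Belonging survey score rises term on term",
            "evidence": "Pre/post student survey",
        },
        {
            "outcome": "Improved module attainment",
            "success": "Mean module mark for participants improves",
            "evidence": "Student records data",
        },
    ],
}

# A design-stage review makes gaps visible: an outcome with no
# evidence source cannot be evaluated once the activity has run.
for item in logic_chain["outcomes"]:
    assert item["evidence"], f"No evidence source for: {item['outcome']}"
```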


Building an Evaluative Mindset

Sheffield Hallam is committed to continuous enhancement, and with that comes a responsibility to build an evaluative mindset across the institution. This ensures that all activities and interventions are evaluated (for process, impact or economics) and that the results support evidence-informed decision making.

Specifically for under-represented groups, our Access and Participation Plan 2019-20 outlines STEER's input. This includes learning from the Class of 2020 longitudinal initiative as an effective process for engaging with increasingly diverse student cohorts, and scaling up innovative methodologies that examine differential outcomes, for example through the expansion of our Digital Storytelling and Listening Rooms initiatives to engage hard-to-reach students and capture student voices, especially those which challenge the norm or prevailing culture.

The Access and Participation Plan 2020-24 has a more detailed and robust evaluation strategy. STEER's input will include:

  • Continuing to support staff working on strategic measure programmes with iterative and systematic evidence-informed programme design and implementation.
  • Delivering training and development sessions to build staff capacity and capability concerning methodology and examples of ‘what works’.
  • Designing and implementing a new process of ethical approval to allow institutional service evaluations to be shared externally and inform ‘what works’ evidence bases.
  • Creating and piloting an evaluation repository to communicate evaluative outputs and raise awareness of evaluative practices.

STEER will be involved in meeting these objectives and applying the Standards of Evidence now adopted by the Office for Students.

Find out more in our blog series ‘Building an Evaluative Mindset at Hallam’:

#1 Our Approach 

#2 Our Strategic Context

#3 Programme (Activity/Intervention) Design


Office for Students Recommendations for Evaluation in HE

Designing and evaluating effective practices is a key consideration for Access and Participation work, and has resonance across our institution.

There are three levels of evaluation evidence that represent the standards of evidence outlined by the OfS. These are as follows.


Narrative Evaluation: The impact evaluation provides a narrative or a coherent theory of change to motivate its selection of activities in the context of a coherent strategy.

Empirical Evaluation: The impact evaluation collects data on impact and reports evidence that those receiving an intervention have better outcomes, though it does not establish any direct causal effect.

Causal Evaluation: The impact evaluation methodology provides evidence of a causal effect of an intervention.
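As a purely illustrative sketch (the marks and groups below are invented, and this is not OfS methodology), the snippet shows why the same outcome comparison can count as empirical or causal evidence depending on the evaluation design:

```python
from statistics import mean

# Invented module marks, for illustration only.
participants = [62, 58, 71, 66, 64, 69]  # received the intervention
comparison = [55, 60, 57, 63, 52, 59]    # did not receive it

gap = mean(participants) - mean(comparison)
print(f"Observed gap in mean marks: {gap:.1f}")

# Empirical (level 2): reporting this gap evidences better outcomes for
# participants, but if students self-selected onto the scheme the gap
# may reflect who opted in rather than the intervention itself.
# Causal (level 3): the same arithmetic supports a causal claim only when
# group membership is controlled by design, e.g. random allocation or a
# quasi-experimental method such as matching or difference-in-differences.
```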

The Office for Students have produced guidance documents for collecting evidence of impact. These relate to Outreach, Access and Participation, and Financial Support.

This guidance is based on work by the Centre for Social Mobility, and is complemented by TASO (the Centre for Transforming Access and Student Outcomes in Higher Education), which has been created as a national body for supporting evidence and impact.

A STEER-produced checklist for working through the OfS recommended practice can be found here: STEER Evaluation Checklist


Evaluation Methods in Higher Education

STEER are currently developing an Evaluation Guide to support the design of effective evaluation across all areas of SHU.

In the interim, we strongly recommend the Research Leader’s Impact Toolkit published by the Leadership Foundation for Higher Education. This Toolkit features a publication on ‘Evaluating’.

The Research Leader’s Impact Toolkit resource and the supporting publications are for Leadership Foundation members only. Please click here for more details.

We also recommend the work of David Parsons, especially his recent book ‘Demystifying Evaluation’.

You can also access STEER’s free webinar resources on optimising the use of evidence in higher education here.

The Scottish Framework for Fair Access has also produced Evaluation Guidance, a useful collection of evaluation resources aimed at raising awareness of the importance of high-quality evaluation.


Research and Evaluation Ethics

It is important that evaluations are designed and conducted ethically and that all evaluators are clear about consent; transparency; the right to withdraw; incentives; harm arising from participation; privacy and data storage; and disclosure. This is particularly important if you are working with students as participants or co-evaluators.

STEER can offer ethical guidance and support on aspects of evaluation ethics within the institution.

Ethical Resources

@EthicsIR is a Hallam Guild Group which aims to increase awareness and understanding of the ethical principles involved in designing and conducting institutional research and evaluation at Hallam. The group has useful resources for those thinking about evaluation ethics, including a booklet from their recent Un-Conference: ETHICS IR BOOKLET (Draft Five)

There are further resources below and via the Hallam ethics pages:

You may also find it useful to consult the British Educational Research Association ethical guidance.

Ethical Approval

The processes for research ethical approval at Hallam can be found here.

For more details on the distinction between institutional research and evaluation within higher education you can read Austen (2018).

STEER have designed a new ethical approval process for institutional service evaluations. If you are conducting strategic evaluations which a) explore how well the institution is performing against its intended aims and b) generate data that will inform local decision making, then STEER can support a streamlined approval process. More information can be found here: Category Approvals for Research Ethics Review.

The application form for Category Approval of Institutional Service Evaluations can be found here: Category Approvals for Research Ethics Review_Form



Benchmarking Tools

The following links provide resources for those interested in benchmarking practice.  Benchmarking tools help you map current practice and develop aspirational practice.  You may want to consider which evaluation methods are needed to measure current practice and indicators of success in realising aspirations.

Types of Evidence

The generation of an evidence-informed rationale for an activity (event, initiative, intervention) is the first stage of effective evaluation.


The Office for Students has an A-Z of effective practice in access and participation which has begun to list contextual information.

If you are designing your own data collection methods, review the guides and examples produced by STEER.

In addition to primary data collection, consider the wide range of existing evidence which could be used to evaluate projects, initiatives or practices in HE. Austen (2018) provides an overview of some of the sources of evidence within an institution.


The TSEP Student Engagement Evaluation Framework (pp. 32-34) also provides a useful table of sources of evidence.

QAA Scotland have also published a useful set of resources and a guide for measuring intangible aspects of student experience.

SHU Examples


STEER are involved in various evaluations across the institution and externally, including within widening participation and student outcomes.

You can read our thoughts here: Impact should matter to everyone


Previous and current evaluations at Hallam include: pedagogical approaches (e.g. the impact of pass/fail approaches to level 4 module assessments); methodological approaches to capturing student voices (e.g. the effectiveness of the ‘listening room’ methodology); strategic proofs of concept (e.g. Studiosity use); and institution-wide initiatives (e.g. the impact of the GoGlobal initiative).

External work includes:

‘Evaluation of the National Teaching Excellence Scheme’, funded by the Office for Students.

‘Evaluation of the National Mixed Methods Learning Gain Project’, funded by the Office for Students.

Support

STEER are currently designing, leading and supporting evaluation activity across SHU including work within Colleges, Professional Services and initiatives aligned to SHU Strategic Priorities.

For more information about designing or conducting institutional evaluations, please contact steer@shu.ac.uk.
