Evaluation

Principles of Evaluation

Evaluative principles can be applied to many areas of higher education – consider the evidence needed for the continuous improvement of teaching quality, Teaching Excellence Framework submissions and Research Excellence Framework impact case studies.

Parsons (2017) identifies five key steps for effective evaluation research design, which STEER promote for all evaluative work. These are:

  • Evaluation research is not incremental. It requires clarity of expectations and needs before all else
  • One size does not fit all. Good evaluation research design is always customised to some extent
  • Effective design is not just about methods. It needs to combine technical choices (contextualised, fit for purpose, robust) with political context (so it is understood, credible and practical)
  • Method choices are led by a primary dichotomy: consideration of measurement (how much) vs. understanding (how; why). The chosen method can do both, if required
  • Good design is always proportionate to needs, circumstance and resources

Evaluation should always begin at the point of activity design and run through the life of a project:

Activity Design: evidence-informed approaches to activity design include rationale; measurement; definitions of success; logic chains/outcomes-focused evidence; and critical review (a minimal sketch of a logic chain follows these four stages).

Evaluation Design: the design of the evaluation should be based on how you will evidence impact – consider the OfS Standards of Evaluation below to see which type of evaluation is most appropriate for your context.

Evaluation Implementation: the practicalities of data collection include critical appraisal of the methods used, resourcing and risk management, ensuring that any evaluation is designed proportionately and effectively.

Evaluation Learning: interpreting results and making the best use of evaluation findings includes writing for different audiences, and weighing the benefits and challenges of sharing results internally with institutional colleagues and leaders, and externally with the higher education sector.
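
To ground the 'logic chains/outcomes-focused evidence' element of activity design, here is a minimal sketch of a logic chain written in Python. The activity, inputs and measures are hypothetical and invented for illustration only; the point is that each stage of the chain should name the evidence that would demonstrate success.

    # A hypothetical logic chain for activity design. All names and
    # measures below are invented for illustration only.
    from dataclasses import dataclass

    @dataclass
    class LogicChain:
        activity: str
        inputs: list[str]            # resources the activity depends on
        outputs: list[str]           # what the activity directly delivers
        outcomes: list[str]          # the change it is expected to produce
        success_measures: list[str]  # how success will be evidenced

    summer_school = LogicChain(
        activity="Pre-entry summer school for under-represented applicants",
        inputs=["staff time", "campus accommodation", "bursary funding"],
        outputs=["three-day residential delivered to 60 applicants"],
        outcomes=["increased confidence about starting university",
                  "improved enrolment and continuation rates"],
        success_measures=["pre/post confidence survey",
                          "enrolment rates vs. a matched comparison group"],
    )

    # Reading the chain end to end makes the evaluation design explicit:
    # each outcome is paired with the measure that will evidence it.
    for stage in ("inputs", "outputs", "outcomes", "success_measures"):
        print(f"{stage}: {getattr(summer_school, stage)}")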


Building an Evaluative Mind-set

Sheffield Hallam is committed to continuous enhancement and, as such, takes responsibility for building an evaluative mind-set across the institution. This ensures that all activities and interventions are evaluated (through process, impact or economic evaluation) and supports evidence-based decision-making.

Specifically for under-represented groups, our Access and Participation Plan states that our evaluation objectives are to:

  • Develop evaluation capability and capacity across Sheffield Hallam University 
  • Develop evaluative mind-sets within the University 
  • Build a widening participation (WP) evidence base and a ‘what works’ compendium
  • Extend the Class of 2020 longitudinal initiative as an effective process for engaging with increasingly diverse student cohorts 
  • Escalate innovative methodologies that examine differential outcomes, for example, through the expansion of our Digital Storytelling and Listening Rooms initiatives to engage hard-to-reach students and capture student voices, especially those which seek to challenge the norm or prevailing culture

STEER will be involved in meeting these objectives and in applying the Standards of Evaluation Practice now adopted by the Office for Students. This work will also include:

  • a professional development offer for evaluation activity across Hallam
  • the development of an evaluation repository for sharing practice and outcomes

Office for Students Recommendations for Evaluation in HE

Designing and evaluating effective practices is a key consideration for Access and Participation work, and has resonance across our institution.

There are three levels of evaluation, representing the standards outlined by the OfS. These are as follows.

Narrative Evaluation: The impact evaluation provides a narrative or a coherent theory of change to motivate its selection of activities in the context of a coherent strategy.

Empirical Evaluation: The impact evaluation collects data on impact and reports evidence that those receiving an intervention have better outcomes, though does not establish any direct causal effect.

Causal Evaluation: The impact evaluation methodology provides evidence of a causal effect of an intervention.
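
To make the distinction between these standards concrete, here is a minimal sketch in Python using invented figures. The first comparison reports an outcome gap among self-selecting participants, which meets the empirical standard but not the causal one; the second uses random assignment, so the difference in group means estimates a causal effect. The scheme, marks and effect size are all hypothetical.

    # A hypothetical contrast between empirical and causal evaluation.
    # All figures below are invented for illustration only.
    import random
    from statistics import mean

    random.seed(42)

    # Empirical: participants self-selected onto a mentoring scheme,
    # so the observed gap shows association, not causation.
    participants = [62, 68, 71, 65, 70, 74]      # e.g. module marks
    non_participants = [58, 61, 66, 60, 63, 59]
    gap = mean(participants) - mean(non_participants)
    print(f"Observed gap (association only): {gap:.1f} marks")

    # Causal: random assignment removes self-selection, so the
    # difference in group means estimates a causal effect.
    students = list(range(200))
    random.shuffle(students)
    treated, control = set(students[:100]), set(students[100:])

    def outcome(student_id: int) -> float:
        # Noisy baseline plus a true effect of +3 marks for treated
        # students (unknown to the evaluator in a real study).
        return 60 + random.gauss(0, 5) + (3 if student_id in treated else 0)

    effect = (mean(outcome(s) for s in treated)
              - mean(outcome(s) for s in control))
    print(f"Estimated causal effect under randomisation: {effect:.1f} marks")

Randomisation is not always feasible in practice; quasi-experimental designs such as matched comparison groups can also meet the causal standard.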

The Office for Students have produced guidance documents for collecting evidence of impact. These relate to Outreach, Access and Participation, and Financial Support.

This work is also complemented by TASO (the Centre for Transforming Access and Student Outcomes in Higher Education), a national body created to support evidence and impact in the sector.

Evaluation Methods in Higher Education

STEER are currently developing an Evaluation Guide to support the design of effective evaluation across all areas of SHU.

In the interim, we strongly recommend the Research Leader’s Impact Toolkit published by the Leadership Foundation for Higher Education. This Toolkit features a publication on ‘Evaluating’.

The Research Leader’s Impact Toolkit resource and the supporting publications are for Leadership Foundation members only. Please click here for more details.

We also recommend the work of David Parsons, especially his recent book ‘Demystifying Evaluation’.

You can also access STEER’s free webinar resources on optimising the use of evidence in higher education here.

The Scottish Framework for Fair Access has also produced Evaluation Guidance, a useful collection of resources aimed at raising awareness of the importance of high-quality evaluation.


Research and Evaluation Ethics

It is important that evaluations are designed and conducted ethically, and that all evaluators are clear about consent; transparency; the right to withdraw; incentives; harm arising from participation; privacy and data storage; and disclosure.

STEER can offer guidance and support on aspects of evaluation ethics within the institution.

@EthicsIR is a Hallam Guild Group which also aims to increase awareness and understanding of ethical principles involved in designing and conducting institutional research/evaluation at Hallam.

The processes for ethical approval at Hallam can be found here and include consent templates and guidance.

You may also find it useful to consult the British Educational Research Association ethical guidance.

For more details on the distinction between research and evaluation within higher education you can read Austen (2018).


 

Benchmarking Tools

The following links provide resources for those interested in benchmarking practice. Benchmarking tools help you map current practice and develop aspirational practice. You may want to consider which evaluation methods are needed to measure current practice, and which indicators would show success in realising aspirations.


Types of Evidence

The generation of an evidence-informed rationale for an activity (event, initiative, intervention) is the first stage of effective evaluation.

The Office for Students has an A-Z of effective practice in access and participation, which has begun to list contextual information.

If you are designing your own data collection methods, review the guides and examples produced by STEER.

In addition to primary data collection, consider the wide range of existing evidence which could be used to evaluate projects, initiatives or practices in HE. Austen (2018) provides an overview of some of the sources of evidence within an institution.

The TSEP Student Engagement Evaluation Framework (pp. 32-34) also provides a useful table of sources of evidence.

SHU Examples


STEER are involved in various evaluations across the institution and externally, including within widening participation and student outcomes.

You can read our thoughts here: ‘Impact should matter to everyone’.

Previous and current evaluations at Hallam include: pedagogical approaches (e.g. the impact of pass/fail approaches to level 4 module assessments); methodological approaches to capturing student voices (e.g. the effectiveness of the ‘listening room’ methodology); strategic proofs of concept (e.g. Studiosity); and institution-wide initiatives (e.g. the impact of the GoGlobal initiative).

External work includes:

  • ‘Evaluation of the National Teaching Excellence Scheme’, funded by the Office for Students
  • ‘Evaluation of the National Mixed Methods Learning Gain Project’, funded by the Office for Students

Support

STEER are currently designing, leading and supporting evaluation activity across SHU including work within Faculties, Professional Services and initiatives aligned to SHU Strategic Priorities.

For more information about designing or conducting institutional evaluations, please contact steer@shu.ac.uk.
