Support for Evaluation

Evaluation @ Hallam – What do I need to know?

Sheffield Hallam is committed to continuous enhancement, which includes a responsibility to build an evaluative mindset across the institution. This ensures that all activities and interventions are evaluated (whether for process, impact or economics) and supports evidence-informed decision making.

 

Sheffield Hallam’s Access and Participation Plan 20-24 has a detailed and robust evaluation strategy. An overview of our strategic approach can be found in #1 Our Approach and #2 Our Strategic Context. You can also watch our short videos, An Introduction to Evaluation and Evidence and Impact: Your Evaluative Mindset.

STEER aims to help staff and students at Hallam by:

  • Continuing to support staff working on strategic measure programmes with iterative and systematic evidence-informed programme design and implementation.
  • Delivering training sessions to build staff capacity and capability in methodology and in examples of ‘what works’.
  • Designing and implementing a new process of ethical approval to allow institutional service evaluations to be shared externally and inform ‘what works’ evidence bases.
  • Creating and piloting an evaluation repository to communicate evaluative outputs and raise awareness of evaluative practices.

Read through the following resources and links to find out more:

Activity/Intervention (Programme) Design: evidence-informed approaches to activity design include rationale; measurement; definitions of success; logic chains/outcomes-focused evidence; and critical review. You can read more about activity design here: #3 Programme (Activity/Intervention) Design

Evaluation Design: the design of the evaluation should be based on how you evidence impact – consider the OfS Standards of Evaluation below to see which type of evaluation is most appropriate for your context. You can read more about evaluation design here: #4 Evaluation Design

Evaluation Implementation: the practicalities of data collection include a critical appraisal of the methods used, resourcing, and risk management, which together ensure that any evaluation is designed proportionately and effectively.

Evaluation Learning: interpreting results and making the best use of evaluation findings includes writing for different audiences and understanding the benefits and challenges of sharing results internally with institutional colleagues and leaders, and externally with the higher education sector.

A STEER-produced checklist for working through evaluative practice can also be found here: STEER Evaluation Checklist. We recommend that teams consult this checklist as an idea for an activity is forming.

You can also read our thoughts here: Impact should matter to everyone


Where can I find out more?

STEER have created a series of short videos to support you with building an evaluative mindset. You can access these videos, and accompanying resources, on our ‘Your Evaluative Mindset’ page.

We have also written guidance specifically on ‘Evaluating Learning and Teaching’.

You can also access STEER’s free webinar resources on optimising the use of evidence in higher education here. These were produced in collaboration with QAA Scotland.

We also recommend the work of David Parsons, especially his recent book ‘Demystifying Evaluation’.

Parsons (2017) has identified five key steps for effective evaluation research design, which STEER promote for all evaluative work. These are:

  • Evaluation research is not incremental. It requires clarity of expectations and needs before all else.
  • One size does not fit all. Good evaluation research design is always customised to some extent.
  • Effective design is not just about methods. It needs to combine technical choices (contextualised, fit for purpose, robust) with political context (so it is understood, credible and practical).
  • Method choices are led by a primary dichotomy: measurement (how much) vs. understanding (how and why). The chosen method can do both, if required.
  • Good design is always proportionate to needs, circumstances and resources.

The Research Leader’s Impact Toolkit resource and the supporting publications are for Leadership Foundation members only. Please click here for more details.

The Scottish Framework for Fair Access has also produced Evaluation Guidance, a useful collection of evaluation resources aimed at raising awareness of the importance of high-quality evaluation.

TASO has been created as a national body for supporting evidence and impact in higher education and also has evaluation guidance and an evidence toolkit.

The UK Evaluation Society also has Good Practice Guidelines, which include a focus on ethical evaluation and self-evaluation.


Why is evaluation important? 

Designing and evaluating effective practices is a key consideration for Access and Participation work, which is regulated and monitored by the Office for Students. This has resonance across our whole institution.

There are three types of evaluation evidence, representing the standards outlined by the OfS. These are as follows.

 

Narrative Evaluation: The impact evaluation provides a narrative or a coherent theory of change to motivate its selection of activities in the context of a coherent strategy.

Empirical Evaluation: The impact evaluation collects data on impact and reports evidence that those receiving an intervention have better outcomes, though does not establish any direct causal effect.

Causal Evaluation: The impact evaluation methodology provides evidence of a causal effect of an intervention.

The Office for Students has produced guidance documents for collecting evidence of impact. These relate to Outreach, Access and Participation, and Financial Support.

This guidance is based on work by the Centre for Social Mobility and is also complemented by TASO, which has been created as a national body for supporting evidence and impact.


Evaluation methods – How do I do it?

STEER are designing and piloting an Evaluation Repository which will house examples of evaluation conducted at Hallam.  It will be searchable by area of interest and methodology and will help others to design activities/interventions, plan evaluation and learn from the work of others.

The Office for Students has an A-Z of effective practice in access and participation which has begun to list contextual information which can be used to inform evaluation design.

TASO has an evidence toolkit which summarises the existing evidence on approaches to widening participation and student success for disadvantaged and underrepresented groups.

If you are designing your own data collection methods, review the guides and examples produced by STEER.

In addition to primary data collection, consider the wide range of existing evidence which could be used to evaluate projects, initiatives or practices in HE. Austen (2018) provides an overview of some of the sources of evidence within an institution.

The TSEP Student Engagement Evaluation Framework (pp. 32-34) also provides a useful table of sources of evidence.

QAA Scotland have also published a useful set of resources and a guide for measuring intangible aspects of student experience (the things we value, but can’t easily measure).

To support good evaluation practice, have a look at our resources on Research and Evaluation Ethics.


Where can I get support?

STEER are currently designing, leading and supporting evaluation activity across Hallam, including work within Colleges and Professional Services and initiatives aligned to strategic priorities.

For more information about designing or conducting institutional evaluations, please contact steer@shu.ac.uk.

 


Images courtesy of Pixabay, Picserver and Max Pixel.