
DOCUMENT INFORMATION

Basic information

Title: Final Evaluation of the Office for Students Learning Gain Pilot Projects
Author: Dr Camille B. Kandiko Howson
Institution: King's College London
Document type: Report
Year of publication: 2019
Number of pages: 68
File size: 1.04 MB

Structure

  • Appendix 1: Background information (53)
  • Appendix 2: Evaluation process (56)
  • Appendix 3: Project methodology (61)
  • Appendix 4: Evaluating validity and reliability (64)
  • Appendix 5: Student engagement approaches (66)
  • Appendix 6: Pilot project selection (68)

Content

Background information

1.1 The final report evaluates the OfS learning gain pilot projects, which were initiated following a call for expressions of interest in March 2015. HEFCE allocated over £4 million to 13 pilot projects across more than 70 higher education institutions to assess learning gain measures in England. In 2018, the OfS assumed management of these projects, which lasted between one and three years, with some extending their work through internal funding.

1.2 In addition to the pilot projects, there were separate strands of the learning gain programme. These included:

• the National Mixed Methodology Learning Gain Project, overseen by HEFCE, a longitudinal study involving multiple institutions, which integrates a critical thinking and problem-solving assessment with self-reflective questions examining academic motivation, attitudes towards literacy and diversity, and various aspects of student engagement;

• the Higher Education Learning Gain Analysis (HELGA) project, an assessment of the potential application of national data sets to learning gain issues; and

• capacity building and networking events.

1.3 In 2015, RAND Europe conducted an independent scoping study that gathered information on learning gain, which is broadly defined as the change in knowledge, skills, work readiness, and personal development. This concept also encompasses the enhancement of specific practices and outcomes within particular disciplinary and institutional contexts, as discussed in Sections 2.2–2.3 of the report.

1.4 On the basis of the scoping study, five broad approaches to measuring learning gain were identified, which were tested and analysed through the pilot projects:

• Grades – comparing differences in academic performance at two distinct points in time, either through standardised measures such as Grade Point Average (GPA) or by using a specific set of grades (standardised or not) to predict future grades (see the sketch after this list).

• Self-reporting surveys – asking students to report the extent to which they consider themselves to have gained knowledge and developed skills, through a survey administered at a number of points throughout their degree programme.

108 Evaluation of HEFCE's learning gain pilot projects: Year 1 report. https://webarchive.nationalarchives.gov.uk/20180322111250/http://www.hefce.ac.uk/pubs/rereports/year/2017/lgeval/

109 Evaluation of HEFCE's learning gain pilot projects: Year 2 report. https://www.officeforstudents.org.uk/advice-and-guidance/teaching/learning-gain/learning-gain-pilot-projects/

110 RAND Europe report on learning gain: https://webarchive.nationalarchives.gov.uk/20180319114205/http://www.hefce.ac.uk/pubs/rereports/year/2015/learninggain/


• Standardised tests – evaluating the acquisition of specific generic or specialised skills; these can be administered to students as part of formative or summative assessment for their degree, or as supplementary exercises alongside their courses.

• Other qualitative methods – including encouraging students to reflect on their learning, acquired skills and remaining skills gaps, and facilitating a formative exchange between students and their tutors.

• Mixed methods – using a combination of methods and indicators to track improvement in performance, for example through a combination of grades, student learning data and student surveys.
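As a concrete illustration of the grades approach (a minimal sketch with invented data and column names, not taken from any pilot project), learning gain can be computed as the change in standardised marks between two time points:

```python
# Hypothetical sketch of the grades-based approach: learning gain as the
# change in standardised attainment between two time points (e.g. Year 1
# and Year 2 mean marks). All data and column names are illustrative.
import pandas as pd

def standardise(scores: pd.Series) -> pd.Series:
    """Convert raw marks to z-scores so marks on different scales compare."""
    return (scores - scores.mean()) / scores.std()

records = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "year1_mark": [52.0, 61.0, 70.0, 58.0],
    "year2_mark": [60.0, 63.0, 74.0, 55.0],
})

# Gain is the difference between standardised marks at the two points.
records["gain"] = standardise(records["year2_mark"]) - standardise(records["year1_mark"])
print(records[["student_id", "gain"]])
```

The same frame could instead feed a regression that uses the first set of grades to predict the second, with deviations from prediction read as relative gain.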

1.5 Coinciding with the transfer of the projects from HEFCE to the OfS, the learning gain programme has evolved to also support the objectives of the OfS, namely around student experience and outcomes.111

1.6 Course-level measures of learning gain are central to various international initiatives, including the European Commission's support for the CALOHEE project within the Tuning framework. While this project is progressing, its emphasis is on aligning course design frameworks rather than directly addressing student outcomes. Additionally, national research projects in countries such as Germany, Brazil, Italy and Colombia are examining student learning outcomes, highlighting issues related to student engagement, the need for a broader focus across higher education sectors, and practical implementation challenges.

1.7 The AAC&U VALUE project in the US provides rubrics for the external assessment of students' in-course assignments, aligned with nationally standardised learning outcomes. This initiative, led by academic staff and supported by extensive institutional commitment, is resource- and time-intensive. However, the investment in large-scale, subject-based learning outcome initiatives suggests greater potential for effective use, including opportunities for international comparison.

1.8 Several years ago, the Organisation for Economic Co-operation and Development (OECD) conducted a feasibility study on the Assessment of Higher Education Learning Outcomes (AHELO) across various countries and disciplines. This initiative encountered challenges regarding the criteria for measurement, particularly in relation to international and cultural differences, with subject-level differences also emerging. Due to concerns about data quality and use, the project was not continued.118

111 https://www.officeforstudents.org.uk/about/our-strategy/

113 Modelling and Measuring Competencies in Higher Education. http://www.kompetenzen-im-hochschulsektor.de/index_ENG.php

114 https://link.springer.com/article/10.1007/s10734-015-9963-x

115 https://link.springer.com/article/10.1007/s40888-017-0075-1

116 http://www.tandfonline.com/doi/abs/10.1080/02602938.2016.1168772

117 Drezek McConnell K, Rhodes TL, 2017, 'On solid ground', VALUE report 2017. Washington, DC: Association of American Colleges and Universities.


118 OECD, 2013a, 'Assessment of higher education learning outcomes. Feasibility study report, Volume 2: Data analysis and national experiences'. OECD. http://www.oecd.org/education/skills-beyond-school/AHELOFSReportVolume2.pdf

OECD, 2013b, 'Assessment of higher education learning outcomes. AHELO feasibility study report, Volume 3: Further insights'. http://www.oecd.org/education/skills-beyond-school/AHELOFSReportVolume3.pdf

Evaluation process

The independent evaluation of the pilot projects, funded by the learning gain programme, aims to identify best practices and provide supporting evidence for measuring learning gain in the English higher education sector. This evaluation assesses the effectiveness of the various methods piloted, offering insights into their strengths and weaknesses, and provides recommendations for their application across the sector.

The evaluation aimed to assess the success of the learning gain projects in relation to their objectives, analyse the progress and outcomes of each pilot project against specific success criteria, and examine the effectiveness and challenges of the various learning gain methods in England. Additionally, it sought to monitor pilot project developments to identify emerging themes and issues, highlight knowledge gaps requiring further investigation, and share evaluation findings with both project stakeholders and a broader audience. Ultimately, the evaluation results will inform recommendations for the Office for Students (OfS) to guide future government policies on learning gain.

The evaluation of the diverse projects is conducted on two main levels: assessing each project's specific success criteria and applying an overarching evaluation framework. These approaches work together iteratively. The evaluation framework emphasises four key areas of focus, with additional details about the individual projects available on the OfS website.

• Development of a measure/proxy of learning gain
    o What approach was used?
    o How was learning gain measured?

• Robustness and effectiveness
    o Validity and reliability
    o How many students were involved?
    o How did the project develop over time?
    o How was the measure of learning gain judged and assessed?

119 https://www.officeforstudents.org.uk/advice-and-guidance/teaching/learning-gain/

• Suitability
    o Is the measure feasible and practical, and does it offer value for money?
    o Does the measure resonate with students, academics and other stakeholders, ensuring its relevance and acceptance?
    o Does the measure support students and enhance the teaching and learning experience?

• Scalability
    o Was data and information shared across institutions?
    o Was/is the measure replicable across disciplines, student groups and at other institutions?

The initial phase of the evaluation framework emphasises the theoretical and practical aspects of measuring learning gain. This involves addressing fundamental philosophical questions regarding the purpose of higher education and the motivations behind assessing learning outcomes. Understanding these elements is crucial for developing effective measures of learning gain.

The next step focuses on transforming theoretical concepts into practical measures that can be empirically tested and developed. This process also considers the context in which projects are implemented, targeting specific student demographics, subject areas, geographic regions, or types of institutions. In Year 1 of the evaluation of the pilot projects, the primary emphasis was on the development of measures of learning gain.

The second stage of evaluation focuses on assessing the robustness and effectiveness of measurements, emphasising the importance of reliability and validity. This stage builds upon the rationale of what is being measured, ensuring that the evaluation process is both thorough and credible.

Reliability refers to the consistency and accuracy of a measurement, while validity encompasses theoretical, practical and technical dimensions. Theoretically, validity assesses whether a measure conceptually aligns with its intended purpose, often explored through qualitative research involving students and stakeholders. Practically, it examines whether the metric effectively measures what it claims to, using both qualitative and quantitative methods. Technical considerations are also crucial for evaluating validity, especially in the development of survey items and scales.
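In classical test theory terms (a standard framing assumed here, not one the report spells out), reliability has a compact formal definition, with X the observed score, T the true score and E random error:

```latex
X = T + E, \qquad
\rho_{XX'} = \frac{\sigma_T^2}{\sigma_X^2}
           = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}
```

Reliability is thus the proportion of observed-score variance attributable to true scores; in practice it is estimated, for example, by correlating scores from two administrations of the same instrument under stable conditions.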

When developing learning gain measures, it is crucial to consider the context to ensure validity and reliability. This involves examining how measures are tested across specific student groups, subject areas, regions, or institutional types. The representativeness of sample populations and respondents also plays a significant role. Furthermore, the evaluation of validity is influenced by the intended purpose of a measure; for instance, while assignment grades can effectively differentiate student gains within a module, they may not be a valid measure for comparison across institutions.

120 Evaluation of HEFCE's learning gain pilot projects: Year 1 report. https://webarchive.nationalarchives.gov.uk/20180322111250/http://www.hefce.ac.uk/pubs/rereports/year/2017/lgeval/

Robustness and effectiveness were major areas of focus for the Year 2 evaluation of the pilot projects.121

The third stage of evaluation, suitability, assesses contextualised validity through feasibility and usability, building on the robustness and effectiveness of measures while drawing lessons from broader projects and research. This analysis weighs the potential benefits and consequences of using these measures, emphasising that suitability is context-dependent rather than absolute. Additionally, the unit level of analysis poses challenges for various projects; for instance, Birmingham City University concentrated on student-level outcomes of the CLA+ test, which was primarily designed for institutional use, while the University of Reading found it invalid for assessing individual student progress.

Embeddedness refers to the extent to which project outcomes influence the activities of academic staff, professional staff, students and institutional leadership. This degree of embeddedness can differ across project strands, between departments within an institution, and among project partners, highlighting the varied impact of data utilisation, such as when data is primarily accessed by the careers office.

When evaluating educational measures, it is crucial to consider their feasibility and relevance to students, staff and stakeholders, as well as their potential to support student success and improve teaching and learning. Practicality plays a key role, as projects must effectively define and pilot learning gain measures, encourage student participation in assessments, and analyse and report on the results. While some projects have tested extensive instruments with hundreds of items to gather robust data, such lengthy assessments risk low student completion rates.

Usability examines the practical application of measures within institutions and their potential uses beyond, linking directly to the validity of learning gain metrics. The context and integration of these measures significantly influence the evaluation of their suitability. In the evaluation's second and third years, the focus was primarily on assessing both the suitability and usability of these metrics.

The concept of value for money received minimal engagement, yet qualitative feedback indicates that students appreciate the opportunities for self-development and the skills acquired. Further efforts are necessary to translate these insights into a comprehensive assessment of value for money.

Scalability involves analysing data and evaluating the effectiveness of measurement approaches in relation to learning gain. It considers the intended purposes of these measures, the target audiences, and the specific contexts concerning students, subjects and institutions. Drawing on the initial evaluation stage, scalability assesses the potential wider applications of the metrics to enhance educational outcomes.

121 https://www.officeforstudents.org.uk/media/1386/evaluation-of-hefce-s-learning-gain-pilot-projects-year-2.pdf

Project methodology

The Year 2 evaluation report and the project case studies on the OfS website provide detailed insights into project methodologies. Among the 13 funded projects, 10 received three-year funding to develop measures and track student progress over time. Plymouth University's two-year project utilised a longitudinal design, while Ravensbourne's one-year project employed a cross-sectional approach, leveraging extensive historical data. The University of East Anglia's two-year project also adopted a cross-sectional design, incorporating testing at multiple time points. Additionally, two of the three-year projects, from the University of Warwick and the Careers Group, included a cross-sectional component in their methodologies.

The pilot projects employed a variety of methods, integrating newly collected data with secondary analyses of existing institutional data. This approach encompassed entry data, student demographics and characteristics, and metrics related to student progress, continuation and attainment, such as grades.

Due to challenges with project start times and student engagement, several projects had to adjust their methodologies, which included eliminating data collection points in Year 1 or adding new ones in Year 2. Longitudinal projects that tracked individual students over time incorporated a cross-sectional approach by monitoring incoming student cohorts to compensate for low engagement in Year 1. While this adaptation enabled the collection of sufficient data to evaluate instruments, it significantly complicated the analysis and findings, making it difficult to draw definitive conclusions regarding their usefulness and scalability.

Two projects, The Open University and the Careers Group, analyse student data across various cohorts and over time. Additionally, the University of Reading examines several years of existing student data, while Ravensbourne conducts an analysis of DLHE data.

Learner analytics encompasses the collection, analysis and reporting of data regarding students and their educational contexts, primarily through secondary data analysis. This includes examining entry scores alongside final grades, evaluating how participation in a module's Virtual Learning Environment (VLE) influences assessment outcomes, and identifying achievement gaps among various student demographics. Institutions utilise learner analytics to uncover trends and patterns, such as disparities in attainment by gender or ethnicity, or the effect of dissertation submissions on grades, which can subsequently be investigated using qualitative methods.
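As a hedged sketch of the kind of secondary analysis described above (column names and data invented for illustration, not drawn from any project), a learner analytics query might look like:

```python
# Illustrative learner analytics checks on hypothetical data:
# (1) does VLE activity relate to module marks?
# (2) is there an attainment gap between demographic groups?
import pandas as pd

students = pd.DataFrame({
    "vle_logins":  [12, 40, 25, 8, 33, 19],
    "module_mark": [48, 72, 61, 42, 68, 55],
    "group":       ["A", "B", "A", "A", "B", "B"],
})

# Pearson correlation between VLE engagement and attainment.
r = students["vle_logins"].corr(students["module_mark"])
print(f"VLE activity vs mark: r = {r:.2f}")

# Mean attainment by group: a first look at possible attainment gaps.
print(students.groupby("group")["module_mark"].mean())
```

Patterns surfaced this way indicate association only; as the paragraph notes, explaining them requires follow-up qualitative work.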

The Open University's project focuses on gathering data through satisfaction surveys (affective measures), Virtual Learning Environment (VLE) metrics (behavioural measures), and academic performance assessments (cognitive measures). Key behavioural elements analysed include class attendance, participation in discussion forums, engagement in chats, and essay submissions.

123 https://www.officeforstudents.org.uk/advice-and-guidance/teaching/learning-gain/learning-gain-pilot-projects/

This data is correlated with central records on student characteristics and entry measures. Comparable analyses were performed in the projects at the University of Reading and the University of East Anglia, focusing on existing data about student characteristics, progression and academic performance.

Analysing large datasets presents the challenge of identifying meaningful patterns and trends, as well as areas for further exploration. While interesting findings may reveal relationships within the data, they often lack explanations for these connections, necessitating additional qualitative analysis. This approach can be beneficial for institutional improvement, though it may be less effective for assessing accountability in learning gains.

The projects focused on collecting new data aim to analyse specific student groups and track them over time. Future initiatives may involve analysing entire cohorts to establish benchmark data, particularly concerning entry scores and outcome modelling; however, none of the projects that planned this analysis were able to complete it within the designated timeframe. Unlike those utilising existing data, these projects have a clear area of inquiry, yet they face the challenge of gathering enough data to ensure generalisability across various student characteristics, subjects and institutional types.
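One common way such a cohort-level benchmark is built (a sketch under assumed data, not the projects' actual method) is a simple value-added model: regress outcomes on entry scores, then read each student's residual as a rough, cohort-referenced gain estimate:

```python
# Minimal value-added sketch with invented data: fit final marks against
# entry scores by ordinary least squares, then treat the residual
# (actual minus predicted) as a crude learning gain proxy.
import numpy as np

entry = np.array([96.0, 112.0, 128.0, 104.0, 120.0])  # entry tariff points (hypothetical)
final = np.array([55.0, 62.0, 71.0, 60.0, 64.0])      # final-year mean marks

slope, intercept = np.polyfit(entry, final, deg=1)    # final ~ slope*entry + intercept
value_added = final - (slope * entry + intercept)     # positive = above expectation
print(np.round(value_added, 2))
```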

Four projects use survey data alongside secondary institutional data to enhance understanding of student progression. The Careers Group project integrates a brief survey within the student registration process, allowing for comparisons across institutions despite slight variations in questions. Similarly, The Manchester College project combines UKES data with institutional secondary data, concentrating on the profiles and pathways of students in further education settings.

1.10 The University of East London project combined survey data including scales on

The University of Portsmouth's project emphasises non-cognitive skill development, utilising self-reported questionnaire data including the Approaches and Study Skills Inventory for Students (ASSIST) and Dweck's Implicit Theories of Intelligence. It aims to assess the relationship between the Need for Cognition and Academic Behaviour Confidence Scales and academic performance, while also incorporating partial administration of the UK Engagement Survey (UKES) alongside secondary institutional data on socio-demographics, Widening Participation, non-continuation and attainment. Additionally, the project focuses on creating new psychometric tools to evaluate the growth of non-cognitive skills among students.

The University of East Anglia project includes a focus on self-efficacy assessments in economics, which are examined in conjunction with secondary data on student performance, specifically measured through GPA.

1.12 Tests and surveys. Half of the projects combine survey data with a standardised test.

Birmingham City University has implemented the CLA+ test but faced challenges in achieving adequate response rates when attempting to connect it with UKES data. In contrast, the University of Reading's project emphasises analysing existing student demographics and average marks, supplemented by primary data from the CLA+, UKES, and internal wellbeing and careers surveys. As the CLA+ test is externally administered, both projects were somewhat constrained in their data collection by the availability of testing windows and the cost of administration.

The University of Cambridge, in collaboration with the University of Warwick, developed a survey instrument to measure learning gain in higher education across various disciplines. Meanwhile, the University of East Anglia focused on creating discipline-based concept inventories, akin to standardised tests administered at multiple intervals, which are increasingly popular in fields like physics. Initially trialled in chemistry and biology, these inventories were later expanded to include pharmacy.

The University of Manchester developed a Competence Scale for a standardised Critical Reasoning Skills test, complemented by questionnaires assessing factors such as disposition, transition, perceptions of support, pedagogic practices, and confidence in learning outcomes, all connected to existing entry and attainment data. Meanwhile, the University of Lincoln's project focuses on integrating data from a Situational Judgement Test (SJT) with information on student training participation, democratic engagement in the students' union, work experience, extracurricular activities, and secondary academic performance data, with an emphasis on Widening Participation.

Evaluating validity and reliability

All projects assessed the effectiveness of their learning gain measures against their specific success criteria. The validity and reliability of these measures can be evaluated through statistical methods, stakeholder interviews, and comprehensive analysis of the results.

Content validity assesses how well a measure encompasses all aspects of a construct, ensuring that tests reflect the actual material taught rather than irrelevant questions. It employs statistical methods to include only meaningful elements, while face validity involves qualitative evaluations, such as student feedback on the design and usefulness of a test or survey. When developing assessment instruments, it is crucial to clearly define the intended measurement and verify its effectiveness.

Construct validity in project development was assessed through various methods, including interviews with higher education students and discussions with panels of experts, managers, and experienced tutors from diverse courses and disciplines across institutions.

Concurrent and predictive validity are essential for assessing learning gain, as many projects employ several methods to measure improvement. By analysing and triangulating data from the different approaches, researchers can evaluate the effectiveness of each measure. For instance, a lack of correlation between test scores and grades indicates that these methods may not be capturing the same learning outcomes. Additionally, predictive validity examines the relationships between new learning gain measures and established objective assessments.
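A minimal way to operationalise this triangulation (hypothetical data and measure names, for illustration only) is a correlation matrix across the candidate measures; weak off-diagonal correlations would suggest the instruments are not capturing the same construct:

```python
# Triangulating candidate learning gain measures on invented data:
# scores that track the same construct should correlate.
import pandas as pd

measures = pd.DataFrame({
    "test_score":   [55, 70, 62, 48, 66],        # standardised test
    "module_grade": [58, 73, 60, 50, 69],        # course attainment
    "self_report":  [3.1, 4.0, 2.8, 3.5, 3.9],   # survey scale, 1-5
})

# Pairwise Pearson correlations between the measures.
print(measures.corr().round(2))
```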

External validity is a crucial consideration in projects collecting new data from students, as there is a risk that more engaged students may be overrepresented, potentially skewing results. To address this concern, statistical controls can be implemented based on an analysis of the entire student population, and projects that include all students ensure a more representative sample. Furthermore, the diversity of subjects chosen across the various projects enhances the external validity of the measures employed, providing a more accurate reflection of the broader student experience.

Internal validity examines whether the questions in a test or survey accurately reflect the desired outcomes. This assessment involves analysing each instrument used in the projects, primarily through statistical testing of the results; this analysis is still underway to gauge the effectiveness of the measures for longitudinal analysis.

125 Cozby PC, 2001, 'Measurement Concepts', Methods in Behavioral Research (7th ed.). California: Mayfield Publishing Company.

126 Cronbach LJ, 1971, 'Test validation', in Thorndike RL (ed.), Educational Measurement (2nd ed.). Washington, DC: American Council on Education.

127 Litwin M, 1995, 'How to Measure Survey Reliability and Validity'. Thousand Oaks, CA: Sage Publications.


The use of learning gain metrics in higher education could have substantial consequences across the sector, similar to the effects seen in the schools sector, where 'teaching to the test' has influenced educational practices. High-stakes outcome measures can significantly shape teaching methods, raising concerns about the potential repercussions of implementing these metrics on the assessment landscape in higher education.

Reliability is a key indicator of consistency, reflecting a measure's ability to yield similar results under stable conditions. Projects developing new instruments are performing reliability tests, including factor analysis, the Rasch measurement framework and test reliability theory, while also modifying instruments between evaluation waves. These projects actively engage in focus groups and interviews with students, staff, parents and employers to assess both the validity and reliability of the instruments used.
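For example, one standard internal-consistency check from test reliability theory is Cronbach's alpha; the sketch below computes it from a hypothetical item-response matrix (invented data, not from the projects):

```python
# Cronbach's alpha from a students-by-items response matrix (invented data).
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

responses = np.array([   # rows = students, columns = survey items (1-5 scale)
    [4, 5, 4, 3],
    [3, 4, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = responses.shape[1]
item_vars = responses.var(axis=0, ddof=1).sum()  # per-item variances, summed
total_var = responses.sum(axis=1).var(ddof=1)    # variance of students' total scores
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # values near 1 indicate consistent items
```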

Student engagement approaches

1.1 Compulsion. The project with the most success in engaging students was the one at Birmingham City University, where completing the survey questions was a mandatory part of students' annual course registration, resulting in high participation rates across subjects and demographics. Notably, the partner institution that communicated the survey as compulsory and provided minimal advance notice achieved the highest attendance rates among the project's partners.

1.2 Engaging student-facing staff significantly enhances student involvement in the projects. Plymouth University found that the support of programme leads and lecturers was crucial for fostering student engagement across its partner institutions. Similarly, Birmingham City University identified that the relationship between students and key academics involved in their programmes plays a vital role in student recruitment. While many projects experienced high engagement where dedicated frontline staff were present, the University of East Anglia noted that success in one area does not guarantee similar outcomes at other institutions.

1.3 Scheduling testing and surveying sessions in dedicated rooms proved an effective strategy for increasing completion rates. The University of Lincoln observed that students performed better under workshop conditions than when simply sent emails for self-completion. Similarly, the University of Reading reported higher engagement when testing rooms were reserved for student use.

1.4 Incentives were generally found to be ineffective in encouraging student participation, with many students completing tests and surveys out of personal interest rather than for the incentives offered. While some projects experienced limited success with incentives, Birmingham City University highlighted the contentious nature of their effectiveness. Notably, small incentives, such as printer credit, proved more successful in promoting engagement than prize draws.

1.5 Projects tried numerous different approaches to incentives. Plymouth University used Amazon vouchers and prize draws to encourage participation. At the University of Lincoln, a £10 printer credit was offered to students who completed all project elements, alongside a £100 prize draw, to address initial recruitment challenges. Similarly, the University of Portsmouth project faced low engagement and responded by providing raffles and personalised reports. Institutions also emphasised the connection between the project and employability awards, further encouraging student involvement through tailored feedback and incentives.

1.6 Students' union engagement. Several projects engaged with their students' union.

The University of Reading includes a students' union representative in its steering group, while the University of Manchester collaborates with its students' union on project development and tools. Additionally, the University of Lincoln has partnered with its students' union to gather extensive data on student activities and engagement.

While this has been helpful, engaging with the students' union did not seem to boost student engagement in the projects or raise response rates.
