The Importance of Monitoring, Assessment, Record Keeping and Reporting

Over the past 30 years, assessment has been a major focus of educational debate and research. It is generally accepted that assessment is a vital part of the teacher's role and one that must be carefully considered. According to Kellough and Kellough (1999, p. 417),

"Coaching and learning are reciprocal operations that be based upon and affect one another. Thus, the evaluation component deals with how well the students are learning and exactly how well the educator is teaching".

Haydn (2009, cited in Capel et al. 2009, p. 329) defines assessment as "those activities which are undertaken by teachers, and others, to measure the effectiveness of teaching and learning". Although this is a fairly broad definition, it does allude to the wider importance of assessment. Assessment can be used to measure teaching and learning and to inform future practice at various levels of education:

Pupils - to identify current achievement, attainment and areas for progression.

Parents / carers - to identify and support pupil progression, and to understand individual and school performance in comparison to national benchmarks.

Teachers - to identify areas of strength and weakness in their pupils' skills and knowledge, thus informing planning, providing work of appropriate challenge, covering the National Curriculum and ensuring progression.

Senior Leaders / Governors - to identify the school's areas of strength and areas for development against national expectations and local issues and demographics, thus informing the school development plan.

Government - to use a variety of assessment data and statistical analysis to measure school performance, identifying good / best practice or areas which might need closer monitoring and support.

The assessment process clearly serves an extensive range of purposes for the many people involved in education and schools. It is important to break the assessment process down into two generally accepted strands: 'assessment of learning' (AOL) and 'assessment for learning' (AFL). AOL is characterised by the use of tests, targets and examinations whereby pupils receive a fixed mark, level or exam result (e.g. the outcome of a GCSE exam) (O'Neill & Ockmore, 2006). In contrast, AFL is concerned more with the process of collecting information from pupils so that both they and the teacher can identify the current stage of learning taking place, and therefore highlight what needs to be achieved next for learning to continue (Assessment Reform Group, 1999; 2002).

The purpose of AOL is to report on the achievement and attainment of pupils at a given time or stage of their learning (Harlen 2007); the term summative assessment is often used. Summative assessment refers to measuring the sum total of learning at a given point in time. It uses end of topic, end of key stage or end of qualification assessments to provide data which can be used to evaluate learning or to compare performance against national standards. These comparisons can be made in various ways, such as comparison to peers (normative assessment), to pupils' past achievements (ipsative) or to set criteria (criterion referenced, e.g. National Curriculum level descriptors) (Riding and Butterfield 1990).
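To make the three reference frames concrete, the short Python sketch below scores one hypothetical pupil against each frame in turn. All marks, peer scores and level boundaries here are invented for illustration; only the three-way comparison itself comes from the definitions above.

```python
# Toy illustration of the three reference frames for one summative result.
# All marks, peer scores and level boundaries are hypothetical.
score = 64                                     # this pupil's mark now
previous_score = 55                            # the same pupil's last mark
cohort_scores = [71, 70, 66, 64, 58, 52, 48]   # marks across the peer group
level_boundaries = {"level 5": 60, "level 4": 45, "level 3": 30}

# Ipsative: compared with the pupil's own previous achievement.
ipsative_gain = score - previous_score         # +9 marks of personal progress

# Normative: compared with peers (here, rank within the cohort).
rank = sorted(cohort_scores, reverse=True).index(score) + 1
normative = f"ranked {rank} of {len(cohort_scores)}"

# Criterion referenced: compared with set criteria (e.g. level descriptors).
criterion = next(
    (level for level, cutoff in sorted(level_boundaries.items(),
                                       key=lambda kv: -kv[1])
     if score >= cutoff),
    "below level 3",
)

print(ipsative_gain, "|", normative, "|", criterion)
# 9 | ranked 4 of 7 | level 5
```

The same mark thus tells three different stories: solid personal progress, a mid-table ranking, and a secure level 5 against the set criteria.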

Clearly, assessing achievement encompasses ipsative assessment: comparing pupils' progress from previous marks and assessments to their latest work. This can be done at learner, school and LA level. Students can record end of unit grades to understand and identify the progress they have made (this can be linked to AFL); the school and Local Authority (LA) can use the assessment as a performance indicator, i.e. has the child made the expected three levels of progress, is the school adding value to the child. Adding value is an essential consideration: even though a child might not compare well to national averages, the child may have made significant improvement in particular areas, demonstrating great ipsative gains. For a school in a deprived area with fourth generation unemployment (such as my second school) this can show teachers, students and the LA that progress and improvements are being made, even if these improvements are not reflected in the school's exam results or league tables.

Normative assessment is the comparison of students to their peers (Browning 1997). Normative assessment is used frequently by many in education: by schools to place students in pathways / sets, by teachers to differentiate work and provide challenge, and by the LA / Government to compare schools and groups of students. Normative assessment requires students to be taking the same assessment. Potential problems arise via alternative / vocational qualifications, or via teacher judgement and discretion over which exams students in their class may sit if the examination is not a school-wide focus or policy.

Criterion referenced assessment is a common feature of most summative / AOL tasks now employed. Students are marked and placed against given criteria, a set benchmark; this might be the mark scheme for a GCSE, or National Curriculum level descriptors correlated to a specific year group for key stage 3 (National Curriculum tests). Criterion referenced assessments are believed to be much fairer and more objective than normative assessment (Dunn et al 2002), as all students are measured against the given criterion. This however presents both positive and negative features for those involved. Teachers become familiar with the content to cover, allowing adaptation of teaching and experimentation to find best practice and ideally improve learning. Students can be made aware of the standards and expectations they need to meet (linking with AFL), enabling focus and attention to be paid to particular areas. Both teachers and students can familiarise themselves with grade boundaries and the nature of the questions asked. Finally, at LA or Government level an objective comparison can be made from school to school, town to town or demographic to demographic. Alas, the positives can have a negative side. Teachers and students may teach and learn to the test, resulting in a poor understanding of the subject but excellent marks being achieved by 'coached' pupils. This can lead to inaccurate or insignificant evaluations being made by the LA or Government and wrong inferences being drawn.

Teaching to the test is something of a topical issue within key stage 2 and 3 education, with many schools choosing not to enter students into Standard Assessment Tests (SATs) at the end of key stages 2 and 3 because of the aforementioned issues. As mentioned, 'coaching' for a test can mask weakness in subject knowledge, leading to overinflated expectations of a pupil's ability and causing problems for teachers and senior leaders. Firstly, teachers must retest or find true baseline information to enable effective provision and setting of work; later, complications can be created by students having over-predicted grades or potential. That is an obvious concern for senior leadership, as school results may be affected by a child not achieving their predicted potential, a potential which may have been unrealistic before intervention and provision. For senior leaders at key stages 2 and 4 the results could produce a rather meaningless league table, affecting reputation and intake. Such problems call for alternative kinds of testing. The Middle Years Information System (MidYIS) test is now frequently used. MidYIS tests are sat with no prior preparation and are designed to "test ability and aptitude for learning rather than achievement" (http://www.cemcentre.org/midyis). The tests are available for students at the end of key stages 2 and 3 and have good correlations with attainment. The test measures a whole variety of skills including reading, writing, maths and understanding, alongside skills such as visualisation, block counting and spatial awareness. As no prior preparation is involved, teaching to the test is removed.

Many schools now deploy assessment points throughout the year to monitor the progress of pupils. There are many ways of collecting this data to monitor progression. Some schools may use the discretion of the teacher to produce a level based on their professional judgement; this can be done for academic ability / achievement and also for personal attributes such as effort. Alternatively, data can be produced from blanket assessments across faculties for particular year groups, or again at the discretion of teachers providing an assessment for their particular classes. Both approaches have merits and drawbacks to be considered before proceeding with a particular methodology.

Indeed, at key stage 4, summative assessments are mainly made by or with reference to the qualifications being studied by pupils. Awarding bodies supply the criteria to judge students objectively within cohorts studying the same qualification; however, differences between qualifications' aims are vast. There is extensive provision available for young people in an attempt to ensure every child can achieve, make a positive contribution and ultimately develop and gain employment. Depending on demographics and catchment, the provision made by the school will vary greatly. A more affluent catchment area, consisting of employed families with a higher regard for education, is much more likely to encourage academic achievement and qualifications, such as triple award science and GCSE qualifications, providing a more academic qualification base rather than a more skills-based, practical one. From limited experience these differences have been evident, with a more affluent lead school offering a wider range of academic qualifications and GCSEs compared to a more deprived second school which provided a whole host of alternative qualifications such as BTECs, OCR Nationals or applied GCSEs. Indeed this shows engagement and choice from students, but it also shows the judgements which must be made by schools to match students to appropriate qualifications and assessments.

Due to the nature of KS4 examinations, teaching to the test is less of an option. Skills-based, practical qualifications often require students to produce evidence of meeting benchmark specifications or of gaining experience. The more academic qualifications ask questions in a variety of ways, including open questions, forcing students to understand their subjects. There is some discrepancy and controversy in the data produced at the end of key stage 4 due to GCSE-equivalent qualifications. Schools currently use equivalents as part of the data produced for key stage 4 A*-C grades, with many arguing that equivalents do not have the same rigour as GCSEs, an issue currently being examined at Government level by Professor Alison Wolf on behalf of the Department for Education.

The data produced at the end of key stage 4 via summative assessment plays a vital role in measuring school performance. RAISEonline provides "analysis of school and pupil performance data" with the aims of "enabling schools to better self-evaluate, providing common data analysis for schools, LAs, inspectors and school improvement partners, and better supporting teaching and learning"; it enables schools "to examine context, attainment and value added data - explore hypotheses about pupil performance and moderate pupil targets" (https://www.raiseonline.org/About.asp). The subjects mainly covered by RAISEonline are English, Maths and Science, with some information on all GCSE content at the end of key stage 4. The document provides a comparison of the school relative to national expectations, covering not only attainment but also context and demographics. It is possible to see the percentage of pupils on free school meals (indicating deprivation), looked-after children, the percentage of pupils from ethnic minority backgrounds or those with Special Educational Needs status, all of which bring varied challenges for schools. This allows schools to be placed into context in terms of pupils, catchment and demographics. Prior attainment can also be considered. Those schools with a higher than average number of pupils achieving a level 4 (on average), and a lower than average number achieving a level 5, at the end of key stage 3 will find it more challenging to achieve a high percentage of A*-C grades (much like the second school experience). This may lead to more intervention for a larger number of students on the C/D borderline to help increase attainment levels.

RAISEonline uses an average point score (each grade given an equivalent number to allow for statistical analysis) to give a picture of the attainment of pupils of all abilities. This can be used as mentioned above, or it can be used to identify groups which may need more assistance; e.g. a high score for GCSE points but a low A*-C count indicates that many students received GCSE or equivalent qualifications but only obtained four or fewer A*-C grades rather than five. This may suggest that middle ability students need to be stretched further to achieve a C, or that equivalent opportunities could be better promoted, as other courses on average attain more C grades. At the lead school, the average point score for English, Maths and Science is in line with the national average but the A*-C score is higher; this might suggest that middle ability students receive more focus in the core subjects, with optional subjects being better resourced and targeted at obtaining higher grades.
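To make the point-score idea concrete, the Python sketch below computes an average point score and the five-or-more A*-C measure for one hypothetical pupil. The grade-to-point tariff is illustrative of the style of scale used, not an authoritative DfE mapping; the example simply shows how a pupil can hold a respectable point score while still missing the 5+ A*-C threshold.

```python
# Minimal sketch: average point score and the "five or more A*-C" measure.
# The grade-to-point tariff is illustrative, not an authoritative DfE mapping.
POINTS = {"A*": 58, "A": 52, "B": 46, "C": 40, "D": 34, "E": 28, "F": 22, "G": 16}

def average_point_score(grades):
    """Mean points per qualification for one pupil."""
    return sum(POINTS[g] for g in grades) / len(grades)

def has_five_a_star_to_c(grades):
    """True if the pupil holds at least five grades at C or above."""
    return sum(1 for g in grades if POINTS[g] >= POINTS["C"]) >= 5

# A pupil whose point score equals a grade-C average (40.0) but who holds
# only four grades at C or above, so misses the headline 5+ A*-C measure.
pupil = ["A", "B", "C", "C", "D", "D", "D"]
print(average_point_score(pupil))   # 40.0
print(has_five_a_star_to_c(pupil))  # False
```

This is exactly the pattern described above: the two measures can diverge, and each divergence points to a different intervention.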

The final piece of RAISEonline data I will consider is the Contextual Value Added (CVA) score. CVA is a prediction of attainment that accounts for pupil background, prior attainment, and demographic or situational challenges (as mentioned). If a school has a high CVA score (as with the second school) it would suggest that the school is making very good progress with students; this may not be reflected in the average point score or A*-C grades, as the catchment area of pupils will limit that potential. A high CVA score suggests that pupils are making more progress than predicted in comparison to the national average. If this is not the case, further exploration can be undertaken to ascertain the areas which need more attention and support to improve attainment, thus informing the school development plan, a document which should constantly evolve with the use of RAISEonline data. Over the next few months and years RAISEonline itself is likely to be adapted and changed as the current Government reviews how schools and education are assessed. The aforementioned Wolf Review will focus on vocational qualifications to determine which are sufficiently rigorous and beneficial to students and the economy. The recently released White Paper, The Importance of Teaching, is set to change how schools are assessed, with the inclusion of Science alongside English and Maths as a floor target. The paper also underlines the value of GCSEs and a movement away from vocational qualifications, potentially rendering such vocational or alternative qualifications outdated and ineffectual.
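The value-added principle behind CVA can be sketched very simply: predict each pupil's key stage 4 outcome from prior attainment, and treat the gap between actual and predicted outcomes as value added. The real CVA model was a multilevel statistical model with many contextual factors; the Python sketch below, with entirely hypothetical numbers and a made-up linear prediction, only illustrates the residual idea.

```python
# Toy illustration of the value-added idea behind CVA: predict each pupil's
# key stage 4 points from prior attainment, then treat the residual
# (actual - predicted) as value added. The real CVA model was a multilevel
# model with many contextual factors; this only shows the principle.
pupils = [
    {"prior": 27, "actual": 320},   # hypothetical prior-attainment and KS4 points
    {"prior": 30, "actual": 355},
    {"prior": 24, "actual": 310},
]

def predicted_ks4(prior):
    # Made-up linear relationship between prior attainment and KS4 points.
    return 10.0 * prior + 40.0

residuals = [p["actual"] - predicted_ks4(p["prior"]) for p in pupils]
school_value_added = sum(residuals) / len(residuals)

print(residuals)           # [10.0, 15.0, 30.0] - each pupil beat the prediction
print(school_value_added)  # ~18.3 - a positive mean suggests value is being added
```

On this view, a deprived school whose raw A*-C figures look weak can still show strongly positive residuals, which is precisely the case made above for the second school.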

Through this exploration of summative assessment, there have been improvements in my knowledge of the range of assessments (Q12, appendix 2), which has incorporated the assessment requirements for those I will teach and their qualifications (Q11, appendix 1). The range is huge, and importance must be placed on suiting the student to the qualification, considering the rigour of the qualification, the strengths of the student and their aspirations. This is especially important for student attainment and achievement, and also for school performance. With so much of the data produced in summative assessment being used for statistical analysis (Q13, appendix 3), its importance must not be undervalued. Having accurate information on student ability, school context and issues can only be a benefit for teachers, an advantage that ought to be maximised. The information can be used coherently within the teaching and learning process, raising standards and levels of attainment.

The second strand of assessment, Assessment for Learning (AFL), is very different to AOL. AFL is more concerned with the process of collecting information from pupils so that both they and the teacher can identify the current level of learning taking place, and therefore highlight what must be achieved next for learning to continue, rather than assessing the learning that has taken place (Assessment Reform Group, 1999; 2002). The term formative assessment is commonly used in place of AFL, described by Bell and Cowie (2001) as "assessment which is intended to enhance teaching and learning".

An analogy of making soup illustrates the difference most plainly: while a soup is being prepared, the cook tastes the soup, adjusting the amounts of ingredients and identifying what must be added; this is formative assessment. When the soup is served and tasted by the customer, that is summative assessment (Guskey 2000, cited in Lund and Tannehill 2010, p. 86). AFL is usually an informal process, embedded in all facets of teaching and learning (Black et al 2003). As figure 1 illustrates, formative assessment is best described as a continuing process that interlinks with other elements such as planning, teaching and learning (Casbon and Spackman, 2005).

Figure 1. The plan-teach-learn-assess cycle (cited in Bailey, 2001, p. 141)

The cycle illustrated reflects the constant approach teachers need in order to achieve best practice. Assessment must be used for the teacher to glean knowledge of ability and to set appropriately challenging work. This clearly links to planning to organise such work, allowing effective teaching and progressive learning. As teachers build this knowledge of the group, they often require baseline data to support their own judgements. Diagnostic Assessment (DA) is often used; it involves teachers examining pupils' progress against given criteria (a seemingly summative task), allowing them to become informed about their next steps in planning for effective learning (an essentially formative task). There is a question over whether DA forms part of formative assessment, part of summative assessment, or is an individual, separate entity. For me, due to its very nature of ascertaining prior knowledge, strengths, weaknesses and skills to inform planning, it is an integral part of the assessment for learning process. Indeed, DA may be an overlapping section of the two strands of assessment, completed in a more summative or formative way depending mainly on teaching style and preference.

Upon completion of DA, the continual cycle mentioned above will become an inherent focus for the teacher. As emphasised by O'Neill & Ockmore (2006), assessment should not be viewed as a separate entity, but rather as an embedded principle, as the other elements rely on it to develop and improve effectively. Using the levels ascertained in summative or diagnostic assessment, the teacher can then improve learning and future attainment. An important paper by Black and Wiliam (1998), "Inside the Black Box: raising standards through classroom assessment", showed formative assessment to have a pivotal role in raising standards, particularly when students are actively involved in the assessment process and the results of assessments are used to inform planning. This information is invaluable to teachers. As shown in the learning cycle, assessment is vital to inform planning and so facilitate effective teaching and learning. By including students in the process, and focusing on quality of learning and feedback, educational and learning standards can be raised. Additionally, Black and Wiliam (1998) explored developmental areas for improvement, providing some indication of the evidence supporting particular improvements in formative assessment technique and indicating that the process requires further refinement.

The Assessment Reform Group (ARG) attempted to provide such innovations; as a follow-up to Inside the Black Box, the ARG produced Assessment for Learning: Beyond the Black Box (1999), identifying five key factors in effective AFL:

providing effective feedback to pupils;

actively involving pupils in their own learning;

adjusting teaching to take account of the results of assessment;

recognising the influence of assessment on pupil motivation and self-esteem, both of which are crucial to learning;

considering the need for pupils to be able to assess themselves and to understand how to improve.

In practice, for teachers, these factors clearly point to helping students understand what good learning or work looks like; students can then identify their level of learning, areas to develop and how to improve. The ARG (1999) also identified risk factors with regard to assessment, factors which undermine the AFL process and should be avoided; these include:

an emphasis on quantity and presentation rather than valuing quality of learning;

lowering confidence / self-esteem by focusing on judgements and therefore not providing advice for improvement;

providing feedback that serves managerial / social purposes rather than helping pupils learn more effectively;

working without a sufficient understanding of pupils' learning needs.

Building on the ARG's work and their own work in developing formative assessment, Black and Wiliam et al (2002) produced Working Inside the Black Box: Assessment for Learning in the Classroom, hoping to pick up where they had left off and further develop AFL pedagogy under four main headings:

Questioning

To develop teachers' questioning skills: asking important questions, allowing thinking and response time for students, having follow-up activities that are meaningful, and lastly, only asking questions to which the teacher requires an answer or about which the students need to think. These are fairly simple points and an idea which may be considered and introduced into teaching practice quickly and successfully, improving teaching and learning with a fairly immediate effect.

Peer and Self Assessment

Criteria for evaluating learning must be shared with, and be clear to, pupils, thus facilitating a clear overview of the goals of the task and what it means to complete it effectively. Pupils should be taught the habits and skills of self and peer assessment so that they keep in mind the goals of the task and assess progress as they move forward. This will arguably allow pupils to develop their learning in a way which cannot be achieved by other means. In practice this means sharing the learning objectives and outcomes of lessons and activities and making self and peer assessment a routine process. Revisiting the objectives of learning and evaluating success can then be used to guide learning, both by the teacher and by the student, promoting independence.

Feedback Through Marking

Written tasks should encourage pupils to develop and show knowledge of the key features of the subject studied. Written feedback should identify areas of strength, areas for improvement and how to make that improvement, and additionally provide the opportunity for the improvements to be made. Finally, to be effective, feedback should stimulate thinking. In an area often rushed, it is clearly invaluable for teachers to consider their commentary. Providing effective feedback empowers the student and allows for independent improvement; indeed, opportunities for learners to make the improvements are vital. The effective feedback strategy is one which is often closely linked to peer and self assessment. Having a consistent approach with clear criteria, making assessment and feedback a habitual process, will help all feedback to be thought about and acted upon by the learner.

Formative Use of Summative Tests

Summative tests should be seen as a positive part of the learning process. Pupils should be engaged in a reflective review of the work they have done so that they can revise effectively. This can and should involve students setting and marking questions against given criteria, to help them understand how the assessment process works and how to improve. This can be linked tightly to peer and self assessment and effective feedback, once again promoting students' ownership of their learning.

It is important that teachers consider the formative assessment process and apply its ideas effectively and frequently. Teaching and learning go hand in hand, emphatically underlining the need for an open, clear process which requires the student to think and reflect at every possible opportunity.

At the lead school a variety of activities have been employed as part of the formative assessment process, many of which reflect the aforementioned principles and features of AFL. There is, however, clear scope for improvement, especially in providing transparent criteria for students to understand in completing their work. This is not teaching to the test as discussed under summative assessment; rather, it is an understanding of what characterises excellent work and where the expectations lie. To improve practice it is important to habitually make use of the features of good AFL; this will help students become very aware of the learning process and their own learning needs, whilst allowing teachers to facilitate the learning to maximal effect and efficiency.

Through my exploration of AFL there have been vast improvements to my pedagogy and practice of assessment. The formative assessment process has immense importance in ensuring effective teaching and learning (Q12, appendix 2). The AFL process can encapsulate all forms of assessment to make their purpose more beneficial to the student, enhancing the learning cycle and thus teaching and learning. Statistical data and national information (Q13, appendix 3) can be used to inform the AFL process; likewise AFL may be used to improve the summative process. In my opinion, AFL is one of the most important aspects of classroom practice. It allows deeper knowledge and understanding of students and their ability, both by the student themselves and by the teacher, leading to better planning, teaching and learning. This can only better serve the student and facilitate improvements in attainment.

An area for development at the lead and second schools was the recording of AFL. Formative assessment can provide a more holistic picture of a child, their learning journey and their performance compared to summative assessment. It removes individualised reactions to the test situation and the pressure felt at that time. Additionally, recording the formative assessment process enables teacher reflection on the learning that has taken place, ensuring all range and content is covered whilst facilitating better planning and improving the productive learning cycle. This is an area which has had little focus but could play an essential part in best practice for teaching and learning.

A particularly important area for me is to ensure improved links between the formative and summative processes. The summative process must be used in a more involved, learning-centred way, stimulating thought and encompassing more AFL ideology, thus making the learning journey more meaningful and interesting for students and ideally promoting confident, motivated, independent, lifelong learners, all of whom reach their full potential.

In conclusion, all forms of assessment have merits and advantages. It is nevertheless the skill and understanding of the teacher, in choosing the assessment best suited to the task, the student's ability and the goals of learning, which is most essential. The assessment process must be clear and informative for the student, allowing for thought and reflection, thus facilitating higher standards and superior learning.

Appendices

Appendix 1

Q11 - Know the assessment requirements and arrangements for the subjects/curriculum areas in the age ranges they are trained to teach, including those relating to public examinations and qualifications.

Appendix 2

Q12 - Know a range of approaches to assessment, including the importance of formative assessment.

Appendix 3

Q13 - Know how to use local and national statistical information to evaluate the effectiveness of their teaching, to monitor the progress of those they teach and to raise levels of attainment.
