
April 2017; Volume 3; Issue 1; Pages 11-14

EVALUATION OF UNIVERSITY QUESTION PAPERS IN MICROBIOLOGY FOR MBBS STUDENTS.

Asima Banu1, Presteena Mathew2

1Professor, Department of Microbiology, Bangalore Medical College and Research Institute, Bengaluru.
2Postgraduate Student, Department of Microbiology, Bangalore Medical College and Research Institute, Bengaluru.

Corresponding Author:
Dr. Asima Banu,
Professor,
Department of Microbiology
Bangalore Medical College and Research Institute,
Bengaluru, India.
E-mail: asima.banu@gmail.com

ABSTRACT

BACKGROUND

Assessment is a very important component of education. To correctly judge the knowledge and skills of learners, assessment should be valid and should test the appropriate levels of cognition.

The objective of the current study was to analyse the quality of question papers in undergraduate summative assessment in Microbiology based on content validity, levels of cognition, repetition of questions and mark distribution.

MATERIALS AND METHODS

A quantitative observational study of Microbiology theory question papers from 2011 to 2015 (5 years) was done. Content validity was evaluated by calculating weightage, and levels of cognition were analysed using the Modified Bloom's taxonomy.

RESULTS

Analysis of the question papers for the cognitive domain showed that 8% of the questions tested factual recall (Bloom Level I), 11% tested data interpretation (Bloom Level II) and 2% tested critical evaluation (Bloom Level III). The remaining 79% of the questions did not belong to any cognitive domain. We also found fewer questions on Applied Microbiology than on the other sections.

CONCLUSION

Accurate assessment of students' knowledge is essential, especially in medical education. For this, question paper setters should follow the university guidelines and ensure that the weightage for different topics is maintained.

Keywords

Microbiology, MBBS, Content Validity, Medical Education.

How to cite this article

Banu A, Mathew P. Evaluation of university question papers in microbiology for MBBS students. Journal of Evolution of Research in Medical Microbiology 2017; Vol. 3, Issue 1, Jan-June 2017; Page: 11-14.

BACKGROUND

One of the most important tasks of a teacher is to find out how much the students have learned. This process is called assessment.1

Medical education is unique in terms of teaching and assessment. Assessment, the process of testing a student's ability or skill, is a very important component. It can be carried out through examinations (written or oral) or other informal methods.2

There are three broad types of assessment instruments used in assessing medical students in Microbiology: written examinations, practical examinations and viva voce. Written examination questions are central to summative assessment in medical schools.

There is a prescribed structure and protocol for question setting in medical education programmes, but the extent to which it is followed varies. At present, questions are often prepared casually just before the examination and are not put through any quality check. This can lead to confusion and misunderstanding of questions by students.

Framing of questions should be such that assessment becomes valid and assesses all the levels of cognition. The validity of a test is the degree to which the test measures what it is supposed to measure.3 Without validity, assessments in medical education have little or no meaning. There are five categories of validity: Content, Concurrent, Predictive, Construct and Face validity.4 At a minimum, content validity should always be established, regardless of the type of assessment tool used.5 Content validity is a measure of the degree to which the assessment contains a representative sample of the material taught in the course. It is an important consideration in examinations intended to correctly judge the knowledge and skills of learners.

Objectives

The study is an attempt to analyse the quality of question papers in undergraduate summative assessment in Microbiology based on content validity, levels of cognition, repetition of questions and mark distribution.

 

MATERIALS AND METHODS

This study was conducted at a medical college in South India. The Health University of this state conducts summative assessments twice a year for the undergraduate medical students in Microbiology. The students are judged by their performance in written examination, viva-voce and practical examination. The written examination has two papers- Paper I and Paper II, each carrying 100 marks.

Paper I includes questions on General Bacteriology, Immunology and Systemic Bacteriology, and Paper II contains questions on Virology, Parasitology, Mycology and Applied Microbiology.

 

Each Paper has a Total of 22 Questions, Comprising
  1. 2 Long essay questions- 10 marks each.
  2. 10 Short essay questions- 5 marks each.
  3. 10 Short answer questions- 3 marks each.

 

Data Collection and Compilation

A quantitative observational study was done of the Microbiology theory question papers from 2011 to 2015 (5 years) for the second-year MBBS students affiliated to the Health University. Since each examination has two papers, a total of 20 papers was analysed using descriptive statistical analysis. Each paper has 22 questions; hence, 440 questions were analysed across the 20 papers.

 

Content Validity

Content validity was evaluated by calculating weightage. Two parameters were used to calculate weightage- Impact and Frequency.6

Impact refers to the perceived importance of a topic in terms of its impact on health.6 It is scored from 1 to 3 as follows-

1 = Low clinical relevance.
2 = Moderate clinical relevance.
3 = High clinical relevance.

Frequency refers to the occurrence of a particular disease or health problem.6 It is also scored from 1 to 3-

1 = Rarely seen.
2 = Relatively common.
3 = Very common.
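To make the weightage calculation concrete, the following minimal sketch (our own illustration in Python, not a tool used in the study) computes W = (i*f)/t and the derived question and mark allocations for Paper I, using the impact and frequency scores reported in Table 2:

```python
# Minimal sketch of the weightage calculation: W = (i * f) / t, where t
# is the sum of impact x frequency over all topics in a paper; expected
# questions = W * 22 and marks = W * 100.

paper1 = {
    "General Bacteriology": (2, 2),   # (impact i, frequency f), from Table 2
    "Immunology": (3, 2),
    "Systemic Bacteriology": (3, 3),
}

t = sum(i * f for i, f in paper1.values())  # t = 19 for Paper I

for topic, (i, f) in paper1.items():
    w = (i * f) / t
    print(f"{topic}: W = {w:.4f}, questions ~ {w * 22:.1f}, marks ~ {w * 100:.1f}")
```

Rounded to whole numbers, these values agree with the entries in Table 2 (e.g. about 5 questions and 21 marks for General Bacteriology).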

 

Levels of Cognition

Each question was evaluated to determine its domain and level of cognition. The verbs used in the questions, corresponding to verbs for writing behavioural objectives, were used to categorise the questions into the various domains. From the simplest to the most complex, these are: Knowledge, Comprehension, Application, Analysis, Synthesis and Evaluation. According to the Modified Bloom's hierarchy of cognitive learning, Level I comprises the Knowledge domain, Level II the Comprehension and Application domains, and Level III the Analysis, Synthesis and Evaluation domains.7

Level I- Covered knowledge and recall of information.

Level II- Covered comprehension and application: understanding and the ability to interpret data.

Level III- Covered problem solving and the use of knowledge and understanding in new circumstances.

This classification was done by checking the verbs for use in writing behavioural objectives (Table 1).
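As an abbreviated, hypothetical sketch of this classification step (the verb sets below are a small subset of Table 1; the study itself classified questions manually), a question's leading verb can be mapped to a level as follows:

```python
# Illustrative sketch (ours, not the study's instrument): assign a
# Modified Bloom level from a question's leading verb, using a small
# subset of the Table 1 verb lists.

LEVEL_VERBS = {
    "Level I": {"define", "list", "recall", "name", "state", "label"},
    "Level II": {"explain", "describe", "interpret", "apply", "classify"},
    "Level III": {"analyse", "compare", "justify", "design", "evaluate"},
}

def bloom_level(question: str) -> str:
    # Treat the first word of the question as the behavioural verb.
    first = question.strip().split()[0].lower().strip(".,:;")
    for level, verbs in LEVEL_VERBS.items():
        if first in verbs:
            return level
    return "Unclassified"  # no behavioural verb, like 79% of questions here

print(bloom_level("Define sterilisation."))                         # Level I
print(bloom_level("Describe the lifecycle of Plasmodium vivax."))   # Level II
print(bloom_level("Write notes on dengue virus."))                  # Unclassified
```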

 

RESULTS

Analysis of the question papers for the cognitive domain showed that 8% of the questions tested factual recall (Bloom Level I), 11% tested data interpretation (Bloom Level II) and 2% tested critical evaluation (Bloom Level III). The remaining 79% of the questions did not belong to any cognitive domain, creating ambiguity (Fig. 1).

Weightage was calculated using the Impact and Frequency scores. From this, the expected number of questions and marks for each topic were calculated and compared with the mark distribution prescribed by the Health University. The results are shown in Tables 2 and 3.

As seen in the tables, the marks allotted by the university correlate with the calculated marks for General Bacteriology, Immunology, Systemic Bacteriology and Virology. There is a slight variation in the marks allotted for Mycology, Parasitology and Applied Microbiology, which should be rectified.

 

Distribution of Number of Questions in Various Topics across the Years

The number of questions asked in each topic across the 20 papers was analysed. The questions asked in each paper were compared with the calculated number of questions for each section according to Tables 2 and 3. The methodology was adapted from Sultana et al. 2009.8 The results are presented in Table 4 and Figure 2.

Questions were checked for repetition, and less than 1% repetition was noted across all the papers.
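A minimal sketch of such a repetition check, assuming simple text normalisation (the paper does not describe its exact matching procedure), is shown below:

```python
# Hypothetical sketch of the repetition check (the study reports the
# result but not its exact matching procedure): normalise question text
# and count questions that recur across papers.
from collections import Counter

def normalise(question: str) -> str:
    # Lower-case and collapse whitespace so trivial variants match.
    return " ".join(question.lower().split()).rstrip(".?")

def repetition_rate(papers: list[list[str]]) -> float:
    counts = Counter(normalise(q) for paper in papers for q in paper)
    total = sum(counts.values())                        # 440 questions here
    repeated = sum(c for c in counts.values() if c > 1)
    return 100.0 * repeated / total                     # % of repeated questions

# Toy example: two papers sharing one question.
rate = repetition_rate([["Define antigen."],
                        ["Define antigen.", "Classify dermatophytes."]])
print(f"{rate:.1f}%")  # -> 66.7%
```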

Level I (Knowledge): Define, List, Recall, Name, Recognise, State, Repeat, Record, Label, Diagnose, Tell.

Level II (Comprehension): Transform, Convert, Distinguish, Estimate, Extrapolate, Manage, Deduce, Outline, Discuss, Describe, Explain, Identify, Translate, Restate, Recognise, Express, Locate, Report.

Level II (Application): Schedule, Sketch, Prepare, Modify, Predict, Examine, Classify, Invent, Generate, Compute, Demonstrate, Illustrate, Operate, Perform, Interpret, Apply, Employ, Use, Practice.

Level III (Analysis): Diagram, Inspect, Question, Relate, Solve, Distinguish, Analyse, Differentiate, Compare, Contrast, Categorise, Appraise, Calculate, Test, Criticise.

Level III (Synthesis): Assemble, Collect, Construct, Create, Organise, Diagnose, Propose, Design, Manage, Hypothesise, Summarise, Compose, Plan, Formulate, Arrange.

Level III (Evaluation): Choose, Estimate, Measure, Argue, Decide, Evaluate, Compare, Assess, Justify, Judge, Appraise, Rate, Revise, Score, Select.

Table 1. Verbs for Use in Writing Behavioural Objectives5

Topic                 | Impact (i) | Frequency (f) | i*f | Weightage W = (i*f)/t | Questions (W*22) | Marks, our Study (W*100) | Marks, Health University
General Bacteriology  | 2          | 2             | 4   | 0.2105                | 5                | 21                       | 20
Immunology            | 3          | 2             | 6   | 0.3157                | 7                | 31                       | 30
Systemic Bacteriology | 3          | 3             | 9   | 0.4736                | 10               | 47                       | 50
Total (t)             |            |               | 19  |                       |                  |                          |

Table 2. Weightage and Mark Distribution for Topics under Paper I

Topic                | Impact (i) | Frequency (f) | i*f | Weightage W = (i*f)/t | Questions (W*22) | Marks, our Study (W*100) | Marks, Health University
Virology             | 3          | 3             | 9   | 0.3913                | 8                | 39                       | 40
Mycology             | 2          | 2             | 4   | 0.1739                | 4                | 17                       | 10
Parasitology         | 3          | 2             | 6   | 0.2608                | 6                | 26                       | 40
Applied Microbiology | 2          | 2             | 4   | 0.1739                | 4                | 17                       | 10
Total (t)            |            |               | 23  |                       |                  |                          |

Table 3. Weightage and Mark Distribution for Topics under Paper II

Paper                                    | General Bacteriology | Immunology | Systemic Bacteriology | Virology | Mycology | Parasitology | Applied Microbiology
Expected no. of questions, as per our Study | 5                 | 7          | 10                    | 8        | 4        | 6            | 4
2011 June                                | 4                    | 7          | 11                    | 10       | 2        | 9            | 1
2011 Dec.                                | 5                    | 7          | 10                    | 9        | 2        | 10           | 1
2012 June                                | 6                    | 6          | 10                    | 8        | 5        | 9            | 0
2012 Dec.                                | 6                    | 6          | 10                    | 8        | 3        | 10           | 1
2013 June                                | 6                    | 5          | 11                    | 8        | 2        | 10           | 2
2013 Dec.                                | 7                    | 5          | 10                    | 10       | 3        | 9            | 2
2014 June                                | 8                    | 5          | 9                     | 9        | 5        | 6            | 1
2014 Dec.                                | 7                    | 5          | 10                    | 10       | 5        | 6            | 2
2015 June                                | 5                    | 6          | 11                    | 9        | 3        | 7            | 2
2015 Dec.                                | 6                    | 6          | 10                    | 9        | 3        | 8            | 2

Table 4. Distribution of Number of Questions in Various Topics across the Years

 

 

Figure 1. Analysis using Modified Bloom’s Taxonomy


Figure 2. Distribution of Number of Questions in Various Topics across the Years


DISCUSSION

Medical education is undergoing transformation in both teaching and assessment. Every effort should be made to use learner-oriented methods that cultivate logical thinking, clarity of expression, independence of judgement, scientific habits, problem-solving skills and self-directed learning. Assessment methods, including written examinations, should be designed to test the analytical and problem-solving capacities of students and not just factual recall. It was noticed that students who took summative assessments were not satisfied with the question paper patterns; hence, this evaluation was done to assess the scientific basis of paper setting.

Assessment has a dominant impact on learning. Many studies have shown that it is the evaluation system, rather than the educational objectives or curriculum, that most deeply influences what students eventually learn. Hence, for fruitful learning of the scheduled objectives, assessment must be designed to drive learning towards them.

Bloom's taxonomy divides educational objectives into three domains: Cognitive, Affective and Psychomotor.5 In the present study, all questions were classified according to the Modified Bloom's taxonomy into Levels I, II and III. Level III is considered the most important in medical education, as it concerns problem-solving skills. In our study, 8% of questions belonged to Level I, 11% to Level II and 2% to Level III. The majority of the questions (79%) did not belong to any cognitive domain, which leaves both the evaluator and the student in confusion. VinodKumar et al5 found similar results, with 8% at Level I and 10% at Level II, but 15% at Level III, which was much higher than our finding.

Another key feature of assessment is content validity. In our study, we found fewer questions on Applied Microbiology and clinical correlation than on the other sections. Content validity ensures that the examination covers the broad range of areas within the concept under study. Not everything can be covered, so items need to be sampled from all domains; this may require a panel of "experts" to ensure that the content area is adequately sampled.

There was little repetition of questions across the papers, which indicates that the question paper setters were vigilant and the rules were being followed.

In the present study, the Microbiology question papers evaluated were in agreement with the Health University guidelines with respect to syllabus, mark distribution and hours allotted, with a few variations in the marks for different sections.

CONCLUSION

Accurate assessment of students' knowledge is essential, especially in medical education. For this, question paper setters should follow the university guidelines and ensure that the weightage for different topics is maintained. Focus should be on assessing the problem-solving ability of students rather than mere recall of information. If this is done, we can ensure better quality of assessment and better outcomes from the examinations.

 

REFERENCES

  1. Abbatt FR. Teaching for better learning: a guide for teachers of primary health care staff. 2nd edn. Geneva: World Health Organisation 1989:107-9.
  2. Rohin G, Dhiraj S, Sushila S, et al. Analytical study of written examination papers of undergraduate anatomy: focus on its content validity. IJBAMR 2013;2(8):1110-16.
  3. McAleer S. Choosing assessment instruments. In: Dent JA, Harden RM, eds. A practical guide for medical teachers. Edinburgh: Churchill Livingstone 2001:303-13.
  4. Shumway JM, Harden RM. AMEE guide no. 25: the assessment of learning outcomes for the competent and reflective physician. Med Teach 2003;25(6):569-84.
  5. VinodKumar CS, Suneeta K, Lakshmi RP, et al. Descriptive analysis of the MBBS microbiology question papers of RGUHS, Bengaluru. J Educational Res & Med Teach 2014;2(1):29-32.
  6. Patil SY, Gosavi M, Bannur HB, et al. Blueprinting in assessment: a tool to increase the validity of undergraduate written examinations in pathology. Int J Appl Basic Med Res 2015;5(1):76-9.
  7. Bloom B, Engelhart M, Furst E, et al. Taxonomy of educational objectives: the classification of educational goals. Handbook I: cognitive domain. New York, Toronto: Longmans, Green 1956.
  8. Sultana R, Shamim KM, Nahar L, et al. Content validity of written examinations in undergraduate anatomy. Bangl J of Anat 2009;7(1):14-8.

 



 
