An Investigation On The Usage Of Computer Based Test On The Performance Of Secondary Schools Students In Nigeria

CHAPTER TWO

REVIEW OF LITERATURE

INTRODUCTION

Although the primary uses of microcomputers in education are instructional and administrative, the expansion of computer technology has created many possibilities for computer applications in the area of testing and assessment. McBride (2005) anticipated large-scale applications of computerized testing as computers decreased in cost and became more available. Many important issues have to be considered when administering tests by computer. Among these are the equivalence of scores obtained in computerized testing compared with conventional paper-and-pencil tests, and the impact of computerization on the test-taker. This chapter discusses these issues as well as the current applications of the computer in testing, the advantages and disadvantages of computerized testing, and the effects of administering tests via the computer. The chapter intends to deepen the understanding of the study and close the perceived gaps. Specifically, the chapter will be considered under three sub-headings:

 Conceptual Framework

 Theoretical Framework

 Chapter Summary

2.1 CONCEPTUAL FRAMEWORK

COMPUTER

A computer is an electronic device that manipulates information, or data. It has the ability to store, retrieve, and process data. You may already know that you can use a computer to type documents, send email, play games, and browse the Web. You can also use it to edit or create spreadsheets, presentations, and even videos. A computer also has two components: hardware and software.

Hardware: is any part of your computer that has a physical structure, such as the keyboard or mouse. It also includes all of the computer's internal parts.

Software: is any set of instructions that tells the hardware what to do and how to do it. Examples of software include web browsers, games, and word processors.

APPLICATIONS OF THE COMPUTER IN TESTING

The computer is currently being used in many areas of testing and assessment. In addition to the already established uses of computers for test scoring, calculation of final grades and test score reporting, computers can also be used for the determination of test quality, test item banking and test assembly, as well as for test administration.

TEST AND ITEM ANALYSIS

Assessing test quality generally involves both item and test analysis. Classical statistics used to summarize item quality are based on difficulty and discrimination indices; these are calculated more easily and quickly with the use of the computer than by traditional hand methods. Items which have been inadvertently mis-keyed, have intrinsic ambiguity, or have structural flaws such as grammatical or contextual clues that make it easy to pick out the correct answer, can be identified and culled out. Such items are characterized by being either too easy or too difficult, and tend to have low or negative discrimination. Test analysis can also provide an overall index of reliability or internal consistency, that is, a measure of how consistently the examinees performed across items or subtests of items (Christine, 2011).
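To make the two classical indices described above concrete, here is a minimal sketch (not from the source; the response matrix and function names are invented for illustration) of how a computer can calculate item difficulty and an upper-lower discrimination index:

```python
# Illustrative sketch (invented data): classical item statistics computed
# from a scored response matrix, where rows are examinees and columns
# are items (1 = correct, 0 = incorrect).

def item_difficulty(responses, item):
    """Proportion of examinees answering the item correctly (the p-value)."""
    return sum(row[item] for row in responses) / len(responses)

def item_discrimination(responses, item):
    """Upper-lower index: difficulty among the top half of examinees
    (ranked by total score) minus difficulty among the bottom half."""
    ranked = sorted(responses, key=sum, reverse=True)
    half = len(ranked) // 2
    return item_difficulty(ranked[:half], item) - item_difficulty(ranked[-half:], item)

# Six examinees, three items.
data = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(item_difficulty(data, 0))      # 0.666... (4 of 6 answered correctly)
print(item_discrimination(data, 0))  # positive: item separates high and low scorers
```

An item flagged by the review above (too easy, too hard, or mis-keyed) would show a difficulty near 1.0 or 0.0, or a discrimination near zero or negative.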

ITEM BANKING AND TEST ASSEMBLY

Another important use of the computer in testing has been the creation and maintenance of an item pool. This is known as item banking. Hambleton (2010) defines an item bank as "a collection of test items uniquely coded to make the task of retrieving them easier. If the items are not categorized, they are merely a pool or collection of items, not an item bank." In the use of item forms, which are an alternative to item banks, algorithms are used for randomly generating test items from a well-defined set of item characteristics; each item is similar in structure. For instance, items might have a multiple-choice format, a similar stem, the same number of answer choices, and a common pool of distractors. The most important advantage gained from storing item forms is that many more items can be produced by the microcomputer than would be reasonable to store on the microcomputer (Millman & Outlaw, 2008). With the availability of item forms, unique sets of test items can be developed and drawn for each examinee. Such a feature makes it feasible to administer different tests of the same content domain to students at different times. One of the principal advantages of microcomputer-based test development is the ease with which test assembly can be done with the appropriate software. Desirable attributes of an item banking and test assembly system include easily retrievable items with related information, an objective pool, automatic generation of tests, analysis of item performance data, and automatic storage of that data with the associated items (Hambleton, 2004).
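The coding-and-retrieval idea behind item banking, and the drawing of a unique test per examinee, can be sketched as a toy example; the items, codes, and function below are invented for illustration and do not describe any actual banking system:

```python
# Hypothetical item bank sketch: each item carries codes (topic, difficulty)
# so it can be retrieved, and a test is assembled by drawing from the
# matching pool -- so different examinees can receive different draws.
import random

bank = [
    {"id": 1, "topic": "algebra",  "difficulty": 0.3, "stem": "Solve 2x + 1 = 7."},
    {"id": 2, "topic": "algebra",  "difficulty": 0.6, "stem": "Factor x^2 - 5x + 6."},
    {"id": 3, "topic": "algebra",  "difficulty": 0.8, "stem": "Solve x^2 = 2x + 8."},
    {"id": 4, "topic": "geometry", "difficulty": 0.4, "stem": "Area of a 3 by 4 rectangle?"},
]

def assemble_test(bank, topic, n, rng=random):
    """Draw n distinct items coded with the given topic."""
    pool = [item for item in bank if item["topic"] == topic]
    return rng.sample(pool, min(n, len(pool)))

# Two examinees may receive different algebra items from the same bank.
test = assemble_test(bank, "algebra", 2)
```

Uncategorized items would be "merely a pool"; it is the codes on each entry that make the retrieval step possible.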

TEST ADMINISTRATION

The computerized administration of tests has also been considered an attractive alternative to the conventional paper-and-pencil mode of administration. In a computerized test administration, the test-taker is presented with items on a display device such as a cathode-ray tube (CRT) and then indicates his or her answers on a response device such as a standard keyboard. The presentation of test items and the recording of the test-taker's responses are controlled by a computer. Most of the attention to computerized test administration, however, has been directed towards psychodiagnostic assessment instruments such as psychological tests and personality inventories. Even in the case of education-related ability and achievement tests, testing (as part of computer-assisted instruction or computer-managed instruction) has mostly been used as the basis for prescribing remedial instructional procedures, to determine if the student has achieved mastery, and also to provide the student with some feedback on how he or she performed (Christine, 2011).

Four main computer-administered testing procedures used in educational assessment settings include computer-based testing, computer adaptive testing, diagnostic testing and the administration of simulations of complex problem situations. Computer-based testing (CBT) generally refers to "using the computer to administer a conventional (i.e. paper-and-pencil) test" (Wise & Plake, 2009). That is, all examinees receive the same set of test items. Unlike conventional testing, where all test-takers receive a common set of items, computer adaptive testing (CAT), or "tailored testing", is designed so that each test-taker receives a different set of items with psychometric characteristics appropriate to his or her estimated level of ability. Aside from the psychological benefits of giving a test that is commensurate with the test-taker's ability, the primary selling point of adaptive testing is that measurements are more precise when examinees respond to questions that are neither too hard nor too easy for them (Millman, 2004). This test involves making an initial ability estimate and selecting an item from a pool of test items for presentation to the test-taker. According to Green, Bock, Humphreys, Linn, & Reckase (2004), each person's first item on an adaptive test generally has about medium difficulty for the total population. Those who answer correctly get a harder item; those who answer incorrectly get an easier item. After each response, the examinee's ability is re-estimated on the basis of previous performance and a new item is selected at the new estimated ability level. The change in item difficulty from step to step is usually large early in the sequence, but becomes smaller as more is learned about the candidate's ability. The testing process continues until a specified level of reliability or precision is reached, at which point it is terminated.
This testing is based on Item Response Theory "which provides the mathematical basis for selecting the appropriate question to give at each point and for producing scores that are comparable between individuals" (Ward, 2004). Adaptive testing allows the tailoring of the choice of questions to match the examinee's ability, bypassing most questions that are inappropriate in difficulty level and that contribute little to the accurate estimation of the test-taker's ability.
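The branching behaviour described above (medium start, harder after a correct answer, easier after an incorrect one, with shrinking steps) can be illustrated with a deliberately simplified staircase. Real adaptive tests select items and estimate ability with Item Response Theory models, so everything below, including the step sizes and the 0-to-1 difficulty scale, is an invented toy:

```python
# Simplified staircase sketch of the adaptive loop (not real CAT/IRT):
# difficulty moves up after a correct response and down after an
# incorrect one, with step sizes that shrink over the sequence.

def adaptive_test(answer_item, n_items=10, start=0.5):
    """answer_item(difficulty) -> True if the examinee answers correctly."""
    difficulty = start                # first item: about medium difficulty
    step = 0.25                       # large adjustments early on...
    for _ in range(n_items):
        correct = answer_item(difficulty)
        difficulty += step if correct else -step
        difficulty = min(max(difficulty, 0.0), 1.0)   # stay on the 0-1 scale
        step *= 0.7                   # ...that shrink as more is learned
    return difficulty                 # final level approximates ability

# An examinee who can handle anything easier than 0.8 converges near 0.8:
estimate = adaptive_test(lambda d: d < 0.8)
```

The shrinking step mirrors the text: large early changes in item difficulty, smaller ones as the candidate's ability becomes better known.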

Another promising use of computer-administered testing is in the area of diagnostic testing. McArthur and Choppin (2004) describe the approach to educational diagnosis as "the use of tests to provide information about specific problems in the performance of a task by an individual student, information that will point to some appropriate remedial treatment". Diagnostic testing is based on the identification and analysis of errors exhibited by students. Analysis of such misconceptions can provide useful information in evaluating instruction or instructional materials as well as specific prescriptions for planning remediation for a student. Research in this area has mainly been in mathematics education. According to Ronau (2006), "a mistake is an incorrect response, whereas an error is a pattern of mistakes indicating a misunderstanding of a mathematical operation or algorithm". It is believed that a student's systematic errors, commonly known as "bugs", are not random but rather are consistent modifications of the correct procedure. The microcomputer has been used to provide a rapid analysis of errors and a specification of the errors that a particular student is making. A current application of computer-administered testing is in the presentation of branching problem simulations. This method, however, is not used widely in educational settings but rather in medicine and other health-related fields for professional licensing and certification testing (Christine, 2011).

ADVANTAGES OF COMPUTERIZED TESTING

The potential benefits of administering conventional tests by computer range from opportunities to individualize assessment to increases in the efficiency and economy with which information can be manipulated. Several of these advantages of computerized test administration over printed test administration have been described by Ward (2004), Fletcher & Collins (2006), and Wise & Plake (2009). Much of educational testing has traditionally been managed on a mass production basis. Logistical considerations have dictated that all examinees be tested at one time. The computer as test administrator offers an opportunity for more flexible scheduling; examinees can take tests individually at virtually any time. During testing, examinees can also be given immediate feedback on the correctness of the response to each question. Computer-based tests, and particularly computer adaptive tests, have been shown to require less administration time than conventional tests. For example, using achievement tests with third and sixth graders, Olsen et al. reported that the computerized adaptive tests required only one-fourth of the testing time required by the paper-and-pencil administered tests, while the computer-based tests required only half to three-quarters of that time. Hence, when computerized tests are used, students can spend more time engaged in other instructional activities and less time taking tests (Christine, 2011).

Another advantage of computerized testing is the capability to present items in new, and potentially more realistic, ways (Wise & Plake, 2009). A printed test has display limitations. While it can present text and line drawings with ease, it cannot provide timing of item presentation, variable sequencing of visual displays, animation or motion. The graphics and animation capabilities of computers provide the possibility of presenting more realistically simulated actions and dynamic events in testing situations. The assessment of science process or problem-solving skills, in particular, is an area where this type of application can be useful. Variables can be manipulated and the corresponding outcomes portrayed as they are measured. What results is a more accurate portrayal of situations that relies less heavily than conventional assessment procedures on verbal understanding. For example, the change in length of the shadow cast by a stick at various times of the day can be observed (Wise & Plake, 2009).

On a physics test, instead of using a completely worded text or a series of static diagrams to present an item concerning motion, a high-resolution graphic can be used to depict more clearly the motion in question. This should represent a purer measure of the examinee's understanding of the motion concept because it is less confounded with other skills such as reading level. This implies a higher degree of validity for the computerized test item. Computer-animated tests such as this may have special applications with students who have reading comprehension problems or difficulty translating words into images. Printed tests may therefore not provide an accurate measure of the true ability of the student (Christine, 2011). The elimination of answer sheets in computer-administered tests can eliminate some traditional errors such as penciling in the answer to the wrong item number, failing to erase an answer completely, and inadvertently skipping an item in the test booklet but not on the answer sheet. By presenting only one item per screen, the computer automatically matches responses with the item number; examinees can also focus on one item at a time without being distracted, confused, or intimidated by the numerous items per page on paper tests. Computerized tests may therefore provide more accurate measures of performance for students who have lower reading ability, lower attention span, and higher distractibility. Moreover, convenient features for changing answers can replace time-consuming erasing on printed answer sheets (Christine, 2011).

The administration of tests by computer also allows the collection of data about examinee response styles. These include information such as which items are skipped, how many answers are changed, and response latencies. The latter may refer to the time it takes an examinee to answer an item; analysis time for any complex drawing, graph, or table; reading time for each option; response selection time; or response speed. Precise measurement of any of these latencies is virtually impossible with paper-and-pencil tests (Christine, 2011). Other attractive features of computerized testing include more standardized test administration conditions and immediacy of score reporting. Within a few minutes after completing the test, the examinee or the test administrator can receive a score report and prescriptive profile. There are no paper copies of the tests or answer keys to be stolen, copied or otherwise misused. The computer-administered test can include multiple levels of password and security protection to prevent unauthorized access to the testing materials, item banks or answer keys (Christine, 2011).
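As a rough sketch of how such response-style data falls out of a computerized administration (the class and field names below are invented for illustration): because the computer timestamps every interaction, latencies and answer changes are simply by-products of the event log.

```python
# Illustrative sketch: timestamping each interaction with an item yields
# response latency and answer-change counts "for free" -- data that a
# paper answer sheet cannot capture.
import time

class ItemLog:
    def __init__(self):
        self.shown_at = time.monotonic()   # when the item appeared on screen
        self.answers = []                  # every answer given, in order

    def record(self, answer):
        self.answers.append((answer, time.monotonic() - self.shown_at))

    @property
    def latency(self):
        """Seconds from item display to the first response, if any."""
        return self.answers[0][1] if self.answers else None

    @property
    def changes(self):
        """How many times the examinee changed the answer."""
        return max(len(self.answers) - 1, 0)

log = ItemLog()
log.record("B")
log.record("C")   # the examinee changed the answer once
```

A skipped item would simply be a log with no recorded answers.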

DISADVANTAGES OF COMPUTERIZED TESTING

Despite the many advantages associated with computer-administered tests, potential problems exist as well. Use of the response entry device, whether keyboard, touch screen, or mouse, can introduce errors. Pressing a wrong key in response to a question results in an error, and the validity of the individual's results is compromised. The amount of printed text that can be shown on a monitor screen can limit both the length of the question and the possible responses. The need for multiple computer screens to read lengthy comprehension items might introduce a memory component into the construct being measured (Bunderson et al., 2009).

Another problem involves the time lag between an individual's answer to an item and the resulting response from the computer. Long time lags between responses can result in negative user attitudes, anxiety and poor performance. Another source of anxiety for individuals using a computer concerns their often mistaken perception that the system will require an inordinate amount of mathematical or computer skill to operate, or that the system can be easily harmed if an error is made by the user (Samson, 2003). Anxiety, and the possible resulting negative impact on performance, can occur as a result of poor system design, inaccurate user perceptions, or both. A further shortcoming of computer-administered tests, especially in psycho-diagnostic assessment, concerns the use of norms in the interpretation of test scores. Most of the tests that are currently administered by computer were originally developed for a traditional paper-and-pencil approach. Differences in mode of administration may make paper-and-pencil norms inappropriate for computer-administered tests (Samson, 2003). There are also measurement problems associated with the use of computer-administered tests. These are related to item types, item contamination that arises from certain test design strategies, and the non-equivalence of comparison groups in item analyses (Sarvela & Noonan, 2008). With regard to item type, difficulties arise when constructed-response items (such as fill-ins and short answers), as compared to selected-response items (for example multiple-choice, matching and true/false), are developed for the computer. It becomes almost impossible to program all the possible correct answers when considering alternative correct answers, wording, spacing and spelling errors. A tremendous amount of programming is involved for even a partial subset of all possible correct answers. There are psychometric implications as well. Students could supply correct answers that simply are not recognized by the computer; the result could be lower reliability and poorer discrimination indices. For these reasons, computer-administered tests are mainly restricted to multiple-choice items (Christine, 2011).

Another psychometric issue in computer-administered testing is the problem of item contamination if instructional design capabilities are incorporated. It is then possible to allow students to preview test items, receive feedback on the correctness of their answers while items are still being presented, or retake items which were drawn randomly from an item pool. In this situation, items which are dependent upon each other (for example, an item which requires the student to use the result from item 3 to compute item 4) would be contaminated if a student receives feedback after each item. Or, the correct answer for one item could provide subtle clues to the correct answer on another item. There are motivational concerns as well. If a student is consistently answering items incorrectly, the negative feedback might be detrimental to motivation on future items (Christine, 2011). Likewise, a series of correct-answer feedbacks can promote greater motivation on future items. The problem lies in the differential effects of item feedback across high- and low-achieving students. One other contamination problem results from the practice of selecting items randomly from an item bank for a particular test. There is a possibility that a student may see the same items on a second or third try. This problem is exacerbated when item feedback is given. If item feedback is provided, subsequent attempts at tests should contain new items (Christine, 2011). Furthermore, when test items are drawn randomly from an item pool, different students may see different items, or items presented in a different order, for a given test. Consequently, there is non-equivalence of comparison groups. Unless the items administered to one student are equal in difficulty to items that are presented to another student, it becomes extremely difficult to compute item and test statistics (for example, total score, point bi-serial coefficient, estimate of reliability). The problem is that there is no sensible total score. With random item selection, a total test score is defensible for item analysis only if every item is of equal difficulty and equal discrimination (Christine, 2011).

EFFECTS OF ADMINISTERING TESTS VIA COMPUTER

SCORE EQUIVALENCE BETWEEN PAPER-AND-PENCIL AND COMPUTER-ADMINISTERED TESTS

When a conventional paper-and-pencil test is transferred to a computer for administration, the computer-administered version may appear to be an alternate form of the original paper-and-pencil test. However, the scores achieved with computer presentation may not necessarily be comparable to those obtained with the conventional format, and empirical verification is necessary before a claim of equivalent validity is justified. Even though the content of the items is the same, mode of presentation could make a difference in test-related behaviors, such as the propensity to guess, the facility with which earlier items can be reconsidered, and the ease and speed of responding (Greaud & Green, 2006). Duthie (2004, cited in Wilson, Genco, & Yager, 2005) has suggested that there may be cognitive differences in the manner in which a person approaches computer-administered and paper-and-pencil testing tasks. The manipulation necessary for working with a computer, and the stimulus value of the computer itself, may alter the manner of cognitive functioning exhibited by the test-taker. Wood (2004) and Duthie have both noted that test performance may well be influenced by such seemingly minor differences as the formatting of a microcomputer screen display.

One way to look at the issue of empirical validation of an equivalent form of a test is from the point of view of parallel tests in classical test theory. Following from the definition of parallel tests, the subtest and total test scores for a paper-and-pencil test and its computer-administered counterpart should yield equal means, equal variances, and equal correlations with the scores on any other criterion variable (Alex & Ben, 2009). If the scores from the computer-administered test version are intended to be interchangeable with scores obtained by the paper-and-pencil test, then the two test versions can be evaluated against the criteria for parallel tests. Green et al (2004) have suggested some possible ways in which the psychometric characteristics of tests might be altered when items are switched from paper-and-pencil to computer administration. First, there may be an overall mean shift resulting from a change in the difficulty of the test, with the items being easier or harder. Tests of speed performance in particular, where response time is a determining factor, would be expected to show an overall mean shift, because the time to respond depends critically on the nature of the response. Second, there could be an item-by-mode interaction. Some items might change, others might not, or some might become harder, others easier. This would be most likely to occur on tests with diagrams; the clarity of the diagrams might be different on the screen. Items with many lines of text, such as paragraph comprehension items, might also show this effect. Third, the nature of the test-taking task might change. For example, students who are more familiar with computers may perform somewhat better on the computer-administered version of the test than equally able students who are less familiar with computers. As a result, the test may unintentionally measure computer literacy along with the subject matter (Green et al, 2004).
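Under the parallel-test criteria above (equal means, equal variances, equal correlations with a criterion), a first descriptive screening of scores from the two modes might look like the following sketch. The sample scores and function names are invented, and a real equivalence study would apply formal statistical tests rather than inspecting raw differences:

```python
# Hedged sketch: descriptive screening of paper vs. computer score sets
# against the parallel-test criteria. All score data below is invented.
from statistics import mean, pvariance

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def parallel_summary(paper, computer, criterion):
    """Quantities an equivalence study would inspect."""
    return {
        "mean_diff": mean(computer) - mean(paper),            # near 0 if parallel
        "var_ratio": pvariance(computer) / pvariance(paper),  # near 1 if parallel
        "r_paper": pearson(paper, criterion),                 # the two r's
        "r_computer": pearson(computer, criterion),           # should be similar
    }

paper     = [60, 70, 80, 90]
computer  = [62, 71, 79, 88]
criterion = [55, 65, 85, 95]
summary = parallel_summary(paper, computer, criterion)
```

A mean shift of the kind Green et al (2004) describe would show up directly in `mean_diff`; an item-by-mode interaction would tend to disturb the variance ratio and the criterion correlations.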
Several factors influencing the equivalence and psychometric properties of tests from the two formats have been proposed. One variable that has been used to explain medium effects, or differences in examinee scores, is the difference in test-taking flexibility and amount of control (Spray, Ackerman, Reckase & Carlson, 2009). This refers to whether examinees are allowed to skip items and answer them later in the test, return to and review items already answered, and change answers to items. If computerized versions of tests do not provide these features and instead display individual items in a single-pass, no-return mode, then this may result in differences in item characteristics, such as the difficulty and discrimination indices (Spray, Ackerman, Reckase & Carlson, 2009). Individual differences in test anxiety, computer anxiety and attitudes toward computerized testing, and the amount of previous computer experience have also been hypothesized to affect the comparability of scores (Llabre et al, 2007). If these variables differentially affect examinee performance to a significant degree, then they may have implications for equity issues in testing. Other factors that have been suggested to affect the equivalence of scores include the difficulty of the test and the cognitive processes required by the test (Lee, Moreno, & Sympson, 2006), as well as test structure (discrete items versus sets of items based on a common reading passage or problem description), item content (items containing graphics versus items containing only verbal material), test timing (speeded versus untimed tests), and item feedback on test performance (Mazzeo & Harvey, 2008).

TEST ANXIETY STUDIES

Although the primary determinant of examinee responses to items on cognitive tests is knowledge or aptitude, other factors such as test anxiety have been shown to be related to test performance. Dusek (2000) defines test anxiety as "an unpleasant feeling or emotional state that has physiological and behavioral concomitants, and that is experienced in formal testing or other evaluative situations". Test anxiety is a special case of general anxiety and has been conceptualized as a situation-specific anxiety trait. Two meanings of the term anxiety can be distinguished: anxiety as a state and anxiety as a trait. The state-trait model of anxiety set forth by Spielberger (2002) describes state and trait anxiety as follows:

State anxiety (A-State) may be conceptualized as a transitory emotional state or condition of the human organism that varies in intensity and fluctuates over time. This condition is characterized by subjective, consciously perceived feelings of tension and apprehension, and activation of the autonomic nervous system. Level of A-State should be high in circumstances that are perceived by an individual to be threatening, irrespective of the objective danger; A-State should be low in non-stressful situations, or in circumstances in which an existing danger is not perceived as threatening (Spielberger, 2002).

Trait anxiety (A-Trait) refers to relatively stable individual differences in anxiety proneness, that is, to differences in the disposition to perceive a wide range of stimulus situations as dangerous or threatening, and in the tendency to respond to such threats with A-State reactions (Spielberger, 2002).

Although test situations are stressful and evoke state anxiety (A-State) reactions in most students, the magnitude of the A-State response will depend on the student's perception of a particular test as personally threatening. Individuals with high test anxiety generally perceive tests as more threatening than low test-anxious individuals and respond with greater elevations in state anxiety to the evaluative threat that is inherent in most test situations (Spielberger, 2002). Correlational studies have shown that the performance of highly test-anxious persons on complex tasks is deleteriously affected by evaluative stressors. Individuals having high scores on measures of test anxiety tend to perform relatively poorly on ability and achievement tests when compared with low-anxiety scorers (Sarason, 2002). The generally accepted current explanation of the negative effects of test anxiety is that they result from ineffective cognitive strategies and attentional deficits that cause poor task performance in evaluative situations. Children with a low anxiety level appear to become deeply involved in evaluative tasks, but highly anxious children do not. Highly anxious children seem to experience attentional blocks, extreme concern with autonomic and emotional self-cues, and cognitive deficits such as misinterpretation of information. The highly anxious child's attentional and cognitive deficits are likely to interfere with both learning and responding in evaluative situations and result in lowered performance. Wine (2001) suggested an "attentional" interpretation of the debilitating effects of test anxiety. She contends that, during examinations, highly test-anxious individuals divide their attention between task requirements and task-irrelevant cognitive activities, such as worry. These worry cognitions distract students from task requirements and interfere with the effective use of their time, thereby contributing to performance decrements.
According to Wine (2001), the highly test-anxious person responds to evaluative testing conditions with ruminative, self-evaluative worry, and thus cannot direct adequate attention to task-relevant variables. Sex differences in test anxiety have also been consistently obtained, with females having higher levels of anxiety. Given that research has provided evidence of a negative relationship between test anxiety and test performance, an important issue related to the use of computers in testing is whether computer-administered testing will increase test anxiety and depress test performance, particularly in examinees who are relatively unfamiliar with computers. Christine's (2010) results, however, showed no evidence of an interaction between sex and state anxiety.

In a study by Ward et al. (2009), fifty college students were randomly assigned to take an Education class exam either on computer or in the traditional paper-and-pencil manner. Following testing, examinees were administered a questionnaire designed to measure their test anxiety and attitudes towards computerized testing. Results indicated no differences in test performance (p>0.35) but a significant difference in anxiety level (p<0.025), with those tested by computer having a higher anxiety level. The authors hypothesized that this increase in anxiety might be attributable to the novelty of the computer testing situation or the result of a fear of computers. The results also indicated a negative attitude towards computer testing, with 75% of the computer-tested group agreeing that computer testing was more difficult than traditional methods. Given these results, it appears that the added test anxiety associated with computer-administered tests is an important consideration in the evaluation of computerized testing. There is a need to familiarize examinees with the technology used in testing prior to test administration so that anxiety about computers does not increase examinees' level of test anxiety (Ward et al., 2009).

COMPUTER ANXIETY STUDIES

As noted previously, individual differences in computer anxiety have been hypothesized as a factor affecting the performance of an examinee on a computer-based test. This hypothesis rests on the assumption that examinees must feel comfortable with the computer and confident about their ability to work with a computer before being able to use the computer effectively to take a test. As anxiety towards using computers may influence the testing process, such an affective reaction may therefore be an important factor in whether computer-based testing becomes an accepted component of the evaluation of a school system.

Computer anxiety is generally perceived as a situational manifestation of a general anxiety construct, fitting into the category of anxiety state rather than anxiety trait. Raub (2001, cited in Cambre and Cook, 2005) defined computer anxiety as "the complex emotional reactions that are evoked in individuals who interpret computers as personally threatening." Simonson, Maurer, Montag-Torardi, & Whitaker (2007) described it as "the fear or apprehension felt by individuals when they used computers, or when they considered the possibility of computer utilization." Manifestations of computer anxiety may thus be triggered by consideration of the implications of utilizing computer technology, by planning to interact with a computer, or by actually interacting with a computer. Factors such as gender and prior computer experience have been identified as being related to computer anxiety. A review of previous research reveals several studies designed to determine sex-related differences in computer anxiety and attitudes. While Loyd and Gressard (2004) found no difference in computer anxiety levels for males and females in a sample of high school and college students, Chen (2006), on the other hand, found significant sex-related differences, with high school males being less anxious and showing more interest in and confidence with computers than females. Differences in computer attitudes such as interest, liking and confidence were also obtained in investigations by Levin and Gordan (2009), and Popovich et al (2007), with males holding more positive attitudes. The amount of experience with computers is also a significant factor in computer anxiety, because anxiety is produced in part by a lack of familiarity with computer use. In fact, a major finding of the study by Levin and Gordan (2009) suggested that prior computer exposure has a stronger influence on attitudes than does gender.
Students with little or no computer experience were significantly more anxious about computers than those with more experience. This finding is supported by Loyd and Gressard (2004), who found that although students' attitudes towards computers did not depend on sex, they were affected by the amount of computer experience, with more experience related to decreased anxiety and more positive attitudes. Manifestations of computer experience include having access to a computer at home, participating in computer-related courses, playing computer games or knowing how to work with computers. Students who have a computer at home tend to have lower computer anxiety than those who do not. Boys are also more likely to have used computers frequently at home, at school and in informal settings (Chen, 2006); perhaps because of this, they are often found to be less anxious about using computers and more self-confident about their abilities with them.

Since computer anxiety might negatively affect one's performance, this variable was hypothesized to exacerbate score differences between computer-administered and paper-and-pencil testing modes. Contrary to expectations, it was found that computer-anxious examinees did not show significant score differences between computer-based and conventional tests. A possible explanation of this unexpected finding is that if the demands the computerized testing mode makes on the examinee are not too complex and the tasks are kept simple, any computer anxiety felt by the examinee may not lower test performance significantly. Earlier research by Denny (2006), however, showed anxiety to be related to poorer performance on a computerized test. As the results of studies on the relationship between computer anxiety and test performance are mixed and inconclusive, further research in this area is warranted.

EFFECTS OF COMPUTER EXPERIENCE ON COMPUTERIZED TEST PERFORMANCE

Another individual difference variable, the amount of previous computer experience, has also been hypothesized to affect computerized test performance. Inexperience and unfamiliarity with computers may increase anxiety and interfere with test-taking. If this were the case, then computerized testing may discriminate against examinees who have not worked with computers prior to testing, and those with more past computer experience would be at an advantage when taking a computerized test. Thus, individual differences in past access to computers may be an important issue in computer-based testing (Christine, 2010). Previous research has shown that the amount of computer experience can influence performance on computer-based tests, with less experience being associated with lower test scores. Johnson and White used a between-subjects design to compare the computerized test scores of a sample of elderly subjects who had prior training on the computer with the scores of those who did not (Christine, 2010). They found that increased training on the computer prior to testing significantly enhanced examinees' test scores, and attributed the improvement to the amelioration of anxiety by the training. Lee's study investigated performance on a computerized arithmetic reasoning test with a sample of college undergraduates. While past computer experience was a significant factor affecting test performance, the findings showed no significant difference between "low experience" and "high experience" persons, indicating that minimal work with computers may be sufficient to prepare a person for computerized testing. Furthermore, those whose computer experience involved computerized games only performed significantly worse than the other two groups, indicating that computerized games did not provide the same training with computers as work tasks.
Contrary to the above findings, the results of three other studies showed that lack of experience with computers did not have an adverse effect on examinee performance on a computer-based test (Christine, 2010). The subjects in these three separate studies, by Eaves and Smith (2006), Plumly and Ray (2009), and Wise et al. (2009), were all college students. There are some plausible reasons why contradictory findings were obtained.

First, age may play a part in the ability of examinees to respond equally to the two media used in the studies, namely computerized and traditional paper-and-pencil tests. It seems reasonable to assume that college students would be more likely than elderly examinees to adapt to the novelty of using computers in testing. Second, the response demands placed on the subjects in the latter three studies might have been simple enough that an examinee with little or no prior computer experience would not be disadvantaged by the computerized test-taking procedures (Wise et al., 2009).

EXAMINEES' REACTIONS TO COMPUTERIZED TESTING

To date, there has been little research regarding students' reactions to computerized testing. The research literature on attitudes toward computerized assessment has primarily focused on the reactions of examinees in the clinical and psycho-diagnostic realm. However, a few researchers have investigated the reactions of examinees toward aptitude and achievement tests, and in these studies the reactions of the test-takers were generally favourable (Christine, 2010). In the study by Gwinn and Beal (2008), 70% of the university students who took an anatomy and physiology test had a decided preference for computer testing over paper-and-pencil tests, about 7% disliked it, and the remainder found it made little difference. This sample of students had very little prior experience with the use of computers. A greater preference for online computer testing was also found by Moe and Johnson (2008), who investigated the reactions of Grade 8 to 12 students on a standardized aptitude test battery. Overall reactions to the computerized test were overwhelmingly positive: 91% of the subjects indicated they would choose a computerized test. Nearly half of the students reported that they experienced no problems during the computerized test. Of those who did report trouble, the major difficulty was with the computer screen: 63% said their eyes got tired, 39% indicated that the screen was too bright, and 27.6% were disturbed by the glare on the screen. Most students (88.5%), however, said they had no difficulty using the keys. When asked for the "worst things" about the test, the two most serious complaints were glare and the lack of opportunity to review answers. The most common response for the "best things" about the computerized test was the ease of answering. Other popular responses were that the test seemed faster, and that the computerized test was "fun".
Fletcher and Collins (2006) conducted a survey of university students taking a Biology test to determine their relative preferences for the two forms of the test. The most often cited criticisms of computerized tests were the inability to skip questions and answer them later, and the inability to review answers at the end of the test and make changes. Furthermore, with such constraints, examinees could not get hints for responses from other questions. Despite these criticisms, most of the respondents preferred the computer-administered test, citing the immediacy of scoring, increased speed of test-taking and immediate feedback on incorrect answers as the major advantages (Fletcher and Collins, 2006). While the above studies reported generally positive attitudes toward computerized tests, the college examinees in the study by Ward et al. (2009) exhibited a negative attitude toward computer-based testing: seventy-five percent of the computer-tested group agreed that computer testing was more difficult than traditional methods.

2.2 THEORETICAL FRAMEWORK

MOTIVATION THEORY

Motivation is a theoretical concept used to explain human behaviour: it provides the motive for human beings to act and fulfil their needs. Motivation can also be defined as the route that leads to behaviour, or the construct that triggers a person's desire to repeat a behaviour, and vice versa (Maslow, 1943). Motivation is the process that initiates, guides and maintains goal-oriented behaviours. Basically, it leads individuals to take action to achieve a goal or to fulfil a need or expectation.

Motivation can be categorized as intrinsic motivation, extrinsic motivation and amotivation (Mitchell and Gagné, 2012).

INTRINSIC AND EXTRINSIC MOTIVATION THEORY

Intrinsic motivation, according to Ryan and Deci (2000), is an activity performed solely for one's own satisfaction, with no external expectations. The main factors that elicit intrinsic motivation are challenge, interest, power, and fantasy. Staying intrinsically motivated in school requires considerable willpower and a positive attitude. Furthermore, according to Pérez-López and Contero (2013), intrinsic motivation and academic achievement have a strong and positive relationship. Intrinsic motivation directs an individual to participate in academic activities purely for the fun, challenge and novelty they offer, rather than in expectation of external rewards or gifts, or under any compulsion or pressure. Attitude toward learning is considered important and influences academic achievement. Intrinsic motivation fosters positivity and helps the knowledge gained to last a long time.

Extrinsic motivation, on the other hand, refers to external factors such as a reward, coercion, or punishment (Tohidi and Jabbari, 2012). If a person is receiving a reward or is under some pressure or compulsion, they are extrinsically motivated. Tohidi and Jabbari (2012) claim that motivation can be cultivated extrinsically at first, then transformed into intrinsic motivation as the learning process progresses. This kind of motivation produces a high level of willpower and engagement, yet it cannot be sustained as long as intrinsic motivation. If students are continuously motivated through external rewards or compliments, it can become habitual for them to perform only to gain the rewards and not for their own sake or to master skills or knowledge. Furthermore, when an individual is neither intrinsically nor extrinsically motivated, amotivation occurs. Both intrinsic and extrinsic motivation are needed in the learning process. Learning is a complicated process and motivation is its bedrock. Hence, students have to be highly motivated to face the challenges, understand the process and be able to apply what they learn in real circumstances. Intrinsic motivation leads to self-motivation in pursuing learning, while extrinsic motivation gives a purpose to pursue it.

HIERARCHY OF NEEDS THEORY

Maslow's Hierarchy of Needs is a well-known motivation theory that is often used in educational settings. It is a psychological theory that explains human motivation as the fulfilment of various levels of needs; according to the theory, humans are driven to meet their needs in a hierarchical order. Abraham Maslow first proposed the hierarchy in his 1943 paper "A Theory of Human Motivation". Maslow also studied the healthiest and highest-achieving 1% of the college student population. As a result, he developed the hierarchy of needs as an attempt to describe what people need in order to achieve fulfilment in their lives, or what Maslow describes as 'self-actualization'.

Abraham Maslow proposed that before students can reach their full potential, they must first fulfil a set of needs. It is worth noting that Maslow's theory was founded on ideology rather than empirical evidence. Nevertheless, Maslow's Hierarchy of Needs should serve as a reminder to teachers that if students' basic needs are not met, they are less likely to achieve their full potential. The hierarchy starts with the most basic needs and progresses to more advanced ones; according to the theory, the ultimate aim is to reach the fifth level: self-actualization.

THE SOCIAL LEARNING THEORY

Social learning theory, proposed by Albert Bandura, emphasizes the importance of observing, modelling, and imitating the behaviors, attitudes, and emotional reactions of others. Social learning theory considers how both environmental and cognitive factors interact to influence human learning and behavior.

In social learning theory, Albert Bandura (1977) agrees with the behaviorist learning theories of classical conditioning and operant conditioning. However, he adds two important ideas:

1. Mediating processes occur between stimuli & responses.

2. Behavior is learned from the environment through the process of observational learning.

Children observe the people around them behaving in a variety of ways, as demonstrated in the well-known Bobo doll experiment (Bandura, 1961). Individuals who are observed are called models. Children are surrounded by many influential models, such as their parents, characters on children's television, peers in their peer group, and teachers at school. These models provide examples of behaviour to observe and imitate, such as masculine and feminine behaviour, or pro- and anti-social behaviour. Children pay attention to some of these models and encode their behaviour; at a later time they may imitate (i.e. copy) the behaviour they have observed. They may do this regardless of whether the behaviour is 'gender appropriate' or not, but there are a number of processes that make it more likely that a child will reproduce the behaviour its society deems appropriate for its gender.

First, the child is more likely to attend to and imitate those people it perceives as similar to itself. Consequently, it is more likely to imitate behavior modeled by people of the same gender.

Second, the people around the child will respond to the behaviour it imitates with either reinforcement or punishment. If a child imitates a model's behaviour and the consequences are rewarding, the child is likely to continue performing the behaviour. If a parent sees a little girl consoling her teddy bear and says "what a kind girl you are," this is rewarding for the child and makes it more likely that she will repeat the behaviour; her behaviour has been reinforced (i.e., strengthened). Reinforcement can be external or internal, and positive or negative. If a child seeks approval from parents or peers, this approval is external reinforcement, but feeling good about being accepted is internal reinforcement. Since the child craves acceptance, it will act in a way that it thinks will earn it. If the reinforcement offered externally does not meet an individual's needs, it will have little effect. Reinforcement may be positive or negative, but the key point is that it usually leads to a change in behaviour.

Third, the child will also take into account what happens to other people when deciding whether or not to copy someone's actions. A person learns by observing the consequences of another person's (i.e. a model's) behaviour; for example, a younger sister who observes an older sister being rewarded for a particular behaviour is more likely to repeat that behaviour herself. This is known as vicarious reinforcement.

This relates to an attachment to specific models that possess qualities seen as rewarding. Children will have a number of models with whom they identify. These may be people in their immediate world, such as parents or older siblings, or could be fantasy characters or people in the media. The motivation to identify with a particular model is that they have a quality which the individual would like to possess.

2.3 SUMMARY

In this review the researcher has sampled the opinions and views of several authors and scholars on the usage of computer-based tests and their effect on the performance of secondary school students in Nigeria.

The works of scholars who conducted theoretical studies have also been reviewed. The chapter has set out the literature relevant to the study.