The OPI is a face-to-face or telephone interview that consists of three phases: a warm-up phase, a series of level checks and probes, and a wind-down phase. It is one of the most widely accepted tests of speaking ability and is used by government agencies (the Defense Language Institute, the Peace Corps), testing institutions (Educational Testing Service), and the Federal Interagency Language Roundtable.
The OPI system of testing has many advantages. It is quick and easy to administer, and it appears to forecast accurately the degree to which a non-native speaker will be able to communicate in English. Unlike written tests, it directly assesses English speaking ability, which, as with all languages, is distinct from the ability to read and write. Because the test can be performed quickly, a single tester can interview multiple candidates in one session. This is particularly important in the context in which the test is often given.
Graduate students from foreign countries, for example, are often given the test before they can take on grading or teaching duties in American universities. Resources for such testing are limited, and so the ability of one tester to perform multiple tests in one day is vital.
There are, however, detractors. Messick, for instance, suggests that OPI tests do not actually represent real-life conversations.
Part of the problem with OPI tests is that they do not reflect the sheer range of speaking that occurs in real life: monologic speaking (one person), dialogic speaking (two people), and multi-party speaking, such as a meeting among several colleagues. The OPI tends to test only one of these: the dialogue. As Brown (2003) and Bonk (2003) have suggested, some speakers do better in dialogue and some in discussion activities, so a test that assesses one over the other is bound to be limited in scope. Another basic problem with this type of test (though it may be shared by all speaking tests) is the variability of the interviewer and his or her effect on the results. Each interviewer has a unique speech style, pattern, and intonation that may help (or hinder) the interviewee (Brown, 2003). The test result may therefore be seen as a co-constructed score reached by both the tester and the candidate, rather than an accurate measure of the non-native speaker's communicative ability.
This tendency may be countered by careful training of the tester, together with an equally careful process of self-evaluation and objective supervision. Within a single center, periodic calibration interviews can be undertaken in which the same candidate is tested by all the testers (with suitable remuneration, of course) and the results then compared. If a tester's results deviate too far from the mean, additional training is perhaps needed.
As McNamara (1997, 2002) suggests, the more educated, skillful and eloquent the