Tasks, Frames, and Interaction in
Language Proficiency Interviews and their Alternatives

S. J. Ross
University of Maryland

Abstract

The oral proficiency interview (OPI) has been in use for more than fifty years and remains the assessment tool of choice in US Government language testing. The canonical version of the OPI is a set of unscripted tasks posed by a trained interviewer and rated independently by a third party. The OPI has evolved over the decades to offer a menu of tasks from which interviewers adaptively select for any given candidate. OPI tasks focus on various facets of speaking proficiency, such as conversation about familiar topics, narrations, instruction tasks, reporting of events, descriptions of persons, places, or objects, and various role plays. Performance on these tasks provides evidence of a candidate’s interactional competence, fluency, accuracy, lexical range, coherence, and pragmalinguistic and sociopragmatic abilities. As an assessment technique, the OPI is applicable to any of the seventy languages taught and tested by the agencies that use it. There have been critical appraisals of the OPI over the years, centered on the authenticity of its tasks and on early claims that the interview assessed authentic conversational ability. The presentation will review these claims and focus on the language elicited by different OPI tasks, how the tasks are framed, and why task framing is crucial if candidates are to perform optimally. How interaction relates to the assessment of proficiency, both in non-rated portions of the interview and on the main assessment tasks, will also be addressed. The potential expansion of the focal content of interviews, currently summarized in the functional trisection and practically recognizable to interviewers and raters as fluency, accuracy, and coherence, will be considered. Specifically, the call for the inclusion of interactional competence as an independently ratable facet of proficiency will be examined, with prospects for its inclusion and possible constraints reviewed. The presentation will also examine the assertion that proficiency in interviews is essentially co-constructed, drawing on a case study of an individual who was assessed seven times by seven different interviewers and raters. Evidence of co-construction will be the focus of this portion of the presentation, with emphasis on which aspects of interaction an interlocutor can influence and which the candidate consistently generates independently. Finally, the recent emergence of speech recognition technology will be considered, weighing the potential benefits of non-interactive speech testing and automated scoring against the likely trade-offs when human interaction and interpretation are removed from the assessment process.