Papers by Curry Guinn
An architecture for voice dialogue machines is described, with emphasis on its problem-solving and high-level decision-making mechanisms. The architecture provides facilities for generating voice interactions aimed at cooperative human-machine problem solving. It assumes that the dialogue will consist of a series of locally self-consistent subdialogues, each aimed at a subgoal related to the overall task. The discourse may consist of a set of such subdialogues, with jumps from one subdialogue to another in a search for a successful conclusion. The architecture maintains a user model to ensure that interactions properly account for the level of competence of the user, and it includes the ability for the machine to take the initiative or yield the initiative to the user. It uses expectation from the dialogue processor to aid in the correction of errors from the speech recognizer.
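The expectation-driven error correction mentioned above can be illustrated with a small sketch: the dialogue processor predicts likely next utterances, and those predictions are used to re-rank the recognizer's n-best list. The function names, scores, and bonus weight below are illustrative assumptions, not the paper's actual algorithm.

```python
# Sketch: re-rank speech recognizer hypotheses using dialogue expectations.
# All names and weights are assumptions for illustration only.

def rerank(nbest, expectations, bonus=0.5):
    """nbest: list of (utterance, acoustic_score) pairs, higher is better.
    expectations: set of utterances the dialogue processor predicts next.
    Returns the hypotheses re-scored so expected utterances are preferred."""
    rescored = [
        (utt, score + (bonus if utt in expectations else 0.0))
        for utt, score in nbest
    ]
    return sorted(rescored, key=lambda pair: pair[1], reverse=True)

# Example: the recognizer slightly prefers a mis-hearing, but the current
# subdialogue (diagnosing a circuit) expects the second hypothesis.
nbest = [("the circus is broken", 0.62), ("the circuit is broken", 0.58)]
expected = {"the circuit is broken", "which wire is loose"}
best_utterance, _ = rerank(nbest, expected)[0]
```

In this toy example the contextual bonus outweighs the small acoustic gap, so the expectation-consistent hypothesis wins even though the recognizer ranked it second.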
Tutorial dialogue offers several interesting challenges to mixed-initiative dialogue systems. In this paper, we outline some distinctions between tutorial dialogues and the more familiar task-oriented dialogues, and how these differences might impact our ideas of focus and initiative. In order to ground discussion, we describe our current dialogue system, the Duke Programming Tutor. Through this system, we present a temperature-based model and algorithm which provide a basis for making decisions about dialogue focus and initiative.
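A temperature-based focus mechanism of the kind named above might work as follows: dialogue topics "heat up" when touched and cool over time, and the hottest topic is the current focus. This sketch is an illustrative reconstruction under those assumptions, not the Duke Programming Tutor's actual model or parameter values.

```python
# Sketch of a temperature-style focus model: topics gain heat when
# mentioned and decay each turn. Constants are assumed, not from the paper.

COOLING = 0.8   # assumed per-turn decay factor
HEAT = 1.0      # assumed heat added when a topic is mentioned

def update(temps, mentioned):
    """Decay every topic's temperature, then heat the mentioned topics."""
    cooled = {topic: temp * COOLING for topic, temp in temps.items()}
    for topic in mentioned:
        cooled[topic] = cooled.get(topic, 0.0) + HEAT
    return cooled

def focus(temps):
    """The current dialogue focus is the hottest topic, if any."""
    return max(temps, key=temps.get) if temps else None

temps = {}
temps = update(temps, ["loops"])   # student asks about loops
temps = update(temps, ["arrays"])  # tutor shifts to arrays; loops cools
```

A decision rule for initiative could then compare temperatures: if the student's topic is markedly hotter than the tutor's planned topic, the system yields initiative and follows the student.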
Technological advances in areas such as transportation, communications, and science are rapidly changing our world--the rate of change will only increase in the 21st century. Innovations in training will be needed to meet these new demands. Not only must soldiers and workers become proficient in using these new technologies, but shrinking manpower requires more cross-training, self-paced training, and distance learning. Two key technologies that can help reduce the burden on instructors and increase the efficiency and independence of trainees are virtual reality simulators and natural language processing. This paper focuses on the design of a virtual reality trainer that uses a spoken natural language interface with the trainee.
We describe the Virtual Standardized Patient (VSP), an application in which a virtual human interacts with medical practitioners in much the same way as the actors hired to teach and evaluate patient assessment and interviewing skills. The VSP integrates technologies from two successful research projects conducted at Research Triangle Institute (RTI). One provides natural language processing, emotion and behavior modeling, and composite facial expression and lip-shape modeling for a natural patient-practitioner dialogue. The other, the Trauma Patient Simulator (TPS), provides case-based patient history and trauma casualty data, real-time physiological modeling, interactive patient assessment, 3-D scenario simulation, and instructional record-keeping capabilities. The VSP offers training benefits that include enhanced adaptability, availability, and assessment.
Research on survey non-response suggests that advanced communication and listening skills are among the best strategies telephone interviewers can employ for obtaining survey participation, allowing them to identify and address respondents' concerns immediately with appropriate, tailored language. Yet, training on interaction skills is typically insufficient, relying on role-playing or passive learning through lecture and videos. What is required is repetitive, structured practice in a realistic work environment. This research examines acceptance by trainees of an application based on responsive virtual human technology (RVHT) as a tool for teaching refusal avoidance skills to telephone interviewers. The application tested here allows interviewers to practice confronting common objections offered by reluctant sample members. Trainee acceptance of the training tool as a realistic simulation of "real life" interviewing situations is the first phase in evaluating the overall effectiveness of the RVHT approach. Data were gathered from two sources -- structured debrief questionnaires administered to users of the application, and observations of users by researchers and instructors. The application was tested with a group of approximately fifty telephone interviewers of varying skill and experience levels. The research presents findings from these acceptance evaluations and discusses users' experiences with and perceived effectiveness of the virtual training tool.
In this paper, we describe an application of responsive virtual humans to train law enforcement personnel in dealing with subjects who present symptoms of serious mental illness. JUST-TALK provides a computerized virtual person to interact with the student in a role-playing environment. Students were able to converse with the virtual person using spoken natural language and to see and hear the virtual human respond through a combination of facial gesture, body movements, and spoken language. The JUST-TALK project, funded by the National Institute of Justice Office of Science and Technology and developed by RTI International, involved integrating virtual reality training software within a 3-day class at the North Carolina Justice Academy. The course was structured to include classroom-based lecture, videos, discussion, live human role-playing, and virtual human role-playing.