Description
Recent technological advances allow for building real-time,
interactive multi-modal dialog systems for a wide variety of
applications, ranging from information systems to communication
systems interacting with back-end services. To retrieve or update
information from various information systems, the user has to
interact with speech dialog systems alongside other
man-machine interfaces, sometimes simultaneously. This inevitably
leads to situations where a user has to interact with multiple
speech dialog systems within a single thread of activity. Exposing
users to such an environment of diverse speech interfaces increases
cognitive load and thus degrades usability. An integrated,
speech-enabled access layer to all information available from the
different applications would allow the user to access that
information more efficiently and easily. This dissertation proposes
a novel approach to building such an integrated speech user
interface by combining the existing speech user interfaces of the
different applications automatically or semi-automatically. By
analyzing the dialog specifications of the different applications,
functional and semantic overlaps between the applications are
recognized. These overlaps are resolved at the level of the dialog
specification, so that the integrated speech user interface
provides transparent access to the different applications, solves
the problem of task sharing, and enables information sharing among
the applications.
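
To make the idea of overlap detection between dialog specifications more concrete, the following is a minimal sketch, not the dissertation's actual algorithm: it assumes each application's dialog specification can be reduced to a set of tasks with named slots, and flags task pairs whose slot sets overlap enough to be candidates for merging in an integrated interface. The `Task` data model, the Jaccard-based similarity heuristic, and the threshold value are all illustrative assumptions.

```python
# Sketch: detect candidate overlaps between two dialog specifications.
# Data model and similarity measure are illustrative assumptions only.

from dataclasses import dataclass


@dataclass(frozen=True)
class Task:
    name: str               # e.g. "set_destination" (hypothetical task name)
    slots: frozenset        # slot names the task fills, e.g. {"city", "date"}


def slot_overlap(a: Task, b: Task) -> float:
    """Jaccard similarity of the two tasks' slot sets (0.0 to 1.0)."""
    if not a.slots and not b.slots:
        return 0.0
    return len(a.slots & b.slots) / len(a.slots | b.slots)


def find_overlaps(spec_a: list, spec_b: list, threshold: float = 0.5):
    """Return (task_a, task_b, score) triples whose slot sets overlap
    strongly enough to be considered for merging."""
    return [
        (a, b, slot_overlap(a, b))
        for a in spec_a
        for b in spec_b
        if slot_overlap(a, b) >= threshold
    ]


if __name__ == "__main__":
    # Two hypothetical applications sharing a "city" slot.
    navigation = [Task("set_destination", frozenset({"city", "street"}))]
    weather = [Task("get_forecast", frozenset({"city", "date"}))]
    for a, b, score in find_overlaps(navigation, weather, threshold=0.2):
        shared = a.slots & b.slots
        print(f"{a.name} <-> {b.name}: shared slots {sorted(shared)} (score {score:.2f})")
```

In this sketch, a detected overlap (here the shared "city" slot) marks a point where the integrated interface could route a single user utterance to either application or reuse already-collected information across them; how overlaps are actually modeled and resolved at the dialog-specification level is the subject of the dissertation itself.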
