Description

The increasing availability of displays at lower cost has led to
their proliferation in our everyday lives. At the same time, mobile
devices are ready to hand and have been proposed as interaction
devices for external screens. However, prior work has taken only
their input mechanism into account, without considering three
additional factors in environments hosting several displays: First,
a connection needs to be established to the desired target display
(modality). Second, screens in the environment may be re-arranged
(flexibility). Third, displays may be out of the user’s reach
(distance). In our
research we aim to overcome the problems resulting from these
characteristics. The overall goal is a new interaction model that
allows for (1) a non-modal connection mechanism for impromptu use
on various displays in the environment, (2) interaction on and
across displays in highly flexible environments, and (3)
interaction at variable distances.

In this work we propose a new interaction model, called
through-the-display interaction, which
enables users to interact with remote content on their personal
device in an absolute and direct fashion.

To gain a better
understanding of the effects of the additional characteristics, we
implemented two prototypes, each of which investigates a different
distance to the target display: LucidDisplay allows users to place
their mobile device directly on top of a larger external screen.
MobileVue, on the other hand, enables users to interact with an
external screen at a distance. For each prototype we analyzed its
effects on the remaining two criteria, namely the modality of the
connection mechanism and the flexibility of the environment.

With the findings gained in this initial phase we
designed Shoot & Copy, a system that allows the detection of
screens purely based on their visual content. Users aim their
personal device’s camera at the target display, which then appears
in the live video shown in the viewfinder. To select an item, users
take a picture, which is then analyzed to determine the targeted
region.
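
To make the matching step concrete, here is a minimal sketch of how
such a photo could be registered against a screenshot of the
display’s current content using feature matching and a homography.
It assumes OpenCV and NumPy and illustrates the idea only, not the
original implementation; the file paths are hypothetical.

    import cv2
    import numpy as np

    def locate_target(photo_path, screenshot_path):
        """Map the centre of a camera photo onto display coordinates."""
        photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
        screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)

        # Detect and describe local features in both images.
        orb = cv2.ORB_create(nfeatures=2000)
        kp_p, des_p = orb.detectAndCompute(photo, None)
        kp_s, des_s = orb.detectAndCompute(screen, None)

        # Match descriptors and keep the strongest correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_p, des_s),
                         key=lambda m: m.distance)[:100]
        src = np.float32([kp_p[m.queryIdx].pt for m in matches])
        dst = np.float32([kp_s[m.trainIdx].pt for m in matches])

        # Estimate the homography from photo to screen coordinates.
        H, _ = cv2.findHomography(src.reshape(-1, 1, 2),
                                  dst.reshape(-1, 1, 2), cv2.RANSAC, 5.0)

        # The photo centre is the user's point of aim; project it.
        h, w = photo.shape
        centre = np.float32([[[w / 2, h / 2]]])
        x, y = cv2.perspectiveTransform(centre, H)[0, 0]
        return x, y  # display coordinates of the targeted item

Note that no markers or instrumented displays are required: the
display’s own visual content serves as the tracking target.
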
We further extended this approach to multiple displays by using a
centralized component serving as a gateway to the display
environment.

In Tap & Drop we refined this prototype to support real-time
feedback. Instead of taking pictures, users can now aim their
mobile device at the display and start interacting immediately. In
doing so, we broke the rigid sequential interaction of content
selection and content manipulation. Both prototypes
allow for (1) connections in a non-modal way (i.e., aim at the
display and start interacting with it) from the user’s point of
view and (2) fully flexible environments (i.e., the mobile device
tracks itself with respect to displays in the environment).
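
To illustrate the difference, here is a sketch of such a real-time
loop, again assuming OpenCV and NumPy (the camera index and the
source of the display content are hypothetical): every camera frame
is registered against the display content, so the point of aim is
known continuously and touch events can be mapped the moment they
occur.

    import cv2
    import numpy as np

    def track_display(screen_path, camera_index=0):
        """Continuously map the point of aim to display coordinates."""
        screen = cv2.imread(screen_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=1000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        kp_s, des_s = orb.detectAndCompute(screen, None)

        cap = cv2.VideoCapture(camera_index)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            kp_f, des_f = orb.detectAndCompute(gray, None)
            if des_f is None:
                continue  # nothing recognisable in view
            matches = matcher.match(des_f, des_s)
            if len(matches) < 10:
                continue  # display not (yet) in view
            src = np.float32([kp_f[m.queryIdx].pt for m in matches])
            dst = np.float32([kp_s[m.trainIdx].pt for m in matches])
            H, _ = cv2.findHomography(src.reshape(-1, 1, 2),
                                      dst.reshape(-1, 1, 2),
                                      cv2.RANSAC, 5.0)
            if H is None:
                continue
            # Project the frame centre (the point of aim) every frame,
            # so selection and manipulation need not be sequential.
            h, w = gray.shape
            aim = cv2.perspectiveTransform(
                np.float32([[[w / 2, h / 2]]]), H)[0, 0]
            print("aiming at display position", aim)

Because the mapping is re-estimated for every frame, displays can be
re-arranged freely without interrupting the interaction.
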
However, the wide-angle lenses, and thus wide fields of view, of
current mobile device cameras still do not allow for interaction at
variable distances.
In Touch Projector, we overcome this limitation by introducing
zooming in combination with temporarily freezing the video image.
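
A minimal sketch of how zooming and freezing could compose with such
a viewfinder-to-display mapping; the Viewfinder class and its fields
are illustrative assumptions, not the original Touch Projector code.

    import cv2
    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Viewfinder:
        H: np.ndarray         # homography: viewfinder -> display
        width: int            # viewfinder size in pixels
        height: int
        zoom: float = 1.0     # digital zoom factor (>= 1.0)
        frozen: bool = False  # freeze: stop updating the mapping

        def update(self, H_new):
            # While frozen, the last mapping (and last video frame)
            # is kept, so users interact with a still image.
            if not self.frozen:
                self.H = H_new

        def to_display(self, x, y):
            # Undo the digital zoom around the viewfinder centre: at
            # zoom z, a touch offset shrinks by 1/z, so pointing
            # precision on the remote display grows with the zoom.
            cx, cy = self.width / 2, self.height / 2
            ux = cx + (x - cx) / self.zoom
            uy = cy + (y - cy) / self.zoom
            p = np.float32([[[ux, uy]]])
            return tuple(cv2.perspectiveTransform(p, self.H)[0, 0])

Zooming magnifies the target in the video, so a given touch movement
corresponds to a smaller movement on the remote display, while
freezing lets users keep interacting with a still image even when
the camera moves away from the target.
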
Based on our extensions to the taxonomy of mobile device interaction
on external displays, we created a refined model of interacting
through the display for mobile use. It enables users to interact
impromptu, without explicitly establishing a connection to the
target display (non-modal). As the mobile device tracks itself with
respect to displays in the environment, the model further allows
for full flexibility of the environment (i.e., displays can be
re-arranged without affecting the interaction). And above all, users
can interact with external displays at variable distances,
regardless of the displays’ actual size, without any loss of
accuracy.
