SDK for Android Developer's Guide

Adding and Interacting with LiveSight Content

This section covers how to add content to be displayed in LiveSight and how to handle user interactions with that content. The classes covered in this section are ARObject and ARIconObject. Additionally, several ARController methods are used:

  • addARObject(ARObject)
  • removeARObject(ARObject)
  • addOnTapListener(OnTapListener)
  • press(PointF)
  • focus(ARObject)
  • defocus()
  • getObjects(PointF)
  • getObjects(ViewRect)

The LiveSight Object Model

A LiveSight object has several visual representations. The representation to be displayed is determined based on the current state of the LiveSight view, which is defined as a function of the device pitch by default, as well as the state of the object itself. The object state that influences display is the Focus state. The Focus state is discussed in detail later.

The LiveSight view states are the "Down" state and the "Up" state. By default, when the view is pitched downwards (for example, if the device screen is face-up), then LiveSight is in the Down state and set to the default Map view. As the device is pitched further upwards (angled negatively around the x-axis), LiveSight transitions to the Up state, which has the Camera view as its default view.

  • Down Object Representation

    While in the Down state, a LiveSight object is represented by an icon associated with the Down state. When transitioning to the Up state, a fly-in transition animation occurs, and the larger front icon is displayed.

    Figure 1. An Icon in the Down State
  • Up Object Representation

    While the Down state representation consists of just a single icon, the Up state representation is more complex. While in the Up state, there are two planes where an object can be displayed, and the object representation is different in each. The planes are the "Front" plane and the "Back" plane. By default, objects that are geographically closer to the LiveSight center are displayed in the Front plane, and objects that are further away are displayed in the Back plane. Objects can be moved from one plane to the other using the vertical pan gesture.

    Figure 2. Icons in the Up State
    Figure 3. Visualization of the Front and Back Planes

    While in the Front plane, a LiveSight object is represented by both its icon and an information view (Info View) that extends out from the side of the icon. This information view is intended to act as a mechanism for displaying more detailed information about the object. In the Back plane an object is initially represented by a single icon. It is possible to have an object in the Back plane display its Info View by putting it in focus. The icon for the Front plane and the Back plane can be different, and by default, the transition from one plane to the other is animated.

ARObject Abstract Class

ARObject is the base implementation for all objects that can be added to LiveSight for display. It contains methods common to all LiveSight objects, enabling the following operations (a sketch follows the list):

  • Set and retrieve the object's current position
  • Set and retrieve the object's down, front, or back icon
  • Set and retrieve the object's down, front, and back icon sizes
  • Set and retrieve the size, image, and extension state of the information view
  • Set and clear an image texture on the object's down, front, and back icons and on its information view
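
As a rough illustration of these operations, the following sketch repositions an object and assigns different plane icons. It assumes arIconObject is an ARIconObject created as shown in the next section and that frontImage and backImage are existing icon images; the setter names themselves (setGeoPosition, setFrontIcon, setBackIcon) are illustrative assumptions, so consult the ARObject API reference for the exact method names and signatures.

// Hypothetical setter names; see the ARObject API reference for exact signatures.
// Move the object to a new position.
arIconObject.setGeoPosition(new GeoCoordinate(49.277, -123.112, 2.0));

// Use different icons for the Front and Back planes.
arIconObject.setFrontIcon(frontImage);
arIconObject.setBackIcon(backImage);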

ARIconObject Class

Currently the only concrete ARObject is the ARIconObject. ARIconObject represents the object model described in The LiveSight Object Model. Because it is the only concrete ARObject, all of its functionality resides in ARObject.

Adding and Interacting with ARObjects

Adding an ARObject to LiveSight is accomplished with the ARController.addARObject(ARObject) method:

// 'view' and 'image' are assumed to have been created earlier
arIconObject = new ARIconObject(
  new GeoCoordinate(49.276744, -123.112049, 2.0), view, image);
arController.addARObject(arIconObject);

Similarly, ARObjects can be removed with the ARController.removeARObject(ARObject) method:

boolean success = arController.removeARObject(arIconObject);

To facilitate interactivity with ARObjects, an ARController.OnTapListener can be registered with the ARController.addOnTapListener(OnTapListener) method. When a tap event occurs, the ARObject at the tap point can be retrieved through the press(PointF) method, which also triggers a press animation on that ARObject. Additionally, an ARObject can be put into focus with the ARController.focus(ARObject) method. While in focus, an ARObject in the Back plane displays its Info View. Only one ARObject may have focus at a time.

arController.addOnTapListener(new ARController.OnTapListener() {

  @Override
  public boolean onTap(PointF point) {

    // retrieve ARObject at point (if one exists)
    // and trigger press animation
    ARObject arObject = arController.press(point);

    if (arObject != null) {
      // focus object
      arController.focus(arObject);
    }

    return false;
  }
});

To defocus an ARObject, either call ARController.focus(ARObject) with a different ARObject, or call the ARController.defocus() method to remove focus from the currently focused ARObject.
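
For example, focus can be moved between objects and then cleared as follows (a minimal sketch; firstObject and secondObject are assumed to be ARObjects already added to the ARController):

// focus the first object; if it is in the Back plane its Info View is shown
arController.focus(firstObject);

// focusing another object defocuses the first, since only one object
// may have focus at a time
arController.focus(secondObject);

// explicitly clear focus from the currently focused object
arController.defocus();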

In addition to event-driven ARObject retrieval, the ARController.getObjects(PointF) and ARController.getObjects(ViewRect) methods can be used to programmatically get the ARObjects at a screen point or within a screen area:

// all ARObjects rendered at the screen point (50, 50)
PointF point = new PointF(50, 50);
List<ARObject> objectsAtPoint = arController.getObjects(point);

// all ARObjects rendered within the given screen rectangle
ViewRect viewRect = new ViewRect(50, 50, 25, 25);
List<ARObject> objectsInViewRect = arController.getObjects(viewRect);

Selecting ARObjects

After retrieving an ARObject, you can select and unselect it by calling the select() and unselect() methods. Selecting an object causes it to change its properties (such as size and opacity) according to ARObject.SelectedItemParams. Only one ARObject may be selected at a time.

Note: A single ARObject cannot be focused and selected simultaneously. However, it is possible to have one ARObject focused and another ARObject selected at the same time.
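
A minimal selection sketch, assuming select() and unselect() are called on the retrieved ARObject itself (the exact receiver may differ; see the ARObject API reference):

ARObject arObject = arController.press(point);

if (arObject != null) {
  // selecting applies the ARObject.SelectedItemParams appearance
  // (for example, size and opacity)
  arObject.select();

  // ...later, restore the default appearance
  arObject.unselect();
}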

ARObject Occlusion

Figure 4. Occluded Objects in the Map View

Occlusion refers to whether a certain ARObject is behind a building, with respect to the user's point of view. If the point represented by the ARObject is not visible in real life because it is blocked by a building, then that point is considered occluded. Because occlusion is dependent on the user's point of view, this feature is dependent on having accurate building data for the user's location.

With the ARController you can check for occluded LiveSight objects and change their opacity by using the following methods, as shown in the sketch after this list:

  • ARController.isOccluded(ARObject arObject)
  • ARController.setOcclusionOpacity(float opacity)
  • ARController.setOcclusionEnabled(boolean enable)
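
A short sketch combining these methods, assuming arController and arIconObject have been set up as shown earlier:

// enable occlusion handling for LiveSight objects
arController.setOcclusionEnabled(true);

// render occluded objects at 30% opacity
arController.setOcclusionOpacity(0.3f);

// check whether a particular object is hidden behind a building
boolean occluded = arController.isOccluded(arIconObject);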