WebXR Integration

Integrate MultiSet VPS localization with your Web apps


To try the WebAR sample localization, open the developer portal in the Chrome browser on Android, navigate to the Maps section, and click the AR button.

1. Capturing Camera Intrinsics and Camera Image

Camera Intrinsics

  • Objective: Obtain the camera's intrinsic parameters: the focal lengths (fx, fy) and the principal point (px, py).

  • Method:

    • Use the projectionMatrix from the XR view to calculate intrinsics.

    • Compute the principal point using elements p[8] and p[9] of the projection matrix, adjusted by the viewport dimensions and offset.

    • Calculate focal lengths using p[0] and p[5], scaled by half the viewport width and height.

const getCameraIntrinsics = (projectionMatrix: Float32Array, viewport: XRViewport) => {
    const p = projectionMatrix;

    // Principal point in pixels (typically at or near the center of the viewport)
    const u0 = ((1 - p[8]) * viewport.width) / 2 + viewport.x;
    const v0 = ((1 - p[9]) * viewport.height) / 2 + viewport.y;

    // Focal lengths in pixels (these are equal for square pixels)
    const ax = (viewport.width / 2) * p[0];
    const ay = (viewport.height / 2) * p[5];

    return {
        fx: ax,
        fy: ay,
        px: u0,
        py: v0,
    };
}
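
For context, this helper would typically be called once per view inside the XR frame loop. A minimal sketch follows; refSpace is assumed to be a previously requested XRReferenceSpace:

const onXRFrame = (time: DOMHighResTimeStamp, frame: XRFrame) => {
    const session = frame.session;
    session.requestAnimationFrame(onXRFrame);

    const baseLayer = session.renderState.baseLayer!;
    const pose = frame.getViewerPose(refSpace);
    if (!pose) return;

    for (const view of pose.views) {
        // Each view carries its own projection matrix and viewport
        const viewport = baseLayer.getViewport(view)!;
        const { fx, fy, px, py } = getCameraIntrinsics(view.projectionMatrix, viewport);
        // fx, fy, px, py accompany the query image sent to the server
    }
};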

Capturing the Camera Image

  • Objective: Capture the current camera frame as an image for processing.

  • Method:

    • Within the XR session, use XRWebGLBinding to access the camera image texture.

    • Convert the WebGL texture to an image (see the sketch after this list) by:

      • Creating a framebuffer and attaching the texture.

      • Reading pixel data from the framebuffer.

      • Flipping the image vertically to correct orientation.

      • Drawing the pixel data onto a canvas.

      • Converting the canvas content to a JPEG data URL.
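
A minimal sketch of these steps, assuming the texture was obtained through an XRWebGLBinding (for example via getCameraImage in Chrome's raw camera access API); the helper name and JPEG quality are illustrative:

const cameraTextureToDataURL = (
    gl: WebGL2RenderingContext,
    texture: WebGLTexture,
    width: number,
    height: number
): string => {
    // Attach the camera texture to a framebuffer so its pixels can be read back
    const fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);

    const pixels = new Uint8Array(width * height * 4);
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.deleteFramebuffer(fb);

    // readPixels returns rows bottom-up; flip vertically to correct orientation
    const rowBytes = width * 4;
    const flipped = new Uint8ClampedArray(pixels.length);
    for (let y = 0; y < height; y++) {
        flipped.set(pixels.subarray(y * rowBytes, (y + 1) * rowBytes), (height - 1 - y) * rowBytes);
    }

    // Draw the pixel data onto a canvas and export it as a JPEG data URL
    const canvas = document.createElement("canvas");
    canvas.width = width;
    canvas.height = height;
    const ctx = canvas.getContext("2d")!;
    ctx.putImageData(new ImageData(flipped, width, height), 0, 0);
    return canvas.toDataURL("image/jpeg", 0.8);
};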


2. Making API Calls for Localization Query

  • Objective: Send the captured image and camera intrinsics to a server to obtain localization data.

  • Method:

    • Create a FormData object containing:

      • Camera intrinsics (fx, fy, px, py).

      • Image dimensions (width, height).

      • The image as a JPEG blob (queryImage).

      • The map identifier (mapId) and coordinate system flag (isRightHanded = true).

    • Use a function like queryAPI(formData) to send a POST request to the server with the form data, as sketched after this list.

    • Handle the server's response, which should include localization data such as position and rotation.
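
A sketch of assembling and sending the query. The endpoint URL here is a placeholder and the auth header is an assumption; consult the REST API docs and Credentials pages for the actual values:

// Placeholder endpoint; see the REST API docs for the actual query URL
const QUERY_URL = "https://<multiset-api-endpoint>/query";

const dataURLToBlob = async (dataURL: string): Promise<Blob> =>
    (await fetch(dataURL)).blob();

const queryAPI = async (formData: FormData, token: string) => {
    const response = await fetch(QUERY_URL, {
        method: "POST",
        // Assumed bearer-token auth; see the Credentials docs
        headers: { Authorization: `Bearer ${token}` },
        body: formData,
    });
    if (!response.ok) throw new Error(`Localization query failed: ${response.status}`);
    return response.json(); // expected to include position and rotation
};

// Build the form data with the fields listed above
const formData = new FormData();
formData.append("fx", fx.toString());
formData.append("fy", fy.toString());
formData.append("px", px.toString());
formData.append("py", py.toString());
formData.append("width", width.toString());
formData.append("height", height.toString());
formData.append("isRightHanded", "true");
formData.append("mapId", mapId);
formData.append("queryImage", await dataURLToBlob(imageDataURL), "query.jpg");

const result = await queryAPI(formData, token);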


3. Handling API Response and Applying Transformations

  • Objective: Update the AR scene based on the localization data received from the server.

  • Method:

    • Parse the API response to extract position and rotation data.

    • Create Three.js vectors and quaternions from the response data.

    • Construct transformation matrices to apply the position and rotation to the scene objects.

    • Adjust for any coordinate system differences (e.g., flipping around the z axis).

    • Apply the calculated transformations to the parent and child objects in the scene hierarchy (a consolidated sketch follows this list):

      • Child Object: Set its position and apply rotation adjustments.

      • Parent Object: Apply rotations to align the child correctly within the scene.

      • Grandparent Object: Adjust based on the camera's current position and orientation to correct residual alignment errors.

    • Update the scene to make the virtual content visible and correctly aligned with the real world.
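
As one way to picture the math, the sketch below collapses the child/parent/grandparent adjustments into a single transform on a map-root object. It assumes the response carries the camera pose in map coordinates as position/rotation fields; the applyLocalization helper and field names are illustrative, not the exact sample code:

import * as THREE from "three";

function applyLocalization(
    result: {
        position: { x: number; y: number; z: number };
        rotation: { x: number; y: number; z: number; w: number };
    },
    camera: THREE.Camera,
    mapRoot: THREE.Object3D
): void {
    const position = new THREE.Vector3(result.position.x, result.position.y, result.position.z);
    const rotation = new THREE.Quaternion(result.rotation.x, result.rotation.y, result.rotation.z, result.rotation.w);

    // Camera pose expressed in map coordinates
    const poseInMap = new THREE.Matrix4().compose(position, rotation, new THREE.Vector3(1, 1, 1));

    // Flip around the z axis to account for coordinate-system differences
    poseInMap.multiply(new THREE.Matrix4().makeRotationZ(Math.PI));

    // World pose of the map origin: current camera pose in the XR world,
    // composed with the inverse of the camera pose in the map
    camera.updateMatrixWorld(true);
    const mapOriginWorld = camera.matrixWorld.clone().multiply(poseInMap.invert());

    // Write the transform back onto the map root so virtual content aligns
    mapOriginWorld.decompose(mapRoot.position, mapRoot.quaternion, mapRoot.scale);
}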

WebXR VPS demo