Sample Scenes


Last updated 8 days ago


Below are the sample scenes included in the MultiSet Unity SDK.

Localization

The Localization scene captures multiple camera frames and other sensor data and sends them in the localization API request to determine the device's accurate pose. Since this method processes multiple frames, localization may take up to approximately 5 seconds. However, it provides an accurate pose even in challenging conditions where the environment undergoes constant minor changes.

The two main parameters to adjust in this scene are the Number of Frames and the Capture Interval between frames. A higher number of frames results in better pose accuracy but increases localization response time.
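The capture loop described above can be sketched as a Unity coroutine. This is a minimal illustration, not the actual MultiSet SDK code: the class name, `CaptureCameraFrame`, and `SendLocalizationRequest` are hypothetical stand-ins for whatever the SDK does internally; only the coroutine pattern itself is standard Unity.

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: capture N frames at a fixed interval, then send them
// together in a single localization request. Not the actual SDK API.
public class MultiFrameLocalizerSketch : MonoBehaviour
{
    [SerializeField] int numberOfFrames = 5;       // more frames -> better accuracy, slower response
    [SerializeField] float captureInterval = 0.5f; // seconds between captures

    void Start() => StartCoroutine(CaptureAndLocalize());

    IEnumerator CaptureAndLocalize()
    {
        var frames = new List<Texture2D>();
        for (int i = 0; i < numberOfFrames; i++)
        {
            frames.Add(CaptureCameraFrame());
            yield return new WaitForSeconds(captureInterval);
        }
        SendLocalizationRequest(frames);
    }

    Texture2D CaptureCameraFrame()
    {
        // Hypothetical stand-in: the SDK grabs the AR camera image here.
        return null;
    }

    void SendLocalizationRequest(List<Texture2D> frames)
    {
        // Hypothetical stand-in for the SDK's multi-frame localization call.
    }
}
```

The sketch makes the trade-off concrete: total capture time is roughly `numberOfFrames × captureInterval`, which is why raising the frame count improves accuracy at the cost of response time.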

Single Frame Localization

The Single Frame Localization scene is a fundamental demonstration of how the platform performs localization within a mapped environment. It gives developers a clear visualization of the SDK's core components, showcasing essential SDK prefabs, including the MultiSet SDK Manager, which handles authentication, and the Map Localization Manager, which manages the localization process.

This scene uses a single camera frame to localize against the map, making it ideal for situations where you want a quick response from the localization API (~3 seconds) and don't require precise accuracy.

Auto Localize: Automatically starts localization when the AR session begins.

Relocalization: Triggers a localization request when AR session tracking is lost or limited, and also when the app returns from the background.

Confidence Check: Enable this option to filter localization responses by confidence. A higher confidence value results in better accuracy but may reduce the number of successful localization attempts.

Confidence Threshold: The minimum confidence value a localization response must meet to be accepted. Applies only when Confidence Check is enabled.
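The four options above can be sketched together in one component. This is a hypothetical illustration, assuming names like `Localize`, `ApplyPose`, and `OnLocalizationResponse` that are not the SDK's actual API; the AR Foundation hooks (`ARSession.stateChanged`, `Application.focusChanged`) are real Unity APIs.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hypothetical sketch of the Single Frame Localization scene options.
public class SingleFrameOptionsSketch : MonoBehaviour
{
    [SerializeField] bool autoLocalize = true;
    [SerializeField] bool relocalization = true;
    [SerializeField] bool confidenceCheck = true;
    [SerializeField] float confidenceThreshold = 0.8f;

    void Start()
    {
        if (autoLocalize) Localize();                 // Auto Localize: run at session start
        if (relocalization)
        {
            ARSession.stateChanged += OnStateChanged; // tracking lost/limited
            Application.focusChanged += OnFocus;      // app returns from background
        }
    }

    void OnStateChanged(ARSessionStateChangedEventArgs args)
    {
        // Any state below SessionTracking means tracking is lost or limited.
        if (args.state < ARSessionState.SessionTracking) Localize();
    }

    void OnFocus(bool focused)
    {
        if (focused) Localize();
    }

    // Confidence Check: accept a response only above the threshold.
    void OnLocalizationResponse(float confidence, Pose pose)
    {
        if (confidenceCheck && confidence < confidenceThreshold) return; // rejected
        ApplyPose(pose);
    }

    void Localize()          { /* hypothetical: send a single-frame localization request */ }
    void ApplyPose(Pose pose){ /* hypothetical: align AR content to the returned pose */ }
}
```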

Navigation

The Navigation Scene provides a template for building an AR navigation app using the MultiSet SDK. This scene includes pre-configured scripts and UI elements to handle Navigation Points of Interest (POIs), and utilizes Unity's NavMesh package for path detection and pathfinding.

Getting Started:

  1. Open the Navigation sample scene and install the Unity Navigation package (com.unity.ai.navigation)

  2. Replace the default Sample Map with your own map: go to Map Space > delete the sample Map > add your own Map

  3. Go to Map Space > NavigationContent > NavMesh, then bake the surface to create paths

  4. Create and configure POIs (Points of Interest) within your map for user navigation

  5. Test your scene using the Editor simulator mode

  6. Build and deploy to Android or iOS devices
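Once the NavMesh is baked (step 3) and POIs are placed (step 4), pathfinding uses Unity's standard NavMesh API. The sketch below is an assumption about how a path to a POI might be drawn, not the sample scene's actual script; `NavMesh.CalculatePath` and `LineRenderer` are standard Unity APIs.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch: compute a path from the user's localized position to a POI on the
// baked NavMesh and draw it with a LineRenderer.
public class PoiPathSketch : MonoBehaviour
{
    [SerializeField] Transform user; // e.g. the AR camera
    [SerializeField] Transform poi;  // target Point of Interest
    [SerializeField] LineRenderer line;

    void Update()
    {
        var path = new NavMeshPath();
        bool found = NavMesh.CalculatePath(
            user.position, poi.position, NavMesh.AllAreas, path);

        if (found && path.status == NavMeshPathStatus.PathComplete)
        {
            // path.corners holds the waypoints along the baked surface.
            line.positionCount = path.corners.Length;
            line.SetPositions(path.corners);
        }
    }
}
```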

Check out the tutorial below, which explains the process end-to-end.

Training

The Training Scene provides a template for building AR location-guided training using the MultiSet Unity SDK. This scene includes pre-configured scripts and UI elements to handle training steps and navigation between steps in physical spaces, and it utilizes Unity's NavMesh package for path detection and pathfinding.

Getting Started:

  1. Open the Training sample scene and install the Unity Navigation package (com.unity.ai.navigation)

  2. Replace the default Sample Map with your own map: go to Map Space > delete the sample Map > add your own Map

  3. Go to Map Space > NavigationContent > NavMesh, then bake the surface to create paths

  4. Create and configure POIs (Points of Interest) within your map for user navigation (the locations users need to visit to perform the tasks)

  5. Create a Training sequence and add steps under it.

  6. Connect each step with its relevant POI for navigation to the task's location

  7. Test your scene using the Editor simulator mode

  8. Build and deploy to Android or iOS devices
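Steps 5 and 6 above pair each training step with a POI. A minimal data model for that pairing might look like the following; these types are hypothetical illustrations, not the SDK's actual training classes.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: a training sequence where each step carries its
// instructions and the POI the user must navigate to before performing it.
[Serializable]
public class TrainingStep
{
    public string instructions; // what the user does at this location
    public Transform poi;       // where the task is performed
}

public class TrainingSequenceSketch : MonoBehaviour
{
    [SerializeField] List<TrainingStep> steps = new List<TrainingStep>();
    int current;

    public TrainingStep CurrentStep => steps[current];

    // Advance once the user completes the task at the current POI.
    public void NextStep()
    {
        if (current < steps.Count - 1)
        {
            current++;
            // Hypothetical: re-target navigation to CurrentStep.poi here,
            // e.g. by recomputing the NavMesh path to the new POI.
        }
    }
}
```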