FAQ
Frequently asked questions about the MultiSet developer platform
General
Q: What does MultiSet AI provide? A: MultiSet AI is an enterprise spatial computing platform that gives cameras, headsets, mobile apps, and robots precise 6-DoF localization to align AR content and operations to the real world. It powers navigation, inspection, training, and digital-twin overlays across complex indoor/outdoor environments.
Q: How is MultiSet different from other VPS or AR toolkits? A: Unlike marker/beacon systems or single-vendor solutions, MultiSet is scan-agnostic and cross-platform. It accepts data from various reality-capture tools (E57, point clouds, meshes), deploys across iOS/Android/headsets/robots, and supports multiple development frameworks—without vendor lock-in or hardware requirements.
Q: Is the MultiSet platform based on Google Cloud Anchors or Apple World Map? A: No. MultiSet built its VPS technology from the ground up, enabling it to scale to thousands of square feet and remain agnostic to device form factor and underlying platform.
Q: Which devices are supported? A: Any modern phone, tablet, AR headset, robot, or drone equipped with a standard RGB camera. LiDAR-equipped devices gain extra accuracy but aren't required.
Q: What SDKs are available? A: Unity, native Android, native iOS, WebXR, Meta Quest, and ROS 2 SDKs are available. Custom viewports can be built for any camera-based device on request.
Q: How do I get started? A: Download an SDK, import sample scenes, or book a demo. Most teams deploy and localize their first device in under 10 minutes.
Q: What are the costs of using MultiSet? A: A free sandbox tier supports prototyping. Production pricing scales by maps, cumulative area, and API calls. Custom SLAs are available for private deployments.
3D Mapping
Q: How can I map a space? A: Use the MultiSet app on your iPhone Pro or iPad Pro to scan the environment, or import an existing scan into the platform.
Q: What scanning formats does MultiSet accept? A: E57 files from providers such as Matterport, Leica, NavVis, XGrids, and Faro. We also support Matterport MatterPak files and LiDAR-generated meshes.
Q: How large a map can MultiSet handle? A: The MultiSet app can capture up to 5,000 sq ft (~465 m²) in a single session. Larger areas can be broken into multiple sessions and merged later. For imports, a single E57 file can be as large as 200,000 sq ft (~18,580 m²), and multiple files can be merged using MapSet.
Q: How does MapSet enable large-scale coverage? A: MapSet stitches many smaller scans into one seamless "mega-map." It preserves the detail of each fragment while providing a single coordinate system for the entire venue.
Q: How much overlap should adjacent maps have? A: If using automatic overlap-based merging, approximately 15-20% visual overlap between neighboring maps allows sufficient shared features for precise alignment. However, overlap is not required—maps without overlap can be merged manually in the app or through the developer portal.
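The ~15-20% guideline above can be sanity-checked before merging. The sketch below estimates overlap between two scan footprints modeled as axis-aligned 2D rectangles; real scan coverage is irregular, so treat this as a coarse estimate only, and note that the function names are illustrative, not part of any MultiSet SDK.

```python
# Rough check of the ~15-20% visual-overlap guideline for adjacent scans.
# Footprints are modeled as (min_x, min_y, max_x, max_y) rectangles in metres.

def rect_area(r):
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def overlap_fraction(a, b):
    """Overlap area as a fraction of the smaller footprint."""
    inter = (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))
    smaller = min(rect_area(a), rect_area(b))
    return rect_area(inter) / smaller if smaller > 0 else 0.0

# Two 20 m x 10 m scans offset by 16 m share a 4 m strip -> 20% overlap.
scan_a = (0.0, 0.0, 20.0, 10.0)
scan_b = (16.0, 0.0, 36.0, 10.0)
print(f"overlap: {overlap_fraction(scan_a, scan_b):.0%}")  # -> overlap: 20%
```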
Q: Can I update one area without re-mapping the whole venue? A: Yes. Rescan the affected zone and upload it. MapSet realigns it automatically while keeping the rest of the map online.
Q: How do I geo-reference a map? A: Record WGS-84 latitude, longitude, altitude, and compass heading for your origin point, then enter the values in the project's Geo Reference panel.
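To illustrate what the Geo Reference values do, here is a minimal sketch of converting a map-local offset to WGS-84 using the recorded origin latitude/longitude and compass heading. It uses a flat-earth (equirectangular) approximation, which is adequate over a single venue; the function and parameter names are assumptions for illustration, not the MultiSet API.

```python
import math

M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def local_to_wgs84(origin_lat, origin_lon, heading_deg, x_right, y_forward):
    """Map-local metres (x = right, y = along recorded heading) -> (lat, lon)."""
    h = math.radians(heading_deg)  # heading: degrees clockwise from true north
    east = x_right * math.cos(h) + y_forward * math.sin(h)
    north = -x_right * math.sin(h) + y_forward * math.cos(h)
    lat = origin_lat + north / M_PER_DEG_LAT
    lon = origin_lon + east / (M_PER_DEG_LAT * math.cos(math.radians(origin_lat)))
    return lat, lon

# A point 100 m straight ahead of an origin facing due east shifts longitude
# east by roughly 0.0012 degrees at latitude 40.
print(local_to_wgs84(40.0, -74.0, 90.0, 0.0, 100.0))
```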
Q: Can I export maps to other spatial tools? A: Yes. Maps can be exported as raw or textured GLB files for use in BIM, game engines, or analytics platforms.
Visual Positioning System (VPS)
Q: What is VPS and why do enterprises need it? A: A VPS gives devices precise 6-DoF position and orientation in the real world so AR content can align to physical assets exactly—indoors, outdoors, and at scale. It surpasses GPS, Wi-Fi, and beacons in accuracy. Robots and AMRs need VPS for precise navigation and task execution in dynamic environments. AI glasses rely on VPS to anchor contextual information and instructions to real-world locations for hands-free workflows.
Q: How accurate is the VPS? A: MultiSet delivers state-of-the-art localization with median positional error of about 6 cm. Drift is held to <1 cm at ranges under 10 m and <6 cm at 100 m.
Q: How does MultiSet VPS handle lighting and environmental changes? A: MultiSet's neural networks are trained on diverse lighting conditions and dynamic environments. The VPS remains robust under typical lighting variations, minor physical changes, and the presence of people.
Q: What hardware do I need to use MultiSet VPS? A: Any modern phone, tablet, AR headset, robot, or drone equipped with a standard RGB camera works. LiDAR-equipped devices gain extra accuracy but aren't required.
Q: How do I integrate VPS into my applications? A: Access it through our REST API, Unity SDK, or native SDKs for iOS and Android. We also support on-premises deployments and offline SDKs.
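For a sense of what a REST integration involves, here is a hedged sketch of assembling a localization request for one camera frame. The endpoint path, field names, and auth header shown are assumptions for illustration only; consult the actual MultiSet REST/SDK reference for the real request schema.

```python
import base64
import json

def build_localization_request(map_id, jpeg_bytes, fx, fy, cx, cy):
    """Assemble a hypothetical VPS localization payload for one camera frame.

    Field names here are illustrative, not the documented MultiSet schema.
    """
    return {
        "mapId": map_id,
        "image": base64.b64encode(jpeg_bytes).decode("ascii"),
        "intrinsics": {"fx": fx, "fy": fy, "cx": cx, "cy": cy},
    }

# Placeholder JPEG bytes stand in for a real captured frame.
payload = build_localization_request("map_123", b"\xff\xd8...", 1000.0, 1000.0, 640.0, 360.0)
print(json.dumps(payload)[:80])
# A real client would then POST the payload with its API key, e.g.:
# requests.post("https://<your-endpoint>/localize", json=payload,
#               headers={"Authorization": "Bearer <API_KEY>"})
```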
Q: Can I speed up localization with GPS or UWB? A: Yes. Passing HintPosition coordinates from GPS or UWB narrows the search space, localizes up to 15% faster, and reduces on-device CPU/memory load for large maps.
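The speed-up from a hint can be pictured as pre-filtering: the GPS/UWB position lets the client discard candidate sub-maps outside a search radius before any visual matching runs. The data shapes below are illustrative, not MultiSet's internal representation.

```python
import math

def filter_candidates(sub_maps, hint_xy, radius_m):
    """Keep only sub-maps whose centre lies within radius_m of the hint."""
    return [m for m in sub_maps if math.dist(m["center"], hint_xy) <= radius_m]

venue = [
    {"id": "lobby", "center": (0.0, 0.0)},
    {"id": "floor2", "center": (5.0, 40.0)},
    {"id": "warehouse", "center": (300.0, 120.0)},
]
# A hint near the lobby rules out the distant warehouse map entirely.
nearby = filter_candidates(venue, hint_xy=(2.0, 1.0), radius_m=50.0)
print([m["id"] for m in nearby])  # -> ['lobby', 'floor2']
```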
Q: Does MultiSet work indoors, outdoors, and across multiple floors? A: Yes. With adaptive exposure control, maps can span factory floors, loading docks, and outdoor areas while supporting seamless multi-floor transitions.
Q: Does MultiSet support indoor navigation and wayfinding? A: Yes. The platform uses Unity NavMesh for pathfinding, combined with map stitching, so users can move between areas of a multi-floor facility without re-localizing.
Q: How does MultiSet manage content: anchors vs. embedded in the map? A: Content is placed with respect to the map's global coordinates. If the map is updated or re-scanned, content placement may require an offset adjustment to maintain alignment.
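The offset adjustment mentioned above can be applied in bulk when the re-scan's frame differs from the old one by a known rigid transform. A real correction is a full 3D pose transform; this planar rotation-plus-translation sketch just illustrates the idea, and is not an SDK call.

```python
import math

def apply_map_offset(points, theta_deg, tx, ty):
    """Rotate each (x, y) content position by theta_deg, then translate by (tx, ty)."""
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

content = [(1.0, 0.0), (0.0, 2.0)]
# Suppose the new scan came out rotated 90 degrees and shifted 0.5 m along x.
print(apply_map_offset(content, 90.0, 0.5, 0.0))
```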
Q: Are there any VPS limitations I should be aware of? A: Vision-based systems may fail with drastic scene changes (e.g., structural removal). Plan maintenance for map refresh and use stitching for expansive spaces. Fallback hints like QR codes can help in challenging areas.
Q: Can I deploy without app installs (WebXR/App Clips)? A: Yes. The system supports WebXR endpoints plus iOS App Clips and Android Instant Apps for frictionless AR trials.
Object Tracking
Q: What does "markerless object tracking" mean? A: No stickers, QR codes, or fiducials are needed. The system uses the object's geometry and texture to recognize and track it in 3D.
Q: What's an "object anchor"? A: An anchor represents a pose (position and orientation) that remains aligned to the real world as tracking updates occur. Object anchors specifically tie this pose to a recognized object rather than an arbitrary location.
Q: How does Object Tracking function? A: Upload a textured 3D model to MultiSet's cloud, which generates an optimized tracking map. The device SDK then recognizes the object and maintains its pose locally for responsive, low-latency AR.
Q: Do I need textures if my source is CAD? A: Yes. CAD models are typically untextured. Convert to a polygonal mesh and apply realistic texture so the tracker has visual features to lock onto.
Q: Which objects track best? A: Asymmetrical objects with rich features (logos, panels, labels, varied geometry) perform best. Highly symmetrical or feature-poor/reflective items are challenging.
Q: Which file formats are supported for object tracking? A: Export models in .glb or .gltf format. CAD files and scans should be converted to polygonal GLB/glTF before upload.
Q: What are the size limitations for object tracking? A: Ideal object sizes are 1-25 feet (0.3-7.5 meters) on the longest dimension. Keep model files under 50 MB.
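The limits above lend themselves to a quick pre-upload check. The thresholds come from this FAQ (0.3-7.5 m on the longest dimension, file under 50 MB); the function itself is a local convenience sketch, not part of any SDK.

```python
MIN_DIM_M, MAX_DIM_M = 0.3, 7.5   # guideline range for the longest dimension
MAX_FILE_BYTES = 50 * 1024 * 1024  # 50 MB file-size guideline

def check_object_model(bbox_dims_m, file_size_bytes):
    """Return a list of guideline violations for a candidate tracking model."""
    problems = []
    longest = max(bbox_dims_m)
    if longest < MIN_DIM_M:
        problems.append(f"longest dimension {longest} m is under {MIN_DIM_M} m")
    if longest > MAX_DIM_M:
        problems.append(f"longest dimension {longest} m exceeds {MAX_DIM_M} m")
    if file_size_bytes > MAX_FILE_BYTES:
        problems.append("file exceeds 50 MB")
    return problems

# A 2 m object in a 12 MB file passes with no violations.
print(check_object_model((1.2, 0.8, 2.0), 12 * 1024 * 1024))  # -> []
```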
Q: Can Object Tracking track moving or deformable objects? A: Object tracking is currently designed for static, rigid objects during AR sessions. Users/cameras can move around the object, but not the object itself.
Q: What's the difference between 360° View and Side View? A: 360° View recognizes the object from any angle. Side View optimizes for scenarios where viewing is mostly from sides, and typically processes faster.
Q: How long does object tracking processing take? A: Typically under 10 minutes, depending on model complexity.
Integration & Deployment
Q: Will MultiSet fit our existing stack and security requirements? A: Yes. The platform integrates with current scanners and devices, offering deployment flexibility via public cloud, private cloud/VPC, self-hosted Kubernetes, or fully on-device options. Data is encrypted in transit and at rest, with enterprise identity controls.
Q: Can my scanned data stay in a private cloud? A: Yes. MultiSet offers on-premises deployments and offline SDKs, ensuring your scan data never leaves your infrastructure.
Q: Can I deploy fully offline? A: Yes. Backend and MapSets can operate on-device or on private clusters with no external network calls, meeting strict data governance and air-gapped environment requirements.
Q: How secure is my mapping data? A: Data is encrypted with AES-256 at rest and TLS 1.3 in transit. You can deploy in MultiSet Cloud, a single-tenant VPC, self-hosted Kubernetes, or fully on-device for air-gapped sites.
Q: Does MultiSet support robotics (AMR/AGV) and headsets? A: Yes. ROS/robotics integration and HMD support enable consistent localization across mobile devices, wearables, and robots.
Getting Started & Outcomes
Q: How fast can we go from pilot to campus-wide coverage? A: Teams typically establish initial localization in minutes using sample data, then expand by stitching scans into map sets for seamless multi-building continuity while maintaining consistent performance across locations.
Q: What outcomes should we expect in the first 90 days? A: Expected benefits include guided workflows, faster training, AR-assisted inspection/QA, context-aware IoT overlays, and improved robotics navigation—measured through reduced task time, error rates, downtime, and increased digital procedure adoption.