# Multiplayer Sample

A turnkey sample for building a shared-AR experience on top of the MultiSet SDK. Two or more participants localize against the **same MultiSet Map (or MapSet)** and see each other's live pose in a shared coordinate space. When a teammate walks behind a real-world wall, the humanoid avatar automatically swaps to a skeleton silhouette so you always know where they are.

The sample supports two deployment configurations:

1. **Mobile ↔ Mobile** — the Unity app installed on two mobile devices (iOS or Android, in any combination) on the same Wi-Fi network.
2. **iOS host ↔ Meta Ray-Ban client** — the Unity app on iOS as host, the `MultisetWearable` Xcode app (from [wearable-vps-samples](https://github.com/MultiSet-AI/wearable-vps-samples.git)) as a wearable client that streams video from Meta Ray-Ban glasses. *(Wearable peer discovery uses Apple MultipeerConnectivity, so the Unity host must be iOS for this flow.)*

***

## Scene

`Assets/MultiSet/Scenes/MultiplayerSample/MultiPlayerSample.unity`

Key components in the scene:

| Component                        | Purpose                                                                                                                                                           |
| -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `SingleFrameLocalizationManager` | Localizes the device against a MultiSet Map / MapSet.                                                                                                             |
| `MultiplayerManager`             | Sends and receives pose updates (via Unity Netcode for mobile peers, via the MultipeerConnectivity bridge for wearable peers) and spawns remote-player visuals.   |
| `MultisetMultipeerBridge`        | Thin C# wrapper around the native iOS MultipeerConnectivity plugin. Must live on a GameObject named exactly `MultisetMultipeerReceiver` so the native plugin can deliver messages to it by name. |
| `NetworkUI`                      | UI for Start Host / Start Client, name entry, and host IP.                                                                                                        |
| `LocalizationSuccessDataHandler` | Notifies the `MultiplayerManager` when the device has successfully localized.                                                                                     |
| `MapMeshColliderSetup`           | Adds `MeshCollider`s to the loaded map mesh and hides it from the camera so it can be used for line-of-sight raycasts (drives the skeleton-through-walls effect; a setup sketch appears below). |

See the [SingleFrameLocalizationManager](/unity-sdk/api-reference/singleframelocalizationmanager.md) API reference for details on the localization component used here.

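As a reference, here is a minimal sketch of the kind of work `MapMeshColliderSetup` performs, using only standard Unity APIs. The class and method names below are illustrative; the script that ships with the sample may differ in detail.

```csharp
using UnityEngine;

// Minimal sketch: give every mesh in the loaded map a collider, move it to
// the CollisionMesh layer, and stop the camera from rendering it.
public class MapMeshColliderSetupSketch : MonoBehaviour
{
    // Call once the map mesh has finished loading.
    public void PrepareMapMesh(GameObject mapRoot, Camera arCamera)
    {
        int layer = LayerMask.NameToLayer("CollisionMesh");

        foreach (MeshFilter filter in mapRoot.GetComponentsInChildren<MeshFilter>())
        {
            // A MeshCollider makes the map raycastable for line-of-sight checks.
            var meshCollider = filter.gameObject.GetComponent<MeshCollider>();
            if (meshCollider == null)
            {
                meshCollider = filter.gameObject.AddComponent<MeshCollider>();
                meshCollider.sharedMesh = filter.sharedMesh;
            }
            filter.gameObject.layer = layer;
        }

        // Hide the layer from the camera; the colliders keep working.
        arCamera.cullingMask &= ~(1 << layer);
    }
}
```
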
***

## One-time Setup

Before running the sample for the first time:

1. **Credentials** — open `MultiSetConfig` (in `Assets/MultiSet/…`) and set your `clientId` and `clientSecret` from the MultiSet dashboard.
2. **Map** — on the `SingleFrameLocalizationManager` component in the `MultiPlayerSample` scene, enter either:

   * a `mapCode` (single-map localization), **or**
   * a `mapsetCode` (multi-map localization).

   The same code must be used on **every** device that joins the session.
3. **Layer** — add a new user layer named exactly **`CollisionMesh`** (Edit → Project Settings → Tags and Layers). The `MapMeshColliderSetup` script tags the loaded map mesh with this layer so the remote avatar can switch to its skeleton representation when occluded by the real world (a line-of-sight sketch appears at the end of this section).
4. **Build & install** — build the scene for your target platform (iOS or Android) and install the app on each participating device.

> All devices must be on the **same Wi-Fi network** so they can reach each other over LAN. The client enters the host's IP address to connect — no pairing code is required.

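The skeleton-through-walls effect comes down to a line-of-sight check against the `CollisionMesh` layer created in step 3. Below is a minimal sketch of that check, assuming the remote avatar carries separate humanoid and skeleton child objects; the sample's actual visuals and script names may differ.

```csharp
using UnityEngine;

// Minimal sketch: swap a remote avatar between humanoid and skeleton
// visuals depending on whether the map mesh blocks the line of sight.
public class AvatarOcclusionSketch : MonoBehaviour
{
    public Transform localCamera; // the local device's AR camera
    public GameObject humanoid;   // shown while the player is directly visible
    public GameObject skeleton;   // shown while the player is occluded

    void Update()
    {
        int mask = LayerMask.GetMask("CollisionMesh");

        // A hit between the camera and the avatar means a piece of the
        // map mesh (for example a wall) is in the way.
        bool occluded = Physics.Linecast(localCamera.position, transform.position, mask);

        humanoid.SetActive(!occluded);
        skeleton.SetActive(occluded);
    }
}
```
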
***

## Flow 1 — Two Mobile Devices

Use this flow when both participants run the Unity app. The two devices can be any mix of iOS and Android — the transport is Unity Netcode over UTP, so the platforms interoperate. A minimal connection sketch appears at the end of this flow.

1. Launch the app on both devices and open the **MultiPlayerSample** scene.
2. On each device, enter a **player name**.
3. On **Device A** (host): tap **Start Host**.
4. On **Device B** (client): enter Device A's **IP address** in the input field, then tap **Start Client**. Once connected, the status text will read *"Connected to host"*.
5. Point each device at the mapped space and **localize** (the `SingleFrameLocalizationManager` handles this — trigger a localization from the scene's UI). Both devices must localize against the same map before pose sharing begins.
6. Once both devices are localized, each device renders the other player's avatar at their real-world position. If the other player walks behind a physical wall that is part of the map mesh, their avatar automatically swaps to a **skeleton silhouette**, so they remain visible through occlusion.

> **Finding the host's IP:**
>
> * **iOS** — *Settings → Wi-Fi → (i) next to the network → IP Address*.
> * **Android** — *Settings → About phone → Status* (or *Settings → Network & internet → Wi-Fi → (network) → View more*).

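For orientation, the **Start Host** / **Start Client** buttons map onto standard Unity Netcode over UTP calls, roughly as below. This is an illustrative sketch rather than the sample's `NetworkUI` source, and the port value is an assumption (7777 is Netcode's default UTP port).

```csharp
using Unity.Netcode;
using Unity.Netcode.Transports.UTP;
using UnityEngine;

// Illustrative sketch of the Start Host / Start Client actions over UTP.
public class NetworkUISketch : MonoBehaviour
{
    const ushort Port = 7777; // assumption: Netcode's default UTP port

    public void StartHost()
    {
        NetworkManager.Singleton.StartHost();
    }

    public void StartClient(string hostIp)
    {
        // hostIp is the address typed into the client's input field.
        var transport = NetworkManager.Singleton.GetComponent<UnityTransport>();
        transport.SetConnectionData(hostIp, Port);
        NetworkManager.Singleton.StartClient();
    }
}
```
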
***

## Flow 2 — Meta Ray-Ban Glasses (Wearable Client)

Use this flow when one participant is wearing Meta Ray-Ban glasses and the other is holding an iOS device running the Unity app.

**Roles are fixed in this configuration:**

* The **Unity app** always acts as **Host**, and must run on **iOS** (wearable-peer discovery uses Apple MultipeerConnectivity, which is unavailable on Android).
* The **`MultisetWearable` Xcode app** (iOS) always acts as **Client**. It streams video from the paired Meta Ray-Ban glasses, localizes on the phone, and forwards the glasses' pose to the Unity host.

### Prerequisites

1. Clone and build the companion iOS app:

   ```bash
   git clone https://github.com/MultiSet-AI/wearable-vps-samples.git
   open wearable-vps-samples/iOS/MultisetWearable/MultisetWearable.xcodeproj
   ```

   Build and install `MultisetWearable` on an iPhone that is paired with Meta Ray-Ban glasses.
2. In the `MultisetWearable` app settings, enter the **same `mapCode` / `mapsetCode`** and the same `clientId` / `clientSecret` used in the Unity scene.

### Steps

1. **Unity device** — open the `MultiPlayerSample` scene, enter a host name, and tap **Start Host**.
2. **Wearable device** — launch `MultisetWearable`, pair your Meta Ray-Ban glasses, and from the landing page open **Multiplayer Demo**.
3. Tap **Join Session**. The wearable app browses the local Wi-Fi for the Unity host and connects automatically (no IP entry required — MultipeerConnectivity handles discovery).
4. On the wearable device, tap **Start Streaming & Localize**. The glasses begin streaming video to the phone, and the phone localizes that stream against the MultiSet map.
5. Once localized, the glasses' pose is forwarded to the Unity host at ~20 Hz. The Unity device now renders the Meta Ray-Ban user's avatar at their real-world position, with the same occlusion / skeleton behavior as Flow 1 (a smoothing sketch follows these steps).
6. As the glasses wearer moves through the space, the wearable app re-localizes every ~1 s to keep the pose fresh, and the Unity host continuously updates the avatar.

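Because pose updates arrive at roughly 20 Hz while rendering usually runs faster, the receiving side typically smooths the remote avatar toward the latest pose instead of snapping to it. Here is a minimal sketch of that smoothing, assuming an `OnPoseReceived` entry point (a stand-in name, not the sample's actual bridge API):

```csharp
using UnityEngine;

// Sketch: ease a remote avatar toward the most recently received pose.
// OnPoseReceived is a stand-in; how the bridge actually delivers poses
// to this component is an assumption, not the sample's API.
public class RemoteAvatarSmoothingSketch : MonoBehaviour
{
    Vector3 targetPosition;
    Quaternion targetRotation;

    // Call whenever a new pose arrives from the network (~20 Hz).
    public void OnPoseReceived(Vector3 position, Quaternion rotation)
    {
        targetPosition = position;
        targetRotation = rotation;
    }

    void Update()
    {
        // Exponential smoothing hides the gap between 20 Hz updates
        // and the per-frame render rate.
        float t = 1f - Mathf.Exp(-10f * Time.deltaTime);
        transform.position = Vector3.Lerp(transform.position, targetPosition, t);
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, t);
    }
}
```
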
***

## Related

* [Multiplayer AR](/unity-sdk/multiplayer-ar.md) — concepts behind the shared coordinate system (MapSpace) that this sample builds on.
* Unity sample — `Assets/MultiSet/Scenes/MultiplayerSample/`
* Meta Ray-Ban companion app — [wearable-vps-samples](https://github.com/MultiSet-AI/wearable-vps-samples.git), iOS target `MultisetWearable`.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.multiset.ai/unity-sdk/sample-scenes/multiplayer-sample.md?ask=<question>
```
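
For example, a URL-encoded question about this sample might look like:

```
GET https://docs.multiset.ai/unity-sdk/sample-scenes/multiplayer-sample.md?ask=What%20port%20does%20the%20multiplayer%20sample%20use%20for%20client%20connections%3F
```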

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
