E6. Wearable AR: Spatial Awareness
Learning Outcomes
- Differentiate wearable AR from mobile AR and explain its benefits for engineering workflows. Before class, review how real-time environment scanning and anchoring enable realistic problem-solving, and connect these capabilities to specific engineering use cases.
- Set up a Unity project for Magic Leap 2 development. In preparation, configure the Magic Leap Hub, install AR Foundation and the Magic Leap XR Plugin, import the official examples package, and build/deploy to the headset for testing.
- Implement plane detection in wearable AR. Ahead of the session, study the role of horizontal and vertical planes, and practice using ARPlaneManager with the Magic Leap XR Plugin to confirm plane detection in an example scene or simulator.
- Explain how meshing improves environmental awareness in wearable AR. For your pre-class task, understand how meshing supports collision detection and scene understanding, and review the configuration of the meshing subsystem in Unity.
- Configure and test spatial audio in wearable AR applications. Before class, learn how to set up MLListener and MLPointSource components, explore how directionality and distance affect audio, and optionally preview 3D spatial audio behavior in Unity.
What Is Wearable AR?
Wearable AR (also known as head-mounted AR, head-worn AR, or spatial AR) seamlessly integrates virtual content with the physical world by continuously understanding and mapping the environment in real time. Devices like the Magic Leap 2 and HoloLens 2 are prime examples of wearable AR technology. These devices use a combination of cameras, sensors, and specialized software to scan, reconstruct, and interpret the surrounding environment, enabling digital objects to anchor to, occlude, and interact with real-world geometry in a natural and believable manner. The real power of wearable AR comes from its ability to tie digital content directly to the user’s physical surroundings, creating a unified, immersive experience that feels both interactive and dynamic. This close integration between digital and physical spaces is particularly impactful in a wide range of fields, from entertainment to industrial applications, with a heavy emphasis on engineering use cases.
Use Cases
Wearable AR offers a comprehensive suite of capabilities that allow users to interact with the world in new and innovative ways. These include the ability for engineers and designers to visualize complex assemblies in their actual locations, guide assembly and maintenance tasks with step-by-step holographic instructions, validate the fit and tolerance of virtual parts against real-world surfaces, train operators in immersive environments, and collaborate remotely with team members over shared virtual spaces. This broad spectrum of applications significantly enhances productivity, decision-making, and safety in engineering, manufacturing, and beyond. Common use cases include:
- Visualization of Complex Assemblies: Engineers can overlay 3D CAD models directly onto physical machinery or structures to see how parts fit together, identifying potential issues or improvements before actual construction or assembly begins.
- Guided Assembly and Maintenance: Wearable AR devices can provide real-time, holographic instructions that are anchored to specific parts of the machinery. This step-by-step guidance helps technicians assemble or maintain equipment with high precision, reducing human error and downtime.
- Fit and Tolerance Validation: Wearable AR can detect real-world surfaces and edges, then mesh virtual parts against these physical components. This helps engineers check for fit and tolerance issues in real-time, improving product design and quality assurance processes.
- Operator Training: AR can simulate realistic operational environments for training purposes, providing operators with spatialized sound cues and natural input methods to practice tasks without the need for actual physical equipment. This is particularly valuable for hazardous or complex procedures that require practice before they can be performed in the real world.
- Remote Collaboration: Wearable AR enables teams to collaborate across distances by sharing spatial anchors and telepresence views that are tied to specific factory zones or workspaces. This allows team members to interact with virtual representations of objects and provide guidance or feedback in real-time, regardless of their physical location.
- Prototyping and Design Iteration: Engineers can quickly test and iterate on prototypes by virtually adding or modifying parts in their real-world environments. This accelerates the design process and reduces the time and cost associated with physical prototyping.
- Enhanced Safety and Hazard Detection: In high-risk environments, wearable AR can be used to identify hazards and provide real-time safety alerts or guidance. This could include highlighting unsafe areas, offering safety reminders, or guiding workers through safe operations.
- Interactive Customer Demonstrations: In product development and marketing, AR can be used to demonstrate products in an interactive, immersive way. Customers or stakeholders can explore virtual models of products within their own environments, making the sales or design process more engaging and effective.
While this content primarily focuses on Magic Leap 2 as an example, the principles and concepts discussed apply broadly to other devices, such as HoloLens 2, enabling a wide range of AR applications across different industries.
Spatial Awareness
For wearable AR devices like Magic Leap 2 or HoloLens 2, spatial awareness is foundational to blending digital content seamlessly with the physical environment. These systems rely on spatial mapping technologies that allow virtual objects to behave as if they truly exist in the same space as the user.
- Plane Detection: Automatically discovers horizontal and vertical planes (floors, tables, walls) so you can place and align virtual models in real space. In wearable AR, this enables stable placement of digital instructions, guides, or schematics directly on relevant surfaces in the user’s field of view.
- Meshing: Builds a full 3D mesh of your environment—capturing every surface, corner, and obstacle—for physics interactions, collision testing, and realistic occlusion. This continuous environmental mapping allows AR headsets to adapt content dynamically as the user moves through complex spaces like factory floors or construction sites.
- Occlusion: Uses meshed geometry (or depth data) to hide virtual objects behind real‑world objects, preserving correct depth cues and immersion. For wearable AR, this ensures that virtual overlays respect real-world positioning, which is critical for training simulations and precise assembly tasks.
- Spatial Audio: Renders sounds in true 3D around you—bouncing off walls and attenuating behind obstacles—so that audio cues draw your attention to critical areas in a training or simulation scenario. With wearable AR, spatial audio enhances situational awareness, guiding users even when visual attention is occupied or obstructed.
These core spatial awareness topics will be covered in the remainder of this session. They provide the foundation for creating spatially aware, wearable AR apps in practical scenarios.
Tracking Objects
Wearable AR devices excel at recognizing and tracking objects in the real world, enabling persistent and context-aware interactions. This capability ensures that digital overlays remain accurately positioned even as the user or environment changes.
- Image Tracking: Detects printed markers (QR, ArUco, barcodes) to instantly anchor content to precise physical locations—ideal for equipment tagging, part identification, and AR‑driven instructions. On headsets, this allows hands-free access to step-by-step guidance or technical data whenever a tagged object is viewed.
- Spatial Anchors: Saves and restores world‑locked anchor points across sessions, letting users reopen an application and see holograms exactly where they left them—crucial for multi‑day inspections or collaborative walkthroughs. In wearable AR, this persistence ensures that project progress or inspection notes remain tied to specific machinery or locations without re-calibration.
Image tracking and spatial anchors will be covered in E7. These concepts are essential for creating stable and precise AR experiences.
Multimodal User Input
Wearable AR devices provide a range of input methods that let users interact with digital content in the most natural or practical way for their environment. These multimodal options enhance flexibility, whether users are hands-free, operating tools, or navigating complex data.
- Controller Inputs: Leverages the controller’s bumper, trigger, touchpad and/or buttons to navigate menus, manipulate 3D gizmos, or scrub through timelines. In wearable AR, the controller provides precision and tactile feedback for tasks like CAD manipulation or detailed configuration in field applications.
- Voice Commands: Offers hands‑free control and menu navigation via custom voice intents—helpful in scenarios where users’ hands are busy handling tools or equipment. For wearable AR users, this enables uninterrupted workflows, such as technicians calling up schematics while actively servicing machinery.
- Hand Gestures: Allows intuitive pinch, grasp, and point gestures to pick up, rotate, or scale virtual objects—mimicking real‑world tool interactions without a controller. This gesture control in wearable AR fosters natural interaction in sterile or hazardous environments where physical controllers aren’t practical.
- Gaze: Detects gaze direction, blinks, and fixation points to highlight or select UI elements just by looking at them—enabling faster, more natural workflows and powerful analytics on user attention. On head-mounted devices, gaze tracking enhances UI responsiveness and can streamline data collection on how users engage with complex visual information.
These multimodal user inputs for Magic Leap 2, including controller inputs, voice commands, hand gestures, and gaze, will be covered in E8. Mastering these concepts will enable you to create more natural and intuitive interaction with AR environments.
Developing for Magic Leap 2
Deploying your AR application to Magic Leap 2 enables immersive, hands-free experiences on a spatial computing headset designed for enterprise and industrial applications. This tutorial walks you through the steps to develop and deploy a Hello World app to the Magic Leap 2 using Unity 6 LTS, AR Foundation, XR Interaction Toolkit, and OpenXR. We will open the Magic Leap Examples project, configure it for Magic Leap 2 with OpenXR, and deploy the HelloCube example scene to the headset.
Configure Magic Leap Hub
To build applications for Magic Leap 2, you need to install Magic Leap Hub 3, which manages SDKs, example projects, and essential tools like MRTK for Magic Leap.
- Install Magic Leap Hub 3:
- Visit the Magic Leap Developer Portal.
- Navigate to the Download: Magic Leap 2 Tools section.
- Download the Magic Leap Hub 3 installer for your operating system.
- Run the installer and complete the setup wizard.
- For Windows Users:
- Install the Microsoft Visual C++ Redistributables from the official Microsoft support page.
- Download the redistributable package for Visual Studio 2015, 2017, 2019, and 2022.
- Install the x64 version (typically required).
Failure to install these redistributables may cause the ML Hub or Application Simulator to fail.
- Install Magic Leap Packages for Unity and OpenXR:
- Launch Magic Leap Hub 3.
- In the left sidebar, click Packages.
- In the Package Manager, install Unity® Package and OpenXR Samples. Optionally, install Unity® Examples.
Set Up the Unity Project
The Magic Leap Examples Project is a Unity package showcasing sample scenes that demonstrate key Magic Leap 2 features using OpenXR, including interaction, spatial mapping, eye tracking, and more. It serves as a practical starting point for developers to learn and prototype XR applications on the Magic Leap platform. We will begin by using the Magic Leap Examples Project as a foundational starting point to understand core XR concepts and features, and gradually add custom capabilities and interactive experiences.
- Open the Installed Project:
- Navigate to %USERPROFILE%\MagicLeap\tools\unity\<SDK_VERSION>\MagicLeap_Examples. Replace <SDK_VERSION> with the version you installed via Magic Leap Hub 3.
- Open Unity Hub.
- Click Open and navigate to the MagicLeap_Examples folder.
- Select the folder and click Open.
- To avoid modifying the original samples, go to File > Save As (Project) (or manually copy the project folder).
- Rename and save it, for example: MagicLeapX.
- Review the packages included in the preconfigured project via Package Manager as well as the Magic Leap features under XR Plug-in Management > OpenXR Feature Group.
- Explore the Built-In Example Scenes:
- Go to Assets/Scenes/.
- Open and explore the built-in scenes, which demonstrate core Magic Leap 2 features and provide inspiration for your custom developments.
- Hello, Cube: A simple red cube floating in space — the default scene that serves as a “Hello World” for Magic Leap development.
- Control: Demonstrates best practices for interacting with the Control controller, including 6DoF tracking and connection status.
- Eye Tracking: Uses the Gaze Interaction profile to track your eye gaze and display feedback.
- Facial Expression: Showcases facial expression tracking capabilities when the necessary hardware and permissions are available.
- Global Dimmer: Demonstrates environmental dimming to reduce distractions around the user.
- Hands: Maps OpenXR hand gestures to Unity Input Actions using the XR Hands package. Tracks pinch gestures and joint data.
- Light Estimation: Captures ambient light data from the environment to adapt scene lighting dynamically.
- Marker Tracking: Detects and tracks visual markers in the environment for AR anchoring and content placement.
- Meshing: Visualizes the real-world mesh detected by the Magic Leap’s sensors, useful for spatial mapping.
- Planes: Detects surfaces (vertical, horizontal) and tags them semantically, such as floors, walls, ceilings, and tables.
- Pixel Sensor: Captures low-level pixel data for custom computer vision applications.
- Segmented Dimmer: Demonstrates segmented environmental dimming via alpha blending with specific rendering settings.
- Spatial Anchor: Provides spatial anchoring capabilities to pin virtual objects to real-world locations.
- Occlusion: Shows how to occlude virtual content with real-world geometry for more realistic integration.
- Voice Intents: Implements voice command recognition and intent handling within XR applications.
We will reference and build upon several of these examples throughout the course to create customized, interactive XR applications tailored for engineering contexts.
- Explore the Scene and Hierarchy (HelloCube):
- Open the HelloCube.unity scene and explore its key components in the Hierarchy window. Understanding these will help you extend and customize interactions later.
- ML Rig: This is the main rig responsible for tracking the user’s head and controller. It contains Camera Offset, which contains Main Camera and Controller.
- Main Camera: Represents the Magic Leap 2 user’s head position and orientation.
- Controller: Represents the physical Magic Leap controller (Magic Leap 2 supports only one controller). Explore the XR Controller (Action-based) component on this object. It defines how controller input is mapped to interactions like grabbing or pointing. Review its other components such as XR Ray Interactor (for pointing and selecting UI or objects), Line Renderer (visualizes the interaction ray), and Input Action Manager (manages input bindings for the controller).
- UserInterface: This GameObject contains a canvas that holds menus, buttons, and instructional text. UI elements provide on-device guidance and allow interaction with the sample application.
Deploy to Magic Leap 2
To run your Unity project on the Magic Leap 2 headset, you will need to configure Unity to build for the Android platform and ensure the headset is properly connected to your computer. Here are the essential steps to deploy your first app:
- Configure Build Profiles:
- In Unity, navigate to File > Build Profiles.
- Go to Scene List and verify that all Magic Leap example scenes are selected.
- Set Platform to Android:
- Select Android from the platform list.
- If it is not already selected, click Switch Platform.
- If Android Build Support is not installed, install it via Unity Hub.
- Connect Your Magic Leap 2 Headset:
- Power on your Magic Leap 2 headset.
- Connect it to your computer using USB or set up ADB over WiFi via Magic Leap Hub 3.
- Open Magic Leap Hub 3 on your computer.
- Navigate to the Devices tab.
- Ensure your Magic Leap 2 appears as Connected. This confirms that the device is ready to receive a build.
- Select Run Device in Unity:
- Return to Unity’s Build Profile window.
- Under Run Device, choose your connected Magic Leap 2 from the dropdown list.
- Click Build and Run to compile the project and deploy it directly to the headset.
- The app will automatically launch on the device after deployment.
- Explore the Examples App:
- Launch the Magic Leap Examples app on your device.
- Select each example scene from the menu and explore its capabilities.
Plane Detection
Plane detection enables AR devices to identify large, flat surfaces in the user’s environment—such as floors, tables, walls, and ceilings—by analyzing the spatial mesh generated by onboard sensors. This capability is fundamental in AR experiences because it allows virtual objects to be accurately positioned and anchored to real-world surfaces, making them appear as if they truly exist within the physical space. Plane detection not only enhances realism by keeping virtual content “grounded” but also supports interactions like hit-testing, collision detection, and occlusion with the physical world.
The virtual V8 engine from XFactory can be placed onto the detected physical floor surface using ARPlaneManager.trackables. Once anchored, the engine remains stable and aligned with the real-world plane, even as the user moves around. If the user points the device at a vertical plane like a wall or equipment rack, raycasting can be used to enable contextual interactions—such as triggering an exploded view of the engine or displaying specifications—only when the engine is near these surfaces, simulating proximity-aware behavior.
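To make the raycasting idea above concrete, here is a minimal sketch (not part of the tutorial scripts) that checks whether the headset is looking at a detected vertical plane using AR Foundation's ARRaycastManager. The component name, the public fields, and the logged action are illustrative assumptions.

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    // Hypothetical helper: detects when the user gazes at a vertical plane (e.g., a wall)
    // so contextual content could be triggered there.
    public class VerticalPlaneGaze : MonoBehaviour
    {
        public ARRaycastManager raycastManager;
        public ARPlaneManager planeManager;

        static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

        void Update()
        {
            // Cast a ray straight out of the headset (Main Camera forward direction).
            var ray = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
            if (raycastManager.Raycast(ray, hits, TrackableType.PlaneWithinPolygon))
            {
                var plane = planeManager.GetPlane(hits[0].trackableId);
                if (plane != null && plane.alignment == PlaneAlignment.Vertical)
                {
                    // Placeholder action: in a real app, show specs or an exploded view here.
                    Debug.Log($"Looking at a vertical plane at {hits[0].pose.position}");
                }
            }
        }
    }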
Core Concepts
- AR Foundation: Unity’s cross-platform framework for building AR experiences on multiple devices, including Magic Leap 2, HoloLens, and mobile AR platforms like ARKit and ARCore. AR Foundation abstracts away the platform-specific APIs, allowing you to build once and deploy across supported devices. Here, we will use AR Foundation’s ARPlaneManager to enable real-time plane detection in Magic Leap 2.
- ARPlaneManager: A component in AR Foundation that discovers, tracks, and updates information about planar surfaces in the environment. It exposes detected planes via the trackables property, which you can query to place virtual objects or visualize surfaces in the scene. It also emits events when planes are added, updated, or removed, enabling dynamic interaction logic.
- Magic Leap 2 PlanesSubsystem: This is the Magic Leap–specific implementation of Unity’s XRPlaneSubsystem, which runs beneath AR Foundation’s ARPlaneManager. It handles the actual detection of planes on Magic Leap devices by processing sensor data to identify flat surfaces in the user’s surroundings. Understanding that Magic Leap 2 uses its own PlanesSubsystem helps explain device-specific behaviors and optimizations.
- PlanesQuery: A Magic Leap–specific data structure that configures plane detection behavior. It lets you control where to search for planes (e.g., within a certain bounding box), how many planes to return, and what size planes to consider (via minimum and maximum thresholds). This fine-tuning is crucial for performance and for tailoring detection to your application’s needs.
- MLPlanesQueryFlags: These are Magic Leap–specific flags used within PlanesQuery to specify additional features for plane detection. These flags help developers gain more control and richer data for spatial reasoning. Examples include:
- Polygons: Request the full polygon outline of detected planes, not just a center point and orientation.
- Semantic Types: Filter planes by semantic categories like floor, ceiling, or wall.
- Inner Planes: Detect holes or voids within larger planes.
- Spatial Mapping Permission: On Magic Leap 2, plane detection requires the spatial mapping permission, which is classified as a “dangerous” runtime permission. You must explicitly request this permission from the user via the Magic Leap API; otherwise, plane detection will fail. Always include proper user prompts and handling for permission denial (a minimal permission-request sketch follows this list).
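The stock example scenes already handle this permission for you, but if you build a scene from scratch you must request it yourself. Below is a minimal sketch using Unity's standard Android permission API; the permission string is an assumption based on the Magic Leap 2 documentation, so verify it (and consider the MLPermissions helpers in the Magic Leap SDK) before relying on it.

    using UnityEngine;
    #if UNITY_ANDROID
    using UnityEngine.Android;
    #endif

    // Sketch only: requests the Magic Leap 2 spatial mapping permission at startup.
    public class SpatialMappingPermissionRequest : MonoBehaviour
    {
        // Assumed permission string for ML2 spatial mapping; check the official docs.
        const string SpatialMappingPermission = "com.magicleap.permission.SPATIAL_MAPPING";

        void Start()
        {
    #if UNITY_ANDROID
            if (!Permission.HasUserAuthorizedPermission(SpatialMappingPermission))
            {
                // Shows the system prompt; plane detection and meshing fail until granted.
                Permission.RequestUserPermission(SpatialMappingPermission);
            }
    #endif
        }
    }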
We can customize the PlanesQuery to prioritize horizontal planes of sufficient size and request polygon data to render an accurate visual mesh around the placement area. This approach ensures objects are stably placed and visually aligned with the user’s physical environment.
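The exact PlanesQuery configuration lives in the Magic Leap XR Plugin and is best copied from the installed Planes example, so it is not reproduced here. As a cross-platform stand-in, the sketch below filters AR Foundation's detected planes down to sufficiently large horizontal ones and outlines their polygon boundaries with a LineRenderer; the component name, the lineRendererPrefab field, and the 0.5 m² threshold are assumptions.

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;

    // Sketch only: outlines large horizontal planes using their boundary polygons.
    public class LargeFloorOutliner : MonoBehaviour
    {
        public ARPlaneManager planeManager;
        public LineRenderer lineRendererPrefab; // loop/material/width set up in the Editor
        public float minAreaSqMeters = 0.5f;    // ignore small surfaces (assumed threshold)

        public void OutlineLargeHorizontalPlanes()
        {
            foreach (var plane in planeManager.trackables)
            {
                float approxArea = plane.size.x * plane.size.y;
                if (plane.alignment != PlaneAlignment.HorizontalUp || approxArea < minAreaSqMeters)
                    continue;

                // Boundary points are 2D coordinates in the plane's local XZ space.
                var line = Instantiate(lineRendererPrefab, plane.transform);
                line.loop = true;
                line.useWorldSpace = true;
                line.positionCount = plane.boundary.Length;
                for (int i = 0; i < plane.boundary.Length; i++)
                {
                    Vector2 p = plane.boundary[i];
                    line.SetPosition(i, plane.transform.TransformPoint(new Vector3(p.x, 0f, p.y)));
                }
            }
        }
    }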
Implementation
To better understand the functionality and affordances of plane detection in wearable AR, let’s work on an example of implementing plane detection on Magic Leap 2 and placing the Engine V8 model from XFactory on detected floors. To make this happen, we will place the Engine V8 prefab on the first detected horizontal plane, while offsetting it along the headset’s forward direction so it doesn’t collide with the user. We will also allow the user to rotate the engine around the Y-axis using the Magic Leap 2 controller touchpad (left/right swipe).
- Set Up the Scene:
- Open the Magic Leap Examples project in Unity.
- Open the Planes.unity scene and save it as PlaneDetection.unity. To avoid mixing things up with Magic Leap’s preconfigured scenes, save it in a separate folder (e.g., Assets > Scenes > XRE Tutorials).
- Optionally, disable the UserInterface GameObject to reduce visual clutter.
- Ensure that ML Rig is configured with XR Origin, Input Action Manager, AR Session, AR Plane Manager, and AR Raycast Manager.
These managers enable plane detection (ARPlaneManager) and raycasting support (ARRaycastManager) required for spatial understanding. The Example script (PlaneExample.cs) ensures permissions are granted and activates plane detection on demand via the Bumper button, making plane data available at runtime.
- Import and Add the Engine V8 Model:
- Import the provided Engine V8.unitypackage into Unity via Assets > Import Package > Custom Package....
- In the Hierarchy, create an empty GameObject named EngineSpawner.
- Drag the Engine V8 prefab under EngineSpawner.
- Disable the Engine V8 GameObject by default.
- Create the Engine Placement Script:
- In Assets > Scripts, create a new folder named XRETutorials to keep things organized.
- In that folder, create PlaceEngine.cs:

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;
    using UnityEngine.XR.ARSubsystems;
    using UnityEngine.XR.MagicLeap;

    public class PlaceEngine : MonoBehaviour
    {
        public ARPlaneManager planeManager;
        public GameObject enginePrefab;
        public float offsetDistance = 1.0f; // meters in front of headset

        private MagicLeapInputs mlInputs;
        private MagicLeapInputs.ControllerActions controllerActions;
        private bool enginePlaced = false;

        void Start()
        {
            mlInputs = new MagicLeapInputs();
            mlInputs.Enable();
            controllerActions = new MagicLeapInputs.ControllerActions(mlInputs);

            // Place the engine when the controller trigger is pressed.
            controllerActions.Trigger.performed += _ => MoveEngineToPlane();
        }

        private void MoveEngineToPlane()
        {
            if (enginePlaced || planeManager == null) return;

            foreach (var plane in planeManager.trackables)
            {
                if (plane.alignment == PlaneAlignment.HorizontalUp ||
                    plane.alignment == PlaneAlignment.HorizontalDown)
                {
                    Vector3 planePosition = plane.transform.position;

                    // Offset the engine along the headset's forward direction
                    // so it does not overlap with the user.
                    Vector3 headsetForward = Camera.main.transform.forward;
                    Vector3 offset = headsetForward.normalized * offsetDistance;
                    Vector3 targetPosition = planePosition + offset;

                    enginePrefab.transform.position = targetPosition;

                    // Align the engine's rotation to the plane's Y rotation.
                    float yRotation = plane.transform.eulerAngles.y;
                    enginePrefab.transform.rotation = Quaternion.Euler(0, yRotation, 0);

                    enginePrefab.SetActive(true);
                    enginePlaced = true;

                    Debug.Log(
                        $"Engine placed at {targetPosition} " +
                        $"with Y rotation {yRotation}"
                    );
                    return;
                }
            }

            Debug.Log("No horizontal plane found.");
        }
    }
- Configure the Script:
- Create an empty GameObject in the Hierarchy. Name it InteractionManager.
- Click Add Component and choose PlaceEngine.
- Drag the ML Rig from the Hierarchy into the Plane Manager field of the script.
- Drag your Engine V8 GameObject (under EngineSpawner) into the Engine Prefab field.
- Set Offset Distance to control how far in front of the headset the engine is placed on the plane (e.g., 1-2 meters).
- Create the Touchpad Rotation Script:
- In the Project window, go to Assets > Scripts > XRETutorials.
- Right-click and create a new script. Name it RotateWithTouchpad.cs.

    using UnityEngine;
    using UnityEngine.XR.MagicLeap;

    public class RotateWithTouchpad : MonoBehaviour
    {
        public float rotationSpeed = 50f;

        private MagicLeapInputs mlInputs;
        private MagicLeapInputs.ControllerActions controllerActions;

        void Start()
        {
            mlInputs = new MagicLeapInputs();
            mlInputs.Enable();
            controllerActions = new MagicLeapInputs.ControllerActions(mlInputs);
        }

        void Update()
        {
            // Read the horizontal swipe position from the ML2 controller touchpad.
            Vector2 touchpadInput = controllerActions.TouchpadPosition.ReadValue<Vector2>();
            float horizontalInput = touchpadInput.x;

            // Apply a small dead zone, then rotate around the world Y-axis.
            if (Mathf.Abs(horizontalInput) > 0.1f)
            {
                transform.Rotate(
                    Vector3.up,
                    horizontalInput * rotationSpeed * Time.deltaTime,
                    Space.World
                );
            }
        }
    }
- Configure the Script:
- Select the Engine V8 GameObject in the Hierarchy.
- Click Add Component and select RotateWithTouchpad.
- In the Inspector, adjust the Rotation Speed value to control how fast the engine rotates when swiping on the ML2 controller’s touchpad. For example, 50 degrees per second is a good starting point.
- Deploy and Test the Behavior:
- Build and deploy the scene to your Magic Leap 2 device using the standard build process.
- On device, run the app.
- Press the Bumper button to start plane detection.
- As planes are detected, they will be color-coded based on classification:
- Green: Floor
- Red: Wall
- Blue: Ceiling
- Yellow: Table
- Gray: Other/Unclassified
- Press the Trigger on the controller to place the engine model on the first detected horizontal plane, offset in front of the headset to avoid overlap.
- Swipe left or right on the touchpad to rotate the engine around the Y-axis (vertical axis).
If all behaviors match, your plane detection, placement, and rotation interactions are working as expected!
Meshing
Meshing in AR reconstructs the real-world environment in real-time, enabling richer interactions with virtual content. It produces a detailed 3D representation of the user’s surroundings, including walls, floors, furniture, and irregular surfaces. This reconstructed geometry allows virtual objects to interact with the real world through collisions, alignment, or environmental awareness. It is essential for building AR experiences where virtual content responds naturally to the user’s space.
Meshing in wearable AR devices like Magic Leap is optimized for continuous, wide-area scanning with real-time responsiveness, but typically produces lower-density meshes compared to handheld devices like iPhones with LiDAR. In contrast, iPhones generate denser and more detailed meshes in short-range, on-demand scans, though they are less suited for persistent, large-scale spatial mapping.
Core Concepts
- Meshing: Scans and reconstructs the user’s surroundings into a dynamic 3D mesh. Unlike plane detection, meshing captures complex, irregular, and vertical surfaces. This provides a detailed spatial map that enables virtual objects to interact meaningfully with the real world.
- ARMeshManager: A Unity AR Foundation component that manages the detection and real-time updating of environment meshes. It continuously updates the mesh data as the user moves, ensuring the virtual experience reflects the current physical space (see the sketch after this list).
- Mesh Prefab: A prefab assigned to the ARMeshManager that defines how each generated mesh chunk is rendered and which colliders are applied. This determines both the visual appearance of the scanned environment and the presence of physics colliders for interaction.
- Mesh Collider: A collider component attached to the mesh prefab that allows virtual objects to detect and interact with real-world geometry through physics collisions. Without it, virtual objects like the drone could pass through the scanned mesh without resistance or feedback.
- Spatial Mapping Permission: A required runtime permission that grants the application access to scan the user’s physical environment on Magic Leap devices. This permission is essential for privacy and safety, as it controls access to environmental data.
- Physics.Raycast: A Unity method used to detect when a virtual object is about to collide with real-world geometry, useful for implementing collision-aware movement. It enables behaviors like stopping or rerouting virtual objects when obstacles are detected in their path.
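Two ARMeshManager details worth knowing before the tutorial: mesh density can be tuned, and the generated mesh chunks can be hidden without losing their colliders. The sketch below shows both ideas; the component name, the 0.5 density value, and the SetMeshVisible method are illustrative assumptions rather than part of the Meshing example.

    using UnityEngine;
    using UnityEngine.XR.ARFoundation;

    // Sketch only: tunes mesh density and toggles mesh visuals while keeping physics colliders.
    public class MeshVisibilityToggle : MonoBehaviour
    {
        public ARMeshManager meshManager;

        void Start()
        {
            // Density is in [0, 1]; lower values mean coarser meshes and better performance.
            meshManager.density = 0.5f;
        }

        public void SetMeshVisible(bool visible)
        {
            // meshManager.meshes lists the MeshFilters generated from the scanned environment.
            foreach (var meshFilter in meshManager.meshes)
            {
                var meshRenderer = meshFilter.GetComponent<MeshRenderer>();
                if (meshRenderer != null)
                    meshRenderer.enabled = visible; // colliders stay active for collision checks
            }
        }
    }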
Implementation
We will build a tutorial where the Magic Leap 2 device scans the surrounding space and generates meshes. A virtual drone is spawned in front of the user. Using the Magic Leap controller, the user can move the drone in 3D space while respecting the scanned environment. The drone stops moving when it approaches a real-world mesh, preventing it from passing through physical structures.
- Duplicate the Meshing Scene:
- Open the Meshing.unity scene from the Magic Leap examples project.
- Save it as Assets > Scenes > XRE Tutorials > Meshing.unity.
- Import the Drone Package:
- Right-click in the Project window.
- Select Import Package > Custom Package.
- Import the provided Drone.unitypackage.
- You will now see multiple Drone prefabs in the Project window.
- The drone models are slightly larger than normal. Open and rescale them in prefab mode (e.g., to 15%).
- Place the Drone in Front of the User:
- In the Hierarchy, locate ML Rig > Camera Offset > Main Camera.
- Drag a Drone prefab into the scene (as a root object, not a child of the camera).
- Reset its transform, then set its Position to X: 0, Y: 0.5, Z: 2. This places the drone approximately 2 meters ahead and slightly above the user’s initial position.
- Add a DroneController Script:
- Go to Assets > Scripts > XRE Tutorials.
- Create a new DroneController.cs script and drag it onto the Drone prefab in the scene. This script enables the drone to move in 3D space based on controller input while preventing it from passing through scanned environment meshes by using raycasting.

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR;
    using UnityEngine.XR.ARFoundation;

    public class DroneController : MonoBehaviour
    {
        public float moveSpeed = 0.5f;
        public float checkDistance = 0.5f;

        private Transform droneTransform;
        private Camera mainCamera;
        private ARMeshManager meshManager;
        private InputDevice controllerDevice;
        private bool moveUpMode = true; // toggle for up/down movement
        private bool triggerHeld = false;

        private void Start()
        {
            droneTransform = transform;
            mainCamera = Camera.main;
            meshManager = FindObjectOfType<ARMeshManager>();

            var inputDevices = new List<InputDevice>();
            InputDevices.GetDevicesAtXRNode(XRNode.RightHand, inputDevices);

            if (inputDevices.Count > 0)
            {
                controllerDevice = inputDevices[0];
                Debug.Log("Magic Leap controller detected: " + controllerDevice.name);
            }
            else
            {
                Debug.LogError("No controller device found.");
            }
        }

        private void Update()
        {
            if (!controllerDevice.isValid) return;

            // Read touchpad/thumbstick input for planar movement relative to the camera.
            Vector2 axisInput = Vector2.zero;
            controllerDevice.TryGetFeatureValue(CommonUsages.primary2DAxis, out axisInput);

            Vector3 direction =
                (mainCamera.transform.right * axisInput.x) +
                (mainCamera.transform.forward * axisInput.y);

            // Check if the trigger is pressed.
            bool triggerPressed;
            controllerDevice.TryGetFeatureValue(CommonUsages.triggerButton, out triggerPressed);

            if (triggerPressed && !triggerHeld)
            {
                // Toggle between up and down when the trigger is first pressed.
                moveUpMode = !moveUpMode;
                triggerHeld = true;
                Debug.Log("Toggled movement mode: " + (moveUpMode ? "Move Up" : "Move Down"));
            }
            else if (!triggerPressed)
            {
                triggerHeld = false;
            }

            if (triggerPressed)
            {
                direction += moveUpMode ? Vector3.up : Vector3.down;
            }

            if (direction != Vector3.zero)
                TryMove(direction.normalized);
        }

        private void TryMove(Vector3 direction)
        {
            // Block movement if a scanned mesh is within checkDistance in the move direction.
            if (!Physics.Raycast(droneTransform.position, direction, out RaycastHit hit, checkDistance))
            {
                droneTransform.position += direction * moveSpeed * Time.deltaTime;
            }
            else
            {
                Debug.Log("Movement blocked by environment mesh.");
            }
        }
    }
- Confirm Mesh Prefab Configuration:
- In the ARMeshManager, locate the Mesh Prefab.
- Confirm that it has a MeshRenderer with a visible material and a MeshCollider attached for collision detection.
- If the MeshCollider is missing, open the Mesh Prefab and add a MeshCollider component.
- Deploy and Test the Behavior:
- Build and deploy the Meshing.unity scene to your Magic Leap 2 device using the standard build process.
- The surrounding environment will be dynamically reconstructed into a 3D mesh.
- A drone will appear approximately 2 meters in front of you.
- Use the touchpad/thumbstick to move the drone forward, backward, left, and right relative to your viewpoint.
- Press and hold the Trigger button to move the drone vertically — either up or down depending on the current mode.
- Tap the Trigger button to toggle between move up and move down modes.
- If the drone approaches any scanned mesh, it will stop moving in that direction to avoid collision.
If all behaviors function as described, you have successfully integrated real-time meshing with spatially aware drone navigation!
Spatial Audio
Traditionally, audio is captured and played back in two channels (left/right), which gives a sense of directionality but lacks true 3D fidelity. Spatial audio uses algorithms that take into account both the listener’s head position/orientation and the virtual sound source location to reproduce how sound behaves in real space—allowing the user to tell if a sound is coming from in front, behind, above, or below. Spatial audio offers several key benefits:
- Immersion: Spatial audio dramatically enhances the user’s sense of presence by replicating how sound behaves in the real world. When audio responds naturally to your position and head movements, virtual objects feel physically present and integrated within the environment—not just visually, but audibly.
- Intuitive Interaction: Audio offers an additional sensory channel that guides user attention without needing visual indicators. Sounds can signal when the drone is nearby, hidden, or behind the user, reducing the cognitive load required to track objects visually.
- Accessibility: For users with limited visibility or in low-light conditions, spatial audio helps maintain awareness of virtual object positions, improving overall accessibility of the MR experience.
- Enhanced Feedback Loops: Combining spatial audio with visual feedback provides richer, multi-sensory interaction. For example, hearing the drone getting louder as it approaches can signal proximity before it’s even visible, reinforcing user confidence in their interactions.
- Training and Simulation: Many real-world tasks, especially in engineering and industrial environments, rely on audio cues for safety, diagnostics, and situational awareness. Spatial audio enables realistic simulations where users must localize and interpret sounds like machinery noise, alarms, or warning beeps—skills critical for operating complex equipment or navigating hazardous spaces.
In the drone example, the drone emits spatialized propeller sounds that dynamically respond to its position relative to the user. As the drone flies closer to the user, the propeller sound gets louder and spatially more distinct. As the drone moves farther away, the sound diminishes naturally. When the drone flies past or around the user, the audio pans left or right, simulating real-world movement.
Core Concepts
- Soundfield Plugin: Magic Leap’s Unity package for spatializing audio. It enables integration of spatial audio processing in Unity, ensuring sound sources behave realistically based on spatial positioning and user orientation.
- MSA Spatializer: The audio spatializer plugin setting for 3D sound rendering. This plugin processes how sounds are perceived in 3D space, affecting how direction, distance, and environmental factors influence the sound reaching the user.
- MSA Ambisonic Decoder: The ambisonic decoder plugin setting for surround sound formats. It decodes ambisonic audio files to render immersive, full-sphere surround sound, creating ambient environmental audio that envelops the user.
- MLListener: Component on the camera that drives spatial audio perception. It acts as the user’s “ears” in the scene, capturing positional data needed to spatialize incoming audio based on the listener’s head position and rotation.
- MLPointSource: Component on each Audio Source that configures advanced spatial audio properties like directivity and gain. This allows precise control over how sound is projected from a source, including the direction it emits sound most strongly and how it attenuates over distance.
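The implementation below configures the drone's Audio Source entirely in the Inspector. If you prefer to apply the same 3D audio properties from code, here is a minimal sketch that uses only Unity's standard AudioSource API; the component name and the specific min/max distance values are assumptions, and the MLListener/MLPointSource components from the Soundfield plugin are still added in the Editor.

    using UnityEngine;

    // Sketch only: applies the 3D audio settings described in the steps below from code.
    [RequireComponent(typeof(AudioSource))]
    public class DroneAudioSetup : MonoBehaviour
    {
        void Start()
        {
            var source = GetComponent<AudioSource>();
            source.loop = true;
            source.spatialBlend = 1f;                          // fully 3D
            source.rolloffMode = AudioRolloffMode.Logarithmic; // natural distance-based fading
            source.minDistance = 0.5f;                         // assumed: full volume within ~0.5 m
            source.maxDistance = 10f;                          // assumed: fades out toward ~10 m
            source.Play(); // start the looping propeller clip assigned in the Inspector
        }
    }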
Implementation
To bring the drone to life audibly, we will add 3D spatial audio that simulates the sound of its propellers in space. This setup ensures that as the drone moves closer or farther from the user, the audio dynamically reflects its position and movement.
- Add an Audio Source to the Drone:
- Select the Drone GameObject in the Meshing.unity scene.
- Add an Audio Source component.
- Assign a looping drone sound clip (e.g., Assets > Drone > Audio > ELECTRIC DRONE) to Audio Resource.
- Enable Play On Awake.
- Enable Loop.
- Set Spatial Blend to 1 (3D).
- Choose Logarithmic Rolloff for natural distance-based fading.
- Adjust Min Distance and Max Distance to define how close the sound is loud and how far it fades out.
- Confirm the Audio Listener:
- Verify that the Main Camera (under ML Rig > Camera Offset > Main Camera) has the Audio Listener component.
- This is typically already present by default. If missing, add Audio Listener.
- Deploy and Test the Behavior:
- Build and deploy the Meshing.unity scene to your Magic Leap 2 device.
- Run the app and let the environment scan complete.
- The drone will spawn approximately 2 meters ahead.
- Move the drone.
- As the drone approaches you, the propeller sound should get louder.
- As the drone moves away, the sound should naturally fade.
- Move around the drone in space—its sound should pan left, right, front, and back based on your position.
Consider adjusting the Max Distance on the Audio Source to control how far the sound can be heard, modifying Spread to control how directional the sound is, or adding a second sound layer like a low rumble or beep to simulate more complex drone feedback.
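One way to layer richer feedback, as suggested above, is to tie the propeller sound's pitch to the drone's speed so that fast movement sounds more energetic. This is a minimal sketch built only on the standard AudioSource API; the component name, the pitch scale factor, and the smoothing value are assumptions.

    using UnityEngine;

    // Sketch only: raises the propeller pitch with the drone's movement speed.
    [RequireComponent(typeof(AudioSource))]
    public class PropellerPitch : MonoBehaviour
    {
        public float pitchPerMeterPerSecond = 0.3f; // assumed scale factor
        private AudioSource source;
        private Vector3 lastPosition;

        void Start()
        {
            source = GetComponent<AudioSource>();
            lastPosition = transform.position;
        }

        void Update()
        {
            if (Time.deltaTime <= 0f) return;

            // Estimate speed from frame-to-frame displacement.
            float speed = (transform.position - lastPosition).magnitude / Time.deltaTime;
            lastPosition = transform.position;

            // Smoothly approach the target pitch (1.0 at rest).
            float targetPitch = 1f + speed * pitchPerMeterPerSecond;
            source.pitch = Mathf.Lerp(source.pitch, targetPitch, 0.1f);
        }
    }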
Key Takeaways
Wearable AR, exemplified by devices like Magic Leap 2 and HoloLens 2, delivers immersive, hands-free experiences by combining real-time spatial awareness, environmental mapping, and multimodal input. Through capabilities such as plane detection, meshing, occlusion, and spatial audio, engineers can anchor digital content to real-world contexts, interact naturally via controllers, gestures, voice, or gaze, and create precise, context-aware applications for tasks like assembly guidance, training, and remote collaboration. Developing for wearable AR in Unity involves configuring the proper toolchains, understanding core XR components, and leveraging device-specific features to build applications that are both technically robust and practically valuable in engineering workflows. By integrating visual, spatial, and auditory cues, developers can produce AR solutions that not only enhance productivity and safety but also create deeply engaging, context-rich user experiences.