Simultaneous Localization and Mapping (SLAM)

Your constantly updated definition of Simultaneous Localization and Mapping (SLAM) and collection of videos and articles.

What is Simultaneous Localization and Mapping (SLAM)?

SLAM (simultaneous localization and mapping) lets a device build a map of an unfamiliar environment while simultaneously tracking its own position within that map. In user experience (UX) design, it is most relevant to augmented reality (AR).


Cameras and laser sensors measure distances and detect surfaces so that an AR display can project holograms as if they existed in real space. Beyond AR, SLAM powers self-driving vehicles, drones and other autonomous technologies.

How Does SLAM Work?

[Illustration: the three stages of how an AR app works. Step one, sensing: the phone's camera sees a flower. Step two, recognition: the phone identifies the specific flower by comparing it against other flowers. Step three, display: the phone screen shows an animated bee flying near the flower with a speech bubble.]

© Interaction Design Foundation, CC BY-SA 4.0

SLAM's exact process might vary from device to device, as some are not equipped with distance-sensing lasers or dual cameras.

However, the essential function of SLAM is to identify surfaces using a camera and image-recognition software. Laser sensors measure distance directly, by timing how long emitted light takes to bounce back, while cameras estimate distance from focal lengths and by triangulating the same feature from multiple perspectives. This builds the device's spatial awareness across its field of view and is analogous to the depth-perceiving binocular vision that humans have. However, just as the human eye can be fooled, so can SLAM.
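To make the triangulation idea concrete, here is a minimal sketch (not taken from any particular SLAM system) of how a calibrated stereo camera pair recovers depth from disparity; the focal length, baseline and disparity values are illustrative assumptions:

```python
# Minimal sketch: depth from stereo disparity (illustrative values).
# For a calibrated stereo pair, depth Z = f * B / d, where
#   f = focal length in pixels, B = baseline between cameras (meters),
#   d = disparity (pixel offset of the same feature between the two images).

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the estimated distance to a feature, in meters."""
    if disparity_px <= 0:
        raise ValueError("Feature must appear offset between the two views.")
    return focal_px * baseline_m / disparity_px

# A feature shifted 20 px between two cameras 6 cm apart (f = 800 px)
print(depth_from_disparity(800.0, 0.06, 20.0))  # -> 2.4 meters away
```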

SLAM Errors

SLAM errors must be handled correctly to ensure an excellent overall user experience. When SLAM fails, users should be able to recalibrate the sensor easily, or a UI prompt should instruct them to move around the area or reposition the sensors for a better view. The visual design should communicate both the error and how to correct it.

A product or service should also be aware of situations where SLAM cannot perceive something, like a dark surface that absorbs light or a shiny one that reflects too much of it. The consequences can be relatively minor for lower-stakes interactions like AR entertainment, as it will be evident to the user that something is not mapped correctly.


However, this can be a more severe issue for other situations like self-driving cars and drones.

SLAM in Augmented Reality

SLAM is the foundation of augmented reality (AR). It allows AR devices like AR glasses or mobile phones to perceive the world in three dimensions. AR apps can then identify objects or images in the real-world environment and project virtual content onto the AR display so it appears to exist in the real world.

SLAM also provides the information the display needs to match a surface's position and perspective so that virtual objects look natural, which adds to the level of immersion.

Sophisticated SLAM technologies can recognize human faces, allowing apps to apply virtual filters or makeup. AR headsets often have a way of warning users if they are too distracted to notice a wall or object they might run into.

Information overlays like Google Lens use SLAM and AI software to identify text, translate it, and display it in AR.

What are some other applications of SLAM besides AR?

For drones and self-driving cars, it is even more critical that the interface notify the user to take manual control when SLAM malfunctions or cannot perceive surfaces correctly.

Even though virtual reality (VR) uses fully virtual environments, VR headsets often use SLAM sensors to let players know where they can walk without bumping into real walls or chairs. This is especially important for mixed reality (MR) and extended reality (XR) programs.

More experimental applications of SLAM include photogrammetry-style 3D scanning, which uses SLAM sensors to capture real objects as 3D models.

Learn More About SLAM

Take our course on UX Design for Augmented Reality.


To discover even more about SLAM, read this in-depth article: Basics of AR: SLAM – Simultaneous Localization and Mapping

Questions related to Simultaneous Localization and Mapping (SLAM)

How does SLAM work in simple terms?

SLAM—Simultaneous Localization and Mapping—lets a device such as a robot or AR (augmented reality) headset build a map of its surroundings and determine its own location within that map at the same time.

It gathers data from sensors like cameras, lidar, or IMUs (inertial measurement units), then continuously matches new observations to known features, updating the map and the device position using filtering techniques like particle filters or Kalman filters. This continuous process keeps the device aware of where it is, even in unfamiliar spaces, without relying on external references like GPS.
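As a rough illustration of that filtering step, here is a minimal one-dimensional Kalman-style update, a deliberately simplified sketch rather than a real SLAM implementation; real systems track full 3D poses and many landmarks at once:

```python
# Minimal sketch of the filtering idea behind SLAM (hypothetical 1-D example):
# predict where the device should be, then blend in a noisy new observation,
# weighting each by its uncertainty.

def kalman_update(est: float, est_var: float,
                  measurement: float, meas_var: float) -> tuple[float, float]:
    """Fuse a prior estimate with a new measurement; return (estimate, variance)."""
    gain = est_var / (est_var + meas_var)        # trust the less-uncertain source more
    new_est = est + gain * (measurement - est)   # corrected position
    new_var = (1 - gain) * est_var               # uncertainty shrinks after fusing
    return new_est, new_var

position, variance = 0.0, 1.0                    # initial belief
for z in [0.9, 1.1, 1.0]:                        # noisy sensor readings
    position, variance = kalman_update(position, variance, z, meas_var=0.5)
    print(f"position ≈ {position:.2f}, variance ≈ {variance:.2f}")
```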

Find helpful insights about where AR is taking design and where designers might take AR in our article Augmented Reality – The Past, The Present and The Future.

What problems does SLAM solve?

SLAM solves two interlinked challenges in uncharted environments: mapping the space and tracking position within that map, both in real time. It enables devices without preloaded maps to navigate safely, avoid obstacles, and anchor virtual content accurately.

Applications for this include self-driving cars, drones, AR/VR systems, and robots working indoors or in GPS-denied spaces, empowering them with spatial awareness where external positioning systems cannot help.

Explore a wealth of inspiration for design possibilities by enjoying our Master Class How to Innovate with XR with Michael Nebeling, Associate Professor, University of Michigan.

What is the difference between SLAM and GPS?

SLAM builds its own map of the environment and keeps track of position using onboard sensors. It works accurately indoors or in areas without GPS.

GPS, by contrast, provides location coordinates from satellite signals, with accuracy typically ranging from about 4.9 meters (16 feet) for smartphones under open sky to 1.82 meters (5.97 feet) or better for high-quality receivers; enhanced systems like RTK can achieve centimeter-level accuracy. Even when signals are available, GPS performs poorly indoors owing to signal attenuation, shadowing and multipath fading caused by concrete and metallic structures, and it copes badly with rapid motion or occlusion.

The fundamental difference is that GPS depends on external infrastructure and supplies only coordinates, not an environmental model, whereas SLAM is self-contained: it generates both positional data and a detailed environmental map from what the device directly observes.

Delve deeper into this fascinating area in our article Getting Lost and Found – Maps and the Mobile User Experience.

How is SLAM used in augmented reality (AR) and mixed reality (MR)?

AR and MR rely on SLAM algorithms to track camera pose and map environmental features, enabling virtual objects to be spatially registered with the real world. SLAM systems identify keypoints and use sensor fusion to estimate depth and 3D structure, allowing digital content to appear anchored to surfaces and to be properly occluded by real objects.

This spatial consistency is crucial for immersive experiences, and absolutely critical in medical applications, where tracking failures could lead to dangerous misalignment between virtual surgical guidance and actual patient anatomy, potentially giving "cutting-edge tech" an unwanted meaning.
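To make the keypoint step more tangible, here is a minimal sketch using OpenCV's ORB detector, a common choice in visual SLAM research; the library, the image file name and the parameter value are assumptions for illustration, not part of any specific AR SDK:

```python
# Minimal sketch of the keypoint step, assuming OpenCV is installed
# (pip install opencv-python) and a local image file named "room.jpg".
# Real AR frameworks run an equivalent step on every camera frame.
import cv2

frame = cv2.imread("room.jpg", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)              # ORB: a free, fast keypoint detector
keypoints, descriptors = orb.detectAndCompute(frame, None)

# Each keypoint is a trackable visual landmark; matching descriptors across
# frames lets SLAM estimate how the camera moved between them.
print(f"Found {len(keypoints)} candidate landmarks in this frame.")
```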

Explore further aspects of extended reality by enjoying our Master Class How To Craft Immersive Experiences in XR with Mia Guo, Senior Product Designer, Magic Leap.

What sensors are commonly used in SLAM systems?

SLAM systems typically combine multiple sensors for accuracy. Popular choices include cameras (monocular, stereo, and RGB-D) for visual SLAM, lidar for precise depth mapping, and inertial measurement units (IMUs) for estimating orientation and motion. Some systems also use sonar, radar, or even Wi-Fi signal strengths. Sensor fusion helps the system remain stable in varied environments.
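As a small illustration of sensor fusion, here is a sketch of a complementary filter, one of the simplest fusion schemes, blending gyroscope and accelerometer data into a stable tilt estimate; the readings and blend factor are illustrative assumptions:

```python
# Minimal sketch of sensor fusion (a complementary filter): blend
# fast-but-drifting gyroscope integration with slow-but-stable
# accelerometer tilt. Values and alpha are illustrative.
import math

def fuse_tilt(prev_angle: float, gyro_rate: float, dt: float,
              accel_x: float, accel_z: float, alpha: float = 0.98) -> float:
    """Return a fused tilt angle (radians) from gyro + accelerometer data."""
    gyro_angle = prev_angle + gyro_rate * dt        # integrate angular velocity (drifts)
    accel_angle = math.atan2(accel_x, accel_z)      # gravity direction (noisy, no drift)
    return alpha * gyro_angle + (1 - alpha) * accel_angle

angle = 0.0
angle = fuse_tilt(angle, gyro_rate=0.1, dt=0.01, accel_x=0.02, accel_z=0.98)
print(f"fused tilt ≈ {math.degrees(angle):.2f}°")
```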

Discover additional aspects of augmented reality in our article Innovate with UX: Design User-Friendly AR Experiences.

How does SLAM deal with dynamic environments (moving objects, people)?

Modern SLAM systems use specialized techniques to detect and manage dynamic elements to remain accurate. One common approach is dynamic object detection, where systems use visual cues, depth data, or semantic segmentation to identify and ignore moving objects.

Algorithms like DynaSLAM or Co-Fusion isolate static backgrounds while excluding dynamic regions from mapping. Sensor fusion (combining camera, IMU, and depth sensor data) helps track stable features while filtering out motion noise. Also, methods like RANSAC reject outlier points that behave inconsistently. Advanced systems also apply semantic SLAM, using AI to recognize object types (e.g., people or vehicles) and handle them appropriately. These innovations let SLAM function effectively in crowded or changing environments, supporting robust AR, autonomous navigation, and robotics in real time. It is an essential, forward-thinking design consideration, one that keeps SLAM relevant to user needs rather than to the sound of a collision.
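To illustrate the RANSAC idea mentioned above, here is a deliberately simplified one-dimensional sketch: it estimates the camera's dominant shift between frames while rejecting points that moved on their own; real systems do this with full 3D poses:

```python
# Minimal RANSAC-style outlier rejection (illustrative, 1-D version):
# estimate the camera's shift between frames while ignoring points that
# moved independently (e.g., a passing person).
import random

def ransac_shift(prev, curr, iters=100, tol=0.05):
    """Estimate the dominant shift between matched points, rejecting outliers."""
    best_shift, best_inliers = 0.0, []
    for _ in range(iters):
        i = random.randrange(len(prev))
        candidate = curr[i] - prev[i]                      # hypothesis from one match
        inliers = [j for j in range(len(prev))
                   if abs((curr[j] - prev[j]) - candidate) < tol]
        if len(inliers) > len(best_inliers):
            best_shift, best_inliers = candidate, inliers
    return best_shift, best_inliers

prev = [1.0, 2.0, 3.0, 4.0, 5.0]
curr = [1.1, 2.1, 3.1, 4.1, 6.5]          # last point belongs to a moving object
shift, inliers = ransac_shift(prev, curr)
print(f"camera shift ≈ {shift:.2f}, static points: {inliers}")  # index 4 rejected
```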

Get a greater grasp of how to accommodate user needs in design.

How do you design interactions that respond to real‑world surfaces using SLAM?

Designers can leverage the spatial maps of SLAM to attach virtual UI elements to physical surfaces. For instance, surfaces can become interactive touchpoints or anchors for virtual controls. To make interactions natural, designers should prioritize surface detection, ensure object occlusion so that virtual elements appear behind real-world objects when appropriate, and provide visual consistency when users move or reorient their view. These practices reinforce spatial cohesion and boost immersion to levels that can safely delight.
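As a framework-agnostic sketch of surface anchoring, the snippet below projects a user's tap onto a SLAM-detected plane, represented here simply as a point and a normal (an assumption for illustration); real AR SDKs wrap this logic in hit tests and anchor objects:

```python
# Minimal sketch of anchoring virtual content to a SLAM-detected surface,
# assuming the SLAM layer already provides a plane as (point, normal)
# in world coordinates.
import numpy as np

def anchor_on_plane(plane_point, plane_normal, hit_point):
    """Project a user's tap (hit_point) onto the plane to place content."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)                                   # unit surface normal
    offset = np.asarray(hit_point) - np.asarray(plane_point)
    anchored = np.asarray(hit_point) - np.dot(offset, n) * n  # snap onto the surface
    return anchored, n                                       # position + "up" direction

floor_point, floor_normal = [0, 0, 0], [0, 1, 0]             # horizontal floor plane
pos, up = anchor_on_plane(floor_point, floor_normal, hit_point=[0.5, 0.02, -1.0])
print(pos)   # -> [ 0.5  0.  -1. ]  content sits flush on the floor
print(up)    # orient the object's up-axis along the surface normal
```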

Before any great design comes a prototype to build upon—enjoy our Master Class Rapid Prototyping for Virtual Reality with Cornel Hillmann, XR Designer, CG Artist and Author.

How can poor SLAM performance lead to bad user experiences in AR?

When SLAM underperforms, virtual objects may drift, float, or jitter. They may scale inconsistently or detach from real surfaces. These issues damage immersion and cause user frustration: misaligned content appears unnatural, which can trigger motion sickness or break interaction patterns. Depending on the type of AR experience and the consequences of acting on a misaligned object, the results can be more damaging still, for instance if content appears where it is not supposed to be. Stable SLAM is essential for believable, comfortable AR experiences.

Get more from AR design in our article How to Design for AR Experiences on the Go.

What are common challenges when building or using SLAM in apps?

Designers and developers face several SLAM challenges: high processing demands can drain batteries and slow performance; poor lighting or texture-less surfaces can degrade tracking; dynamic environments introduce moving occlusions; and sensor limitations or calibration issues can create drift or map inconsistencies.

Ensuring cross-device compatibility adds complexity, especially on mobile hardware with varying capabilities, and is another reason that applying SLAM demands careful thought if users are to enjoy the resulting applications in comfort.

Get a firmer grounding in the finer points and theoretical aspects that help build better SLAM-oriented experiences, in our article Beyond AR vs. VR: What is the Difference between AR vs. MR vs. VR vs. XR?.

How accurate is SLAM and what affects its accuracy?

SLAM accuracy depends on sensor quality, algorithm robustness, and the consistency of the environment. Rich textures and structured environments (walls, distinct features) improve feature matching and mapping, and high-quality cameras or lidar boost accuracy further.

Meanwhile, poor lighting, repetitive patterns, or dynamic scenes harm accuracy. Loop closure techniques, which recognize previously visited locations, can correct positional drift over time.
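To show the loop-closure idea at its simplest, here is an illustrative one-dimensional sketch that spreads the drift revealed by a recognized revisit back across the trajectory; real systems optimize a full pose graph in six degrees of freedom:

```python
# Minimal sketch of loop closure: when the device recognizes a previously
# visited spot, the gap between where it "thinks" it is and where it actually
# is reveals the accumulated drift, which is then spread over the trajectory.

def distribute_drift(trajectory, true_end):
    """Linearly spread the end-point error back over earlier poses."""
    drift = trajectory[-1] - true_end
    n = len(trajectory) - 1
    return [p - drift * (i / n) for i, p in enumerate(trajectory)]

path = [0.0, 1.02, 2.05, 3.08, 0.12]   # loop: device returned to start, drift = 0.12
print(distribute_drift(path, true_end=0.0))
# -> approximately [0.0, 0.99, 1.99, 2.99, 0.0]: corrected poses after loop closure
```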

Find helpful insights about spatial design the AR way in our article Spatial UI Design: Tips and Best Practices.

Can I use SLAM on mobile devices or wearables in real time?

Yes, modern mobile devices such as AR-capable smartphones and wearables integrate SLAM with real-time performance. They combine cameras, IMUs (inertial measurement units), and efficient Visual SLAM algorithms to map indoor spaces and support immersive AR experiences.

These on-device implementations enable real-time interaction—however, note that performance depends on hardware specs and optimization.

Go further into AR and consider important aspects about how to design for it, in our article Cross-Device AR: How to Ensure a Seamless Experience.

What are some helpful resources about SLAM for UX designers?

Jakl, A. (2018, September 21). Basics of AR: SLAM – Simultaneous localization and mapping. Andreas Jakl Blog. https://www.andreasjakl.com/basics-of-ar-slam-simultaneous-localization-and-mapping/

This detailed technical blog post explains SLAM concepts specifically within the context of ARKit, ARCore, and HoloLens implementations. Jakl breaks down complex algorithms into understandable components, explaining feature extraction, keypoint tracking, loop closure detection, and uncertainty handling in SLAM systems. The post addresses practical limitations UX designers encounter, such as drift correction, tracking loss, and environmental requirements for reliable SLAM performance. This resource helps UX designers understand why AR applications behave certain ways and how to design experiences that work within SLAM technical constraints, making it invaluable for evidence-based AR design decisions.

Wilson, T. (2018, June 15). The principles of good UX for augmented reality. UX Collective. https://uxdesign.cc/the-principles-of-good-user-experience-design-for-augmented-reality-d8e22777aabd

This practical design article addresses UX principles specifically for AR applications, with particular attention to spatial interaction challenges that SLAM technology presents. Wilson discusses comfortable interaction zones, spatial UI placement, and how tracking limitations affect user interface design decisions. The article provides actionable guidance on designing within SLAM constraints, considering device ergonomics and tracking accuracy when placing virtual objects in real space. For UX designers, this resource offers concrete design recommendations for creating usable AR experiences that account for SLAM tracking limitations and spatial computing.

Tourani, A., Bavle, H., Sanchez‑Lopez, J. L., & Voos, H. (2022). Visual SLAM: What Are the Current Trends and What to Expect? arXiv.

Tourani and colleagues survey state-of-the-art VSLAM systems, covering algorithm types, sensor variants (monocular, stereo, RGB‑D), and performance across real-world datasets. UX designers can use this knowledge to set expectations for visual tracking, environment diversity, and device limitations in AR/MR interfaces.


Literature on Simultaneous Localization and Mapping (SLAM)

Here's the entire UX literature on Simultaneous Localization and Mapping (SLAM) by the Interaction Design Foundation, collated in one place:

Learn more about Simultaneous Localization and Mapping (SLAM)

Take a deep dive into Simultaneous Localization and Mapping (SLAM) with our course UX Design for Augmented Reality.

It's Easy to Fast-Track Your Career with the World's Best Experts

Master complex skills effortlessly with proven best practices and toolkits directly from the world's top design experts. Meet your expert for this course:

  • Frank Spillers: Service Designer and Founder and CEO of Experience Dynamics.

All open-source articles on Simultaneous Localization and Mapping (SLAM)


Open Access—Link to us!

We believe in Open Access and the democratization of knowledge. Unfortunately, world-class educational materials such as this page are normally hidden behind paywalls or in expensive textbooks.

If you want this to change, link to us, or join us to help us democratize design knowledge!


Cite according to academic standards

Simply copy and paste the text below into your bibliographic reference list, onto your blog, or anywhere else. You can also just hyperlink to this page.

Interaction Design Foundation - IxDF. (2023, August 30). What is Simultaneous Localization and Mapping (SLAM)? Interaction Design Foundation - IxDF.