NetLogo AR: Introduction

What Is It

NetLogo AR is a spatial Augmented Reality (AR) authoring toolkit that combines room-scale AR technology with NetLogo, an agent-based programming language widely used in scientific research and education. It is freely distributed as part of Turtle Universe, the mobile version of NetLogo. NetLogo AR enables the creation of room-scale multi-agent models, simulations, artworks, or games with existing NetLogo syntax, and the seamless blending of existing NetLogo models into the physical world around you.

The following figure demonstrates a sample transition included in this repository:

[Figure: sample transition]

The following video provides a quick overview of the technical system:

[NetLogo AR Video]

If you have any questions or need technical support, please post them to NetLogo’s Official Forum. If you find any bugs when using the system, please raise an issue here.

Supported Modalities

NetLogo AR currently supports three modalities:

  • Room-scale AR: In this modality, NetLogo AR attempts to recognize your physical surroundings (e.g., walls, doors, tables, chairs) at room scale. By default, it visualizes the recognized items as semi-translucent boxes. They are also mapped into the model as polygons or lines that can interact with NetLogo agents.
  • Plane-based AR: In this modality, NetLogo AR attempts to recognize planes in your physical surroundings (e.g., walls, floors, tables) as polygons. The outlines of the polygons are also mapped into the model as lines that can interact with NetLogo agents.
  • Non-AR: In this modality, since the device cannot acquire information directly from the physical surroundings, NetLogo AR supports loading from an existing save. A save can be exported from a supported device (e.g., a LIDAR-equipped iPad or iPhone). Turtle Universe also embeds a scan that is loaded by default.

Users can switch between AR (room-scale or plane-based) and non-AR at will. Switching between room-scale AR and plane-based AR requires a re-scan. Models created in one modality can be loaded in any other modality as needed.

Hardware Requirements

NetLogo AR is supported on all devices that Turtle Universe supports, including iOS, Android, Windows, macOS, and Chromebook. A room-scale AR experience created on one device can run on every device; however, only some devices support the visualization to the fullest extent. Here is a simplified matrix of NetLogo AR’s device-modality compatibility:

Device / Modality            | Room-scale AR | Plane-based AR | Non-AR (Top-down View)
iPad & iPhone with LIDAR     | Yes           | Yes            | Yes
iPad & iPhone without LIDAR  | No            | Yes            | Yes
Android with ARCore          | No            | Yes            | Yes
Windows, macOS, Chromebooks  | No            | No             | Yes

Frequently Asked Questions

Q: How can I know if my iOS device has LIDAR?
A: Apple provides a list of LIDAR-enabled devices here.
Q: Does NetLogo AR support VisionOS?
A: We do have plans for VisionOS, but do not have a device at this moment. If you have one, please let us know.
Q: How can I know if my Android device has ARCore?
A: If you see an “AR” button in the top-left corner of any model opened with Turtle Universe, your Android device likely has ARCore support.
Q: How do you install Turtle Universe on Chromebook?
A: You need to enable Google Play support on your Chromebook.

How To Use It

Try It Yourself

Before making a NetLogo model with NetLogo AR, consider playing with it first. We have made two examples that you can play with in Turtle Universe. Both models are open source, which can help you understand how the system works. If you make or see an interesting AR model, please let us know. We would be happy to include your model!

  • Ants (AR): The AR version of the classic Ants model. The main difference is that the ants interact with obstacles in the physical world, and you can play as an ant with infinite or limited vision.
  • Balls AR: An AR model in which multiple balls (circles) move randomly through physical space. The balls collide with real-world elements and with each other. You can play as one of the balls, with an option to attract the others.

Create an AR Model

Essentially, a NetLogo AR model is a NetLogo model that uses the XR and phys extensions.
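
In practice, this means declaring both extensions at the top of the model. A minimal declaration might look like the following (assuming the extensions are registered under the names xr and phys, matching the primitive prefixes used below):

```
extensions [ xr phys ]
```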

The XR extension provides primitives that allow you to convert a real-world scan into your NetLogo world. Specifically:

  • xr:room-scan initiates the scan and calls back when the scan is complete. The user might load an existing scan instead, which also counts; this allows the model to run on non-AR devices.
  • xr:resize-world automatically resizes the world based on the real-world dimensions. You can specify a minimum world size as an optional parameter.
  • xr:iterate-as-patches allows you to iterate through scanned items (walls, tables, etc.) as patches, so you can mark each patch as “impassable” or with a special flag.
  • xr:iterate-as-turtles does the same, but can be used to create physical turtles.

For example, the following code performs a setup-like action when the user finishes a scan:
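
A minimal sketch follows; the exact callback syntax of xr:room-scan, the anonymous-procedure form passed to xr:iterate-as-patches, and the argument to xr:resize-world are assumptions:

```
to setup
  ;; Start the room scan; the callback runs once the user finishes scanning
  ;; (or loads an existing save on a non-AR device).
  xr:room-scan [ ->
    clear-all
    ;; Resize the world to the scanned room, with an assumed minimum size of 16.
    xr:resize-world 16
    ;; Mark every patch covered by a scanned item (wall, table, ...) as
    ;; impassable; here, coloring it gray serves as the flag.
    ;; Assumption: the callback receives each covered patch as an argument.
    xr:iterate-as-patches [ scanned-patch ->
      ask scanned-patch [ set pcolor gray ]
    ]
    ;; Spawn the 50 balls mentioned below; make one red so it can follow the user.
    create-turtles 50 [
      set shape "circle"
      setxy random-xcor random-ycor
    ]
    ask one-of turtles [ set color red ]
    reset-ticks
  ]
end
```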

Then, use the phys extension to create physical turtles with polygon shapes. For example, the following code converts scanned walls, windows, and doors into edges, and other items into polygons:
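
Again, a sketch only: phys:set-edge and phys:set-polygon below are hypothetical stand-ins for the phys extension’s actual shape primitives, and the callback arguments and category labels are assumed:

```
to setup-physics
  ;; Assumption: the callback receives each scanned item as a turtle plus
  ;; its category label (e.g., "wall", "table").
  xr:iterate-as-turtles [ [scanned-item category] ->
    ask scanned-item [
      ifelse member? category ["wall" "window" "door"]
        [ phys:set-edge ]     ;; hypothetical: thin items become edges
        [ phys:set-polygon ]  ;; hypothetical: other items become solid polygons
    ]
  ]
end
```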

Finally, use xr:xcor and xr:ycor to trace the world coordinates of the user. For example, the following code makes the red turtle follow the user’s position:
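
Assuming xr:xcor and xr:ycor are reporters that return the user’s position already mapped into model coordinates, a sketch could be:

```
to go
  ask turtles with [ color = red ] [
    ;; Jump to the user's current position in the model space.
    setxy xr:xcor xr:ycor
  ]
  tick
end
```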

At this point, you will have a skeleton AR model that spawns 50 balls, one of which constantly follows the user’s location. Note that the behavior of xr:xcor and xr:ycor differs across AR modalities:

  • In room-scale AR, they report the user’s real-world location, mapped into the model space.
  • In plane-based AR, they report where the device’s camera center points, mapped into the model space.
  • In non-AR, they report the center of the user’s viewport, mapped into the model space.

License

The documentation and official samples in this repository are released under the MIT license. Individual authors, including children, own their respective models.

The AR and Physics extensions of NetLogo Web, which serve as the infrastructure of NetLogo AR, are released under the MIT license. Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at [email protected].

Citation Format

If you use NetLogo AR in your academic work, please cite it in the following format:

  • Chen, J. & Wilensky, U. (2023). NetLogo AR. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.

You may also be interested in the papers we have published, which describe how we use NetLogo AR with children:

  • Chen, J., Horn, M., & Wilensky, U. (2023, June). NetLogo AR: Bringing Room-Scale Real-World Environments Into Computational Modeling for Children. In Proceedings of the 22nd Annual ACM Interaction Design and Children Conference (pp. 736-739).
  • Chen, J., Zhao, L., Li, Y., Xie, Z., Wilensky, U., & Horn, M. S. (2024, May). “Oh My God! It’s Recreating Our Room!” Understanding Children’s Experiences with A Room-Scale Augmented Reality Authoring Toolkit. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems.

Hey John, any chance this might make a leap to a platform like the Oculus Quest 3 (potentially with 3D rendering)?

Yes, that should be fairly straightforward, as Turtle Universe is based on Unity3D and has a 3D rendering mode (essentially close to NetLogo 2.5D). I used to develop with those VR headsets some 5-6 years ago, and hopefully nothing has changed fundamentally.

I guess I need to send a separate thread on Turtle Universe here :wink:

Yes, please! And I would love to know whether the code for that is accessible so I could take a look.

I just checked our GitHub repos - unfortunately, Turtle Universe is not fully open source for now. If you are interested, we can start with the JavaScript layer, which has full inline documentation, but I didn’t write a readme. I should have done this long ago, but as a PhD student, there are too many obligations :frowning:

I think that it would be very exciting to be able to (a) run on Quest 3, and (b) be able to respond to/integrate the “topography” of the room. As one example, read “altitude” over the floor, and map this as a patch variable (e.g., ‘altitude’) in the model. With just that you could create a hydrology model that responded to mounds of clothing in the room. A next step would allow gestural interaction (‘pouring water’) on the mountaintop.

Of course, this seems involved enough (and cool enough) to merit a grant proposal. Anyone interested?

I am 100% interested!! Do you want to find a time to talk about it?

Yes, I agree that we may need to add a 2.5D mode and enable the 3D scan to capture altitude data. Both should be pretty straightforward and achievable. We wouldn’t even need much money to achieve that, but it would add a lot to this.

Sounds fun. It would be great to get a proof of concept together to make any search for funding concrete. I think the Quest 3 is good because it’s generally understandable - but I am also building an ‘AR room’ where I’ve got cameras that render the 3D space and do body tracking throughout. If we had NetLogo responding to room features, we could have a reactive simulation without headsets.

Sure, that’s what the current version of NetLogo AR already does - it detects the boundary of the room and some furniture and allows models to interact with it. It works without a headset and only on LIDAR-equipped tablets. In that sense, it is already a proof-of-concept and we ran it with children last year. Plus, it works on arbitrary room with a very brief configuration phase that children can run themselves (and they love the scan, too).

We might be able to pull off one more feature to make the proof of concept more complete: retrieving the altitude from the 3D scan. This is already supported to a lesser degree, since RoomScan does provide a height for the objects it detects (but not, say, mounds on the ground, for now). Potentially, we could also do body tracking and collaborative sessions, but those take more dev time.

Nice. Love the idea of using the room scan to read in altitude variation as a sim-accessible variable.

RE: future work on other ways of scanning the room - it sounds like if I wanted to integrate my “smart room”, I would bring that model into NetLogo AR in some way, as though it were coming from the device’s internal LIDAR. Does that sound right to you?

Sounds reasonable & very doable. NetLogo AR is built to accept different data streams and modalities, so you could send your external sensor data to computers, tablets, etc. - all the devices that we support. Alignment may be a minor concern, but it could be handled with a simple QR code scan (to align the coordinate systems for rendering).

Another cool thing is that once models are built, they can be used elsewhere - for example, one can currently scan a room and export the scan for another computer (without any sensor) to load. The model will behave in the same way.