Code testing tools

Our recent interviews with NetLogo users identified tools for testing code as a development need. Recent literature on good practices in agent-based modeling likewise identifies the lack of code testing, or even of a culture of testing, as a major concern: a sign that agent-based modeling is not yet a mature approach to science. Building code-testing tools into NetLogo would make testing easier, and would promote a code-testing culture by reminding users that code is not finished until it is tested.

Here I describe an idea I have for a code testing tool, and I look forward to seeing other ideas.

I often write custom code to test a particular submodel by executing it over wide ranges of values for several different inputs. For example, I might test a fish growth submodel using a test procedure that writes to file the growth rate calculated by the submodel over wide ranges of food intake, swimming speed, temperature, and fish size. Then I analyze that output in Excel (or a similar tool) to search for errors.
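
To make this concrete, here is a minimal sketch of that kind of hand-written test procedure. All the names are hypothetical: it assumes a turtles-own variable fish-length, a patches-own variable food, a global temperature, a procedure update-food-intake, and a turtle reporter growth (the same names used in the examples below). The loop ranges correspond to BehaviorSpace-style specifications such as [5 5 25].

    ;; Sketch of a hand-written sweep test over three inputs.
    ;; Assumes hypothetical names: fish-length (turtles-own), food
    ;; (patches-own), temperature (global), update-food-intake, growth.
    to test-growth-submodel
      let a-fish one-of turtles
      let fishs-patch [patch-here] of a-fish
      file-open "growth-test-output.csv"
      file-print "fish-length,food,temperature,growth"
      foreach (range 5 30 5) [ len ->             ;; lengths 5, 10, ..., 25
        foreach (range 0 21 1) [ food-level ->    ;; food 0, 1, ..., 20
          foreach (range 0 32 2) [ temp ->        ;; temperature 0, 2, ..., 30
            ask a-fish [ set fish-length len ]
            ask fishs-patch [ set food food-level ]
            set temperature temp
            ask a-fish [ update-food-intake ]
            file-print (word [fish-length] of a-fish "," [food] of fishs-patch
              "," temperature "," [growth] of a-fish)
          ]
        ]
      ]
      file-close
    end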

I can envision a tool somewhat like BehaviorSpace that would produce this kind of test output. It would let users:

  1. Create local variables that identify specific turtles and/or patches, for example:
  • let a-fish one-of turtles
  • let fishs-patch [patch-here] of a-fish
  2. Create other local variables that are used as input to the code being tested, and specify ranges of values for those local variables (or observer variables). The BehaviorSpace syntax for specifying ranges of values could be used. For example:
  • let fish-lengths [5 5 25]
  • let patch-foods [0 1 20]
  • let temperatures [0 2 30]
  3. Provide a block of code that updates the submodel for each combination of input variable values:
  • ask a-fish [set fish-length fish-lengths]
  • ask fishs-patch [set food patch-foods]
  • set temperature temperatures ; temperature is an observer variable
  • ask a-fish [update-food-intake] ; execute a procedure affecting growth
  4. Specify the outputs to be reported for each combination of input values, such as:
  • [fish-length] of a-fish
  • [food] of fishs-patch
  • temperature
  • [growth] of a-fish ; a turtle reporter that calculates growth

Then, of course, the user could execute the tester so it creates an output file with all the results, automating the kind of hand-written loop sketched above. The output file could also include metadata documenting exactly what code was executed and how.


Great idea! I wonder if we could start by building an extension/library for this?

I agree this is a great idea. It seems what you are proposing here is an easier way to do meso- and macro-level testing as per the framework in the paper “A generic testing framework for agent-based simulation models” (Journal of Simulation).

I totally agree, and I also think we need better tools and best practices for unit testing (micro-level testing).
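
Even without new tools, a bare-bones assertion helper is easy to hand-roll in plain NetLogo. A sketch of what that might look like, with an illustrative clamp reporter standing in as the code under test:

    ;; Minimal hand-rolled assertion helper (illustrative, not a real library).
    to assert [ condition description ]
      ifelse condition
        [ print (word "PASS: " description) ]
        [ print (word "FAIL: " description) ]
    end

    ;; A hypothetical reporter under test.
    to-report clamp [ x lo hi ]
      report max (list lo (min (list hi x)))
    end

    ;; Example micro-level test procedure.
    to test-clamp
      assert ((clamp 5 0 10) = 5)   "in-range value is unchanged"
      assert ((clamp -3 0 10) = 0)  "low value is clamped to the minimum"
      assert ((clamp 99 0 10) = 10) "high value is clamped to the maximum"
    end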

There is also work on “property-based testing” that we should look into and potentially build tools for: “Specification testing of agent-based simulation using property-based testing” (Autonomous Agents and Multi-Agent Systems).
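
To make the idea concrete: a property-based test generates many random inputs and checks that an invariant holds for every one of them. A rough NetLogo sketch, reusing the hypothetical fish-growth names from above and an assumed invariant (growth is never negative when food is available):

    ;; Hypothetical property-based test: random inputs, one invariant checked
    ;; on every trial. Assumes the same fish-length / food / temperature /
    ;; update-food-intake / growth names as the sweep example above.
    to test-growth-property [ n-trials ]
      let failures 0
      repeat n-trials [
        ask one-of turtles [
          set fish-length 5 + random-float 20             ;; random length in [5, 25)
          ask patch-here [ set food 1 + random-float 19 ]  ;; food always > 0
          set temperature random-float 30
          update-food-intake
          if growth < 0 [
            set failures failures + 1
            show (word "property violated: growth = " growth)
          ]
        ]
      ]
      print (word failures " of " n-trials " random trials violated the property")
    end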

Regarding John’s point, I think all of these things could be done as extensions.

I have now drafted two concepts for code testers. One would be BehaviorSpace-like and test a piece of code over wide ranges of selected variables that affect it. The second would be an extension (perhaps) that records the values of selected variables at the start and end of a selected procedure. The NetLogo development team has these concept descriptions.
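
For the second concept, a rough plain-NetLogo approximation might look like the sketch below; an extension could do the same thing generically, without hard-coding the variables to record or the procedure to wrap.

    ;; Sketch of the second concept: record selected variables at the start
    ;; and end of one procedure call. Variable and procedure names are the
    ;; hypothetical ones from the earlier examples.
    to trace-update-food-intake
      ask one-of turtles [
        let before (list fish-length ([food] of patch-here) temperature)
        update-food-intake
        let after (list fish-length ([food] of patch-here) temperature)
        print (word "before [fish-length food temperature]: " before)
        print (word "after  [fish-length food temperature]: " after)
      ]
    end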


What do you mean by the “concept descriptions” we have?
