When I try to use the ‘Run metrics when’ feature in BehaviorSpace, the results for the last tick of a run are not recorded. Specifically, when I enter “ticks = 1000”, the only time results are recorded is when ticks = 1000. If the run does not last 1000 ticks, nothing is recorded; if the run lasts more than 1000 ticks, the only results I see are those for ticks = 1000.
P.S. I tried to upload the model file and sample output, but received a message indicating that new users are not allowed to upload files.
I created an experiment with “Run metrics when” set to “ticks = 1000” and “Time limit” set to “2000”, and in the spreadsheet output I see metrics reported at both 1000 and 2000 ticks. Since you can’t upload a file, could you describe how to construct a minimal reproducible example? I may have set up my example under slightly different conditions than yours. Thanks,
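P.S. In case it helps, here is roughly the kind of throwaway test model I used. This is only a sketch; the procedures and the count turtles metric are placeholders I made up for testing, not anything from your model:

```
; minimal sketch of a disposable test model
to setup
  clear-all
  create-turtles 10
  reset-ticks
end

to go
  ask turtles [ forward 1 ]
  tick
end

; BehaviorSpace experiment on this sketch:
;   setup commands:      setup
;   go commands:         go
;   metric:              count turtles
;   "Run metrics when":  ticks = 1000
;   "Time limit":        2000
```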
Thanks for getting back to me. I tried using something other than 0 for “Time limit”, but without success. I also tried to upload the model file and sample output, but was again informed that new users are not allowed to upload files. I am not sure how to describe a minimal reproducible example in words, so perhaps you could send me a copy of the model you used to run your experiment and that will help me. In the meantime, I will see if there is a way to contact the forum moderators about uploading my files.
I just received a message indicating that I can now upload files. I attach the model file, which includes an experiment, and the results comparison for the cases I mentioned in my first post.
Thank you for sending an example model! I ran it on the released version of NetLogo 6.4.0 and ran into the same bug you describe. Thankfully, when I ran the example on the current development version of NetLogo, it worked as expected! That means the bug has been fixed since the last release, but unfortunately you won’t be able to get the fix on your computer until the next release. In the meantime, my best suggestion is to explicitly add the ending tick numbers to your “Run metrics when” condition, since from my local testing the ending tick number seems to be consistent for each run in the model you sent. I’m sorry that there isn’t an easy immediate fix, but we appreciate your bug report! Let me know if you have any questions,
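P.S. To illustrate what I mean by adding the ending tick: if your runs always end at, say, tick 1500 (1500 is just a placeholder here; use whatever ending tick actually appears in your output), the condition could become something like:

```
ticks = 1000 or ticks = 1500
```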
Thanks. If it is going to be some time before the next release, it would be nice if a note were added to the documentation explaining this bug so that it does not trip up others.
In the meantime, I found a different workaround: I run the experiment once with no time limit and once with the time limit set to 1000. This is obviously not ideal, but it is the best I can do for now. The only reason I can imagine that you saw the same ending tick for each model run is that the model uses a random seed, and none of the parameters, including the seed, were changed across your runs. If any of the parameters change, so will the ending tick in most runs.
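To spell out what I mean about the seed, here is an illustrative sketch (not my actual model): with a fixed seed and a stochastic stopping rule like the one below, every run unfolds identically and therefore ends at the same tick, but change the seed, or any parameter that affects the random draws, and the ending tick changes too.

```
; illustrative only, not the model attached above
to setup
  clear-all
  random-seed 12345                     ; fixed seed: every run unfolds identically
  create-turtles 10
  reset-ticks
end

to go
  ask turtles [ forward random-float 1 ]
  tick
  if random-float 1 < 0.001 [ stop ]    ; stochastic stop, so the ending tick depends on the seed
end
```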