Evolutionary Neural Networks Unveil Transition from Short to Long-Range Interactions with Minimal Evolutionary Pressures
ORAL
Abstract
Many physical systems have complex properties not found in their individual components. For instance, there is no crisis in a dollar bill, nor is there intelligence in a single neuron. Studying multi-agent systems, however, provides insights into the origin of these emergent processes.
Our work focuses on collective motion, an emergent behavior observed in systems of agents that follow simple local rules. Some strategies for modeling such systems use polarized agents that self-align with their neighbors, while others rely on game theory. Yet most of these models depend on the imposition of phenomenological rules. Conversely, some machine learning methods try to explain the origin of these rules, but they typically require stationary environments, which are rare in nature.
To circumvent these limitations, we propose a solution that employs the evolutionary training of neural networks. Our method consists of agents equipped with neural networks whose weights are shaped by an evolutionary process. This way, agents learn to improve a fitness function, such as staying closer to their neighbors. This approach has clear benefits. For example, it mirrors a natural process in which social animals learn through fear. Moreover, it favors adaptation regardless of the stationarity of the environment.
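To make the method above concrete, here is a minimal sketch of evolutionary training for neural-network agents. It assumes a cohesion-only fitness (negative mean distance to the flock centroid) and a simple keep-the-champion (1+1) evolution strategy; the network architecture, parameter values, and names such as `policy` and `evaluate` are illustrative choices, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 30     # agents in the flock
HIDDEN = 4        # hidden units in each agent's tiny network
STEPS = 50        # simulation steps per fitness evaluation
GENERATIONS = 20  # evolutionary generations
SPEED = 0.05      # constant agent speed
MUT_STD = 0.1     # mutation noise applied to network weights

def make_weights():
    # Inputs: offset to flock centroid (x, y) and own heading (cos, sin).
    # Output: a bounded turning rate.
    return [rng.normal(0, 1, (4, HIDDEN)), rng.normal(0, 1, (HIDDEN, 1))]

def policy(w, obs):
    # Tiny feed-forward network mapping local observation -> turn angle.
    h = np.tanh(obs @ w[0])
    return np.tanh(h @ w[1]).item() * 0.5

def evaluate(w):
    # Fitness: negative mean distance to the centroid (i.e., cohesion).
    pos = rng.uniform(-1, 1, (N_AGENTS, 2))
    heading = rng.uniform(-np.pi, np.pi, N_AGENTS)
    total = 0.0
    for _ in range(STEPS):
        centroid = pos.mean(axis=0)
        for i in range(N_AGENTS):
            obs = np.array([*(centroid - pos[i]),
                            np.cos(heading[i]), np.sin(heading[i])])
            heading[i] += policy(w, obs)
        pos += SPEED * np.c_[np.cos(heading), np.sin(heading)]
        total -= np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean()
    return total / STEPS

def mutate(w):
    # Gaussian perturbation of every weight matrix.
    return [layer + rng.normal(0, MUT_STD, layer.shape) for layer in w]

# (1+1)-style evolution: keep a champion, accept mutants that score better.
champ = make_weights()
champ_fit = evaluate(champ)
for _ in range(GENERATIONS):
    child = mutate(champ)
    fit = evaluate(child)
    if fit > champ_fit:
        champ, champ_fit = child, fit
```

Because selection acts only on cohesion, any alignment the evolved policies exhibit emerges rather than being imposed, which is the point the abstract makes.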
The results we obtained with this model suggest that the pressure to stay closer to one's neighbors naturally leads to alignment. Moreover, distinct migration patterns such as lanes, lines, swarms, fronts, wave patterns, and flocking can be obtained by adjusting parameters like field of vision, inertia, and noise. We also observed that tolerance to noise varies across patterns, which can inform the design of artificial social systems.
Presenters
-
Guilherme Giardini
Northern Arizona University
Authors
-
Guilherme Giardini
Northern Arizona University
-
Carlo R daCunha
Northern Arizona University
-
John F Hardy
Northern Arizona University