The evolution of natural life is guided by a perpetually adaptive set of rules, from natural laws and human policies to game mechanics. Automated game design embodies these rules by creating simulated environments populated by AI agents, aligning with the goals of artificial life research, which seeks to replicate the dynamics of biological life through computational models. This paper presents a comprehensive framework, Rule Generation Networks (RGN), for automated rule design, evaluation, and evolution under controllable expectations. We refine and formalize three cardinal elements - rules, strategies, and evaluation - to elucidate the intricate relationships inherent in rule generation tasks. RGN integrates generative neural networks for rule design with a suite of reinforcement learning models for rule evaluation. To demonstrate rule evolution and adaptation across varying environments, we introduce a controllability metric that gauges game dynamics and guides the evolution of the rule designer. Furthermore, we develop two game environments, Maze Run and Trust Evolution, modelling human exploration and societal trade dynamics respectively, to gamify and evaluate the generated rules.
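As a rough illustration of the design-evaluate-evolve loop the abstract describes, the following minimal Python sketch pairs a rule generator with a rule evaluator and a controllability score. Everything here is an assumption made for illustration, not the authors' implementation: the paper's generative neural network is replaced by a Gaussian sampler over a single rule parameter (a "trap density"), and its reinforcement-learning evaluators by a random-walk agent in a toy maze-like setting, so the example stays self-contained and runnable.

import random
import statistics


class RuleGenerator:
    """Samples a scalar rule parameter (e.g. trap density in a toy maze)."""

    def __init__(self, mean=0.5, std=0.2):
        self.mean, self.std = mean, std

    def sample(self):
        # Clamp the sampled rule to a valid probability range.
        return min(max(random.gauss(self.mean, self.std), 0.0), 1.0)

    def adapt(self, best_rule, lr=0.3):
        # Shift the generator toward rules that scored well
        # (stand-in for updating a generative network).
        self.mean += lr * (best_rule - self.mean)


def evaluate_rule(trap_density, episodes=50, horizon=30):
    """Mean survival time of a random-walk agent under the given rule."""
    times = []
    for _ in range(episodes):
        t = 0
        while t < horizon and random.random() > trap_density:
            t += 1
        times.append(t)
    return statistics.mean(times)


def controllability(observed, target):
    """Closeness of the observed game dynamics to the designer's expectation."""
    return -abs(observed - target)


def evolve(target_survival=15.0, generations=20, candidates=8):
    gen = RuleGenerator()
    for g in range(generations):
        pool = [gen.sample() for _ in range(candidates)]
        scored = [(controllability(evaluate_rule(r), target_survival), r) for r in pool]
        best_score, best_rule = max(scored)
        gen.adapt(best_rule)
        print(f"gen {g:2d}: rule={best_rule:.3f} score={best_score:.2f}")
    return gen.mean


if __name__ == "__main__":
    evolve()

Over the generations, the sampler's mean drifts toward trap densities whose induced survival times match the target expectation, mirroring in miniature how the framework evolves its rule designer toward controllable game dynamics.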
Cite:
@ARTICLE{10323083,
  author={Pu, Jiyao and Duan, Haoran and Zhao, Junzhe and Long, Yang},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  title={Rules for Expectation: Learning to Generate Rules via Social Environment Modelling},
  year={2023},
  pages={1-1},
  keywords={Games;Task analysis;Metaverse;Computational modeling;Biological system modeling;Reinforcement learning;Artificial intelligence;Rule generation;procedural content generation;artificial life;generative networks;reinforcement learning;automated game design},
  doi={10.1109/TCSVT.2023.3334526}}