Achieving high software quality today involves manual analysis, test planning, documentation of the testing strategy and test cases, and the development of scripts to support automated regression testing. To keep pace with software evolution, these test artifacts must also be updated frequently. Although test automation practices help mitigate the cost of regression testing, a large gap remains between the current paradigm and fully automated software testing. Researchers and practitioners are recognizing the potential of artificial intelligence and machine learning (ML) to help bridge the gap between the testing capabilities of humans and those of machines. This paper presents an ML approach that combines a language specification, which includes a grammar for describing test flows, with a trainable test flow generation model, so that test generation is trainable, reusable across different applications, and generalizable to new applications.
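To make the combination of a test-flow grammar and a generation model concrete, the sketch below shows one possible form such a grammar could take and how flows might be expanded from it. All rule names, action names, and the function `generate_flow` are hypothetical illustrations, not the specification or model described in this paper; the sketch substitutes uniform random choice where a trained generation model would select productions.

```python
# Minimal sketch, under the assumptions stated above: a toy grammar whose
# productions describe abstract test flows, plus a generator that expands
# the grammar into concrete action sequences. In the paper's approach a
# trainable model, rather than uniform random choice, would decide which
# production to expand next.
import random

# Grammar: each non-terminal maps to a list of alternative expansions.
# Lowercase strings are terminal test actions; uppercase names are non-terminals.
TEST_FLOW_GRAMMAR = {
    "FLOW": [["SETUP", "ACTIONS", "ASSERT"]],
    "SETUP": [["open_app"], ["open_app", "login"]],
    "ACTIONS": [["ACTION"], ["ACTION", "ACTIONS"]],
    "ACTION": [["click_button"], ["fill_form"], ["navigate_to_page"]],
    "ASSERT": [["verify_page_title"], ["verify_element_visible"]],
}


def generate_flow(symbol="FLOW", rng=random):
    """Expand a grammar symbol into a flat sequence of test actions."""
    if symbol not in TEST_FLOW_GRAMMAR:  # terminal action
        return [symbol]
    # A trained test flow generation model would score these alternatives;
    # this sketch simply picks one uniformly at random.
    expansion = rng.choice(TEST_FLOW_GRAMMAR[symbol])
    flow = []
    for child in expansion:
        flow.extend(generate_flow(child, rng))
    return flow


if __name__ == "__main__":
    for _ in range(3):
        print(" -> ".join(generate_flow()))
```

Because the grammar captures application-agnostic flow structure while the terminal actions can be bound to a specific application under test, the same generation machinery can, in principle, be reused and retrained across applications.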