Distributed Learning with Neural Networks
An allocation is sought to support research on large-scale statistical learning with neural networks. Statistical learning with neural networks, often referred to as "deep reinforcement learning" in the control setting considered here, has recently emerged as one of the most successful areas of machine learning. A neural network can learn, directly from data, both a model of a system's dynamics and the optimal strategy or control for achieving a desired objective. This is particularly useful for applications where no fundamental physical laws (e.g., Newton's laws) are known. Learning often requires large neural network models, sometimes with millions of parameters, and large amounts of data (terabytes). To address this significant computational expense, we propose to distribute the training of these models over many computational nodes on Blue Waters. This allocation would specifically support two research projects, the first an application and the second fundamental machine learning research: (1) an application of statistical learning with neural networks to optimal execution using limit order book data, and (2) the development of new optimization methods that leverage parallelization for training neural networks in deep reinforcement learning.
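To make the distributed-training proposal concrete, the sketch below simulates synchronous data-parallel training, the standard scheme for spreading learning across many nodes: each worker computes a gradient on its own shard of the data, the gradients are averaged (an all-reduce in an actual multi-node run), and all workers apply the same update. This is a minimal single-process illustration on a linear model; the worker count, learning rate, and problem sizes are illustrative, not parameters of the proposed research.

```python
import numpy as np

# Minimal sketch of synchronous data-parallel SGD. Each simulated "worker"
# holds one shard of the data and computes a local gradient; averaging the
# local gradients stands in for the all-reduce step of a multi-node run.
rng = np.random.default_rng(0)

n_workers, n_samples, n_features = 4, 400, 8
X = rng.normal(size=(n_samples, n_features))
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.01 * rng.normal(size=n_samples)

# Split the dataset into one shard per worker.
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

def local_gradient(w, X_s, y_s):
    """Mean-squared-error gradient computed on one worker's shard."""
    return 2.0 * X_s.T @ (X_s @ w - y_s) / len(y_s)

w = np.zeros(n_features)
lr = 0.05
for step in range(200):
    grads = [local_gradient(w, X_s, y_s) for X_s, y_s in shards]
    w -= lr * np.mean(grads, axis=0)  # simulated all-reduce, then update

final_loss = np.mean((X @ w - y) ** 2)
print(f"final MSE: {final_loss:.6f}")
```

Because the shard gradients are averaged before every update, this synchronous scheme computes exactly the full-batch gradient, so adding workers shortens wall-clock time per step without changing the optimization trajectory; the research proposed here targets the harder asynchronous and reinforcement-learning variants of this idea.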