Scalable Solvers for Computational Science and Data Science: Multilevel, Nonlinearly Preconditioned, and Parallel-in-Time
In the current era of big data and big computational models, iterative numerical methods are crucial algorithmic building blocks for tackling large-scale problems in scientific computing and data analysis. Bridging the two worlds of computational science and data science, this talk will discuss recent advances in scalable iterative solvers for such problems, highlighting synergies in ideas and approaches between the two worlds. We will first extend well-known ideas of linear preconditioning for systems of algebraic equations to the setting of nonlinear preconditioning for optimization methods (e.g., L-BFGS and Nesterov's method), dramatically speeding up convergence for data analysis applications such as tensor decomposition and recommender systems. Next, we will discuss how recursive preconditioning on multiple levels can lead to efficient parallel-in-time integration of PDEs from computational science. Because processor clock speeds have stagnated, contemporary supercomputers may have millions of cores, and spatial parallelism alone quickly saturates on such machines; parallelism in time can help overcome this bottleneck. Challenges in applying these techniques to hyperbolic PDEs that model wave propagation will be discussed. We will conclude with some natural and some surprising combinations of these ideas across the two worlds: for example, parallel-in-time ideas can be applied to data analysis problems, and acceleration ideas can be combined with stochastic optimization methods and randomization.