{"id":366,"date":"2020-09-03T06:34:59","date_gmt":"2020-09-03T06:34:59","guid":{"rendered":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/?page_id=366"},"modified":"2020-10-22T13:50:02","modified_gmt":"2020-10-22T20:50:02","slug":"schedule","status":"publish","type":"page","link":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/schedule\/","title":{"rendered":"Schedule"},"content":{"rendered":"<h2>\n\t\tSchedule of Events\n\t<\/h2>\n\tPLENARY SESSION 1<br \/>\n<em>Please note all times below are PDT (U.S. Pacific Daylight Time)<\/em>\n\t<p>8:30 AM<\/p>\n\t<strong>Pramod Khargonekar<br \/>\nVice Chancellor for Research, UCI<\/strong><br \/>\nIntroductions\n\t<p>8:45 AM<\/p>\n\t Keynote Address<br \/>\n<strong>Stanley Osher<br \/>\n<\/strong><em>Innovations in Mean-Field Game Theory for Scalable Computation and Diverse Applications<\/em>\n\t\t\t<a href=\"#\" id=\"fl-accordion--label-0\" tabindex=\"0\" aria-controls=\"fl-accordion--panel-0\">Abstract<\/a>\n\t\t<p>Mean field games play essential roles in AI, 5G communications, unmanned aerial vehicle path planning, social norms, and controlling natural disasters, such as COVID-19. In this talk, we present several results from our MURI team in 2019-2020. We designed fast and reliable numerical algorithms with connections to AI and machine learning, and formulated models for mean-field inverse problems, velocity control for massive rotary-wing UAVs, controlling the spread of the COVID-19 pandemic, etc. Several numerical examples and engineering experiments will be presented. Future directions will be discussed. 
This is based on joint work with many people at UCLA, University of South Carolina (Wuchen Li, who recently moved to UofSC), University of Houston, and Princeton University.<\/p>\n\t<p>9:30 AM<\/p>\n\t<strong>Kipton Barros<\/strong><br \/>\n<em>Automated discovery of a robust interatomic potential for aluminum<\/em>\n\t\t\t<a href=\"#\" id=\"fl-accordion--label-0\" tabindex=\"0\" aria-controls=\"fl-accordion--panel-0\">Abstract<\/a>\n\t\t<p>Machine learning is emerging as a powerful tool for emulating electronic structure calculations. I will discuss recent work in building interatomic potentials relevant to chemistry, materials science, and biophysics applications. A key idea is active learning, in which the training data is iteratively collected to address weaknesses of the ML model. This approach can achieve a surprising level of transferability, as will be demonstrated with a case study for elemental aluminum.<\/p>\n\t<p>10:00 AM<\/p>\n\t<strong>Kieron Burke<\/strong><br \/>\n<em>Machine learning for electronic structure calculations and a new approach to warm dense matter simulations<\/em>\n\t\t\t<a href=\"#\" id=\"fl-accordion--label-0\" tabindex=\"0\" aria-controls=\"fl-accordion--panel-0\">Abstract<\/a>\n\t\t<p>Electronic structure calculations using density functional theory have become common in many areas of science and technology. I will give a brief overview of the impact of machine learning on electronic structure theory over the past decade [1]. I will highlight the work at UC Irvine to employ machine learning to find new and better density functional approximations [2,3]. Toward the end, I will also explain a new methodology for improving simulations of warm dense matter that are relevant to the National Labs [4].<\/p>\n<p>References<\/p>\n[1] von Lilienfeld, O.A., Burke, K. Retrospective on a decade of machine learning for chemical discovery. Nat Commun 11, 4895 (2020).<br \/>\n[2] Bogojeski, M., Vogt-Maranto, L., Tuckerman, M.E. et al. 
Quantum chemical accuracy from density functional approximations via machine learning. Nat Commun 11, 5223 (2020).<br \/>\n[3] Li, L., Hoyer, S., Pederson, R., Sun, R., Cubuk, E., Riley, P., Burke, K. Kohn-Sham equations as regularizer: building prior knowledge into machine-learned physics. arXiv:2009.08551 (2020).<br \/>\n[4] McCarty, R.J., Perchak, D., Pederson, R., Evans, R., Qiu, Y., White, S.R., Burke, K. Bypassing the energy functional in density functional theory: Direct calculation of electronic energies from conditional probability densities. arXiv:2007.01890 (2020).\n\t<p>10:30 AM<\/p>\n\t<p> Lightning Talks (Video Presentation)<\/p>\n\t<strong>\u00a0[ 15 Minute Break ]<br \/>\n<\/strong>\n\t<p>11:15 AM<\/p>\n\t<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/wp-content\/uploads\/sites\/10\/2020\/09\/present-40.png\" alt=\"\" width=\"25\" height=\"25\" \/> Poster Session<\/p>\n\t<strong>\u00a0[ 15 Minute Break ]<br \/>\n<\/strong>\n\t<p>PLENARY SESSION 2<\/p>\n\t<p>12:30 PM<\/p>\n\t<strong>Brian Spears<br \/>\n<\/strong><em>Cognitive Simulation: Combining simulation and experiment with artificial intelligence<\/em>\n\t\t\t<a href=\"#\" id=\"fl-accordion--label-0\" tabindex=\"0\" aria-controls=\"fl-accordion--panel-0\">Abstract<\/a>\n\t\t<p>Large-scale scientific endeavors often focus on improving predictive capabilities by challenging theory-driven simulations with experimental data. 
Yet, both simulation and experiment have become overwhelmingly rich, with a complex set of observables including scalars, vector-valued data, and various images.\u00a0 At Lawrence Livermore National Laboratory (LLNL), we are using modern artificial intelligence (AI) technologies to combine predictive simulation models with rich experimental data.\u00a0 We call this set of methods cognitive simulation (CogSim).\u00a0 We will describe a strategic LLNL research effort aimed at using recent advances in deep learning, computational workflows, and computer architectures to develop improved predictive models.<\/p>\n<p>We will present our progress toward our CogSim vision using work from a wide range of applications, including inertial confinement fusion research at the world\u2019s largest laser, the National Ignition Facility (NIF).\u00a0 We will describe advances in multi-modal machine learning architectures, state-of-the-art tools for uncertainty quantification and model training at enormous scale, and efforts to explore next-generation computational platforms.<\/p>\n\t<p>1:00 PM<\/p>\n\t<strong>Stephan Mandt<\/strong><br \/>\n<em>Machine Learning and Physics: Bridging the Gap<\/em>\n\t\t\t<a href=\"#\" id=\"fl-accordion--label-0\" tabindex=\"0\" aria-controls=\"fl-accordion--panel-0\">Abstract<\/a>\n\t\t<p>Physicists and machine learners have much in common but speak different languages. Both communities deal with high-dimensional state spaces and build sophisticated approximation schemes to develop tractable models. In the first part of my talk, I will show how (1) Bayesian matrix factorization methods can be used to improve over physics-based modeling of thermodynamic properties of fluid mixtures in chemical engineering and how (2) variational autoencoders can help reveal hidden patterns in climate science data sets. 
As an example of how physics can influence machine learning, I will talk about spontaneous symmetry breaking in time series models and an improved learning algorithm, Goldstone gradient descent.<\/p>\n\t<strong>\u00a0[ 15 Minute Break ]<br \/>\n<\/strong>\n\t<p>1:45 PM<\/p>\n\t<strong>Gowri Srinivasan<\/strong><br \/>\n<em>Combining Graph Theory and Machine Learning to Characterize Fractured Systems<\/em>\n\t\t\t<a href=\"#\" id=\"fl-accordion--label-0\" tabindex=\"0\" aria-controls=\"fl-accordion--panel-0\">Abstract<\/a>\n\t\t<p>Fractured systems are ubiquitous in natural and engineered applications as diverse as hydraulic fracturing, underground nuclear test detection, corrosive damage in materials, and brittle failure of metals and ceramics. Microstructural information (fracture size, orientation, etc.) plays a key role in governing the dominant physics for these systems but can only be known statistically. Current models either ignore or idealize microscale information at these larger scales because we lack a framework that efficiently utilizes it in its entirety to predict macroscale behavior in brittle materials. We propose a method that integrates computational physics, machine learning, and graph theory to make a paradigm shift from computationally intensive high-fidelity models to coarse-scale graphs without loss of critical structural information. We exploit the underlying discrete structure of fracture networks in systems considering both flow through fractures and fracture propagation. We demonstrate that compact graph representations require significantly fewer degrees of freedom (dof) to capture micro-fracture information and further accelerate these models with machine learning. 
Our method has been shown to improve the accuracy of predictions while providing up to four orders of magnitude of speedup.<\/p>\n\t<p>2:15 PM<\/p>\n\t<strong>Eric Mjolsness<\/strong><br \/>\n<em>AI approaches to graph dynamics for multiscale computational science<\/em>\n\t\t\t<a href=\"#\" id=\"fl-accordion--label-0\" tabindex=\"0\" aria-controls=\"fl-accordion--panel-0\">Abstract<\/a>\n\t\t<p>A key operation in multiscale science is the mathematical change of scale under which many fine-scale variables are replaced by fewer coarse-scale variables in an approximate, emergent dynamics. This inter-scale relationship has two important points of interaction with artificial intelligence: (a) it may be learned from fine-scale simulation data, and (b) it may require a substantial change of problem representation up to and including the introduction of dynamic graphs: collections of labelled nodes (vertices with spatial positions) and their connecting links (edges). I will briefly introduce three methods that point in these directions: [1] a combination of algebraic multigrid methods with graph neural networks applied to microtubule biomechanics; [2] a \u201cDynamic Boltzmann Distribution\u201d method for learning model reduction of stochastic spatial biochemical networks; and [3] a meta-language for stochastic spatial graph dynamics, Dynamical Graph Grammars, whose underlying theory is related to operator algebras and may be understood combinatorially.<\/p>\nReferences<br \/>\n[1] C.B. Scott and Eric Mjolsness. 
In IOP Machine Learning: Science and Technology, accepted manuscript, <a href=\"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abb6d2\">https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abb6d2<\/a>, August 2020.<br \/>\n[2] Ernst, Bartol, Sejnowski, Mjolsness. Phys. Rev. E 99, 063315, June 2019.<br \/>\n[3] E. Mjolsness, \u201cStructural Commutation Relations for Stochastic Labelled Graph Grammar Rule Operators\u201d, arXiv: <a href=\"http:\/\/arxiv.org\/abs\/1909.04118\">http:\/\/arxiv.org\/abs\/1909.04118<\/a>, August 2019.\n\t<p>2:45 PM<\/p>\n\t<strong>John Sarrao<\/strong><br \/>\n<strong>Deputy Director, Science, Technology, and Engineering, LANL<\/strong><br \/>\nClosing Remarks\n","protected":false},"excerpt":{"rendered":"<p>Schedule of Events PLENARY SESSION 1 Please note all times below are PDT (U.S. Pacific Daylight Time) 8:30 AM Pramod Khargonekar Vice Chancellor for Research, UCI Introductions 8:45 AM Keynote Address Stanley Osher Innovations in Mean-Field Game Theory for Scalable Computation and Diverse Applications Abstract Mean field games play essential roles in AI, 5G communications, 
[&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"site-sidebar-layout":"default","site-content-layout":"page-builder","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"disabled","ast-breadcrumbs-content":"","ast-featured-img":"disabled","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"default","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"class_list":["post-366","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/wp-json\/wp\/v2\/pages\/366","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/wp-json\/wp\/v2\/comments?post=366"}],"version-history":[{"count":12,"href":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/wp-json\/wp\/v2\/pages\/366\/revisions"}],"predecessor-version":[{"id":1136,"href":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/wp-json\/wp\/v2\/pages\/366\/revisions\/1136"}],"wp:attachment":[{"href":"https:\/\/sites.research.uci.edu\/frontiers-machine-learning\/wp-json\/wp\/v2\/media?parent=366"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}