Program
8:40-8:55 | Registration
8:55-9:00 | Opening Remarks
9:00-9:50 | Ying-Jer Kao
Title: Generation of ice states through deep reinforcement learning
Abstract: We present a deep reinforcement learning framework in which a machine agent is trained to search for a policy that generates ground states of the square ice model by exploring the physical environment. After training, the agent is capable of proposing a sequence of local moves that achieves the goal. Analysis of the trained policy and the state value function indicates that the ice rule and the loop-closing condition are learned without prior knowledge. We test the trained policy as a sampler in Markov chain Monte Carlo and benchmark it against the baseline loop algorithm. This framework can be generalized to other models with topological constraints, where generating constraint-preserving states is difficult.
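For readers unfamiliar with the constraint involved, a minimal sketch of the two-in, two-out ice rule that any generated state must satisfy is given below. This is our illustration, not the speaker's code: the ±1 array representation and the function name are assumptions.

    import numpy as np

    def ice_rule_satisfied(h, v):
        """Check the two-in, two-out ice rule on an L x L square lattice
        with periodic boundaries.

        h[i, j] = +1 if the arrow on the horizontal edge leaving vertex
        (i, j) points right (away from the vertex), -1 otherwise;
        v[i, j] = +1 if the arrow on the vertical edge leaving vertex
        (i, j) points up, -1 otherwise.
        """
        # Net outward flux at each vertex: own outgoing edges minus the
        # neighbours' edges that point into this vertex.
        flux = (h - np.roll(h, 1, axis=1)) + (v - np.roll(v, 1, axis=0))
        # Two arrows in and two out at every vertex means zero net flux.
        return bool(np.all(flux == 0))

    # Example: all arrows pointing right and up is a valid ice state.
    L = 8
    print(ice_rule_satisfied(np.ones((L, L), int), np.ones((L, L), int)))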
9:50-10:40 | Fu-Jiun Jiang
Title: Phase transitions of two-dimensional and three-dimensional Potts models: a case study using supervised neural networks
Abstract: Using techniques from neural networks (NN), we investigate the phase transitions of two-dimensional and three-dimensional Q-state Potts models. Unlike the conventional NN approach, our studies use the expected ground-state configurations as the training sets. Notably, with the constructed NN, the designed training sets, and the magnitude of the output vectors, we are not only able to calculate the critical temperatures of the investigated models accurately, but can also determine the nature of the phase transitions, namely whether they are first order or second order. Our method has the advantage that no prior knowledge of the critical points is required to study the phase transitions of these systems.
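As a concrete illustration of the unconventional training set, here is a minimal sketch (ours, with a plain linear least-squares map standing in for the speaker's network; sizes are placeholders): the Q uniform ground states are the only labelled training data, and the magnitude of the output vector on sampled configurations is the quantity monitored across the transition.

    import numpy as np

    L, Q = 16, 3  # lattice size and number of Potts states (illustrative)

    def features(config):
        """Flatten an L x L Potts configuration into a one-hot vector."""
        return np.eye(Q)[config].ravel()

    # Training set: only the Q uniform ground states, labelled by colour.
    X = np.stack([features(np.full((L, L), q)) for q in range(Q)])
    Y = np.eye(Q)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # stand-in for NN training

    def output_magnitude(config):
        return np.linalg.norm(features(config) @ W)

    # Ordered configurations give magnitude ~ 1, fully disordered ones
    # ~ 1/sqrt(Q); tracking this magnitude against temperature locates Tc.
    print(output_magnitude(np.full((L, L), 0)))                 # ~ 1.0
    print(output_magnitude(np.random.randint(Q, size=(L, L))))  # ~ 0.58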
10:40-11:00 | Break
11:00-11:50 | Ming-Chiang Chung
Title: Deep learning of topological phase transitions from entanglement aspects
Abstract: Serving as a good example of a topological phase transition without symmetry breaking, the p-wave superconductor proposed by Kitaev has been well studied from different aspects, for instance Berry phases, Chern numbers, edge states of open chains, etc. In particular, from the quantum information point of view, its ground-state properties have been investigated analytically or numerically using either entanglement spectra (ES) or entanglement eigenvectors (EE). To understand how much information these quantities carry, we study 1D topological phase transitions via a deep learning approach. In this work, we use different quantities related to ES and EE, such as Majorana correlation matrices (MCMs) or block correlation matrices (BCMs), as inputs to the deep learning process, and examine which quantity is the most useful and appropriate input format from the deep learning point of view. We find that the ES is indeed too compressed an input compared with the MCM or EE. The MCM and EE carry enough information not only to recognize the topological phase transitions but also to distinguish phases of matter with different $U(1)$ gauges, whereas the ES can only locate the phase transitions and cannot distinguish trivial phases with different gauges.
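The abstract does not specify the network, so the following PyTorch sketch is only a plausible version of the pipeline described: an N x N correlation matrix (MCM or BCM) treated as a one-channel image and classified into phases. All layer sizes are hypothetical.

    import torch
    import torch.nn as nn

    class PhaseCNN(nn.Module):
        """Classify phases from an N x N correlation matrix treated as a
        single-channel image (hypothetical architecture)."""
        def __init__(self, n_phases=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4),
                nn.Flatten(),
                nn.Linear(16 * 4 * 4, n_phases),
            )

        def forward(self, x):  # x: (batch, 1, N, N)
            return self.net(x)

    # Usage: logits = PhaseCNN()(torch.randn(32, 1, 64, 64))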
11:50-13:20 | Lunch
13:20-14:10 | Fumin Wang
Title: Better Intelligence if and only if Better Compression
Abstract: In Hutter's theory of AGI, he proved that finding the optimal behavior of a rational agent is equivalent to compressing its observations. Essentially, he proved Occam's razor: the simplest answer is usually the correct one. In this talk, we present a compression program that is stronger than zip, 7z, and tar.gz, and show how it can be used to make intelligent decisions. In particular, we apply it to DNA sequences and use the resulting complexity measures to reconstruct the phylogenetic tree of the mammalian class.
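The compressor itself is not shown here, but the standard way compression yields a distance for phylogeny reconstruction is the normalized compression distance of Li et al. Below is a minimal sketch using zlib as a stand-in compressor; a stronger compressor, as advocated in the talk, gives a better approximation.

    import zlib

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance: approximates the (uncomputable)
        information distance by replacing Kolmogorov complexity with the
        length of a real compressor's output."""
        cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
        cxy = len(zlib.compress(x + y))
        return (cxy - min(cx, cy)) / max(cx, cy)

    # Usage on DNA sequences: pairwise NCDs form a distance matrix from
    # which a phylogenetic tree can be built, e.g. by neighbour joining.
    s1 = b"ACGT" * 500
    s2 = b"ACGA" * 500
    print(ncd(s1, s1), ncd(s1, s2))  # near 0 for identical inputs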
14:10-15:00 | Chun-Wei Pao
Title: Machine Learning and Atomistic Simulation of Materials
Abstract: Atomistic simulations such as first-principles calculations and classical molecular dynamics simulations have been widely employed to study the properties of materials, providing insights that are difficult to obtain from experiments, and to predict the properties of novel materials. However, first-principles calculations suffer from limitations on simulation size, whereas the accuracy of classical force fields limits the applicability of classical molecular dynamics simulations. With the advances of machine learning, models trained on a pool of material data can predict material properties in far less computational time than first-principles calculations. In this talk, we will present our recent progress in training neural network potentials for a number of materials with multiple principal components, such as complex perovskites and high-entropy alloys. We will demonstrate that the neural network potential offers a computational speedup of roughly one hundred thousand times over density functional theory calculations using VASP, while retaining accuracy in predicted energies, enabling large-scale atomistic simulations.
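As background on how such potentials work, here is a generic Behler-Parrinello-style sketch in PyTorch, not the speaker's actual model: the total energy is a sum of per-atom energies, each predicted by a small network from a descriptor of that atom's local environment. Layer sizes and the descriptor dimension are placeholders.

    import torch
    import torch.nn as nn

    class AtomicEnergyNet(nn.Module):
        """Per-atom energy network of a Behler-Parrinello-style potential
        (generic sketch; all sizes are placeholders)."""
        def __init__(self, n_descriptors=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_descriptors, 64), nn.Tanh(),
                nn.Linear(64, 64), nn.Tanh(),
                nn.Linear(64, 1),
            )

        def forward(self, g):         # g: (n_atoms, n_descriptors)
            return self.net(g).sum()  # total energy = sum of atomic energies

    # Training fits predicted energies (and forces, obtained by autograd
    # with respect to atomic positions) to DFT reference data; evaluating
    # the fitted network is then orders of magnitude cheaper than DFT.
    energy = AtomicEnergyNet()(torch.randn(100, 32))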
15:00-15:20 | Break
15:20-16:10 | Chun-Chung Chen
Title: Artificial neural networks in statistical modeling of dynamical systems
Abstract: Dynamical data from neural recordings of brains and from simulations of spiking neuron networks can be studied with statistical models such as the pairwise-coupled spin-glass model. The statistical properties of the model systems allow characterization of system states in terms of thermodynamic phases and prediction of the neural dynamics using statistical mechanics. However, common approaches to finding the parameters that map the measurements to the models rely on iterative Boltzmann learning (BL), which can be computationally costly and scales poorly with system size. To speed up the process for broader application of the approach, various artificial neural networks, including convolutional neural networks (CNNs) and restricted Boltzmann machines, are used in conjunction with BL. The speedup and range of applicability are assessed by applying the methods to randomly generated spin-glass systems. Treating the covariance matrix of the data as an image, a properly trained CNN works surprisingly well in several of the cases considered. By using the results from iterations of BL as training data for the CNN, and using the CNN predictions as initial guesses for BL, we can both alleviate the burden of assembling a training set for the CNN beforehand and speed up the subsequent BL runs with better initial guesses.
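For context, the BL iteration referred to above is the standard moment-matching update for the inverse Ising problem. The sketch below is our paraphrase with a plain Metropolis sampler (all names and parameters are ours): couplings are nudged toward the data correlations, and a CNN prediction can supply the initial (J, h) in place of zeros, as the abstract describes.

    import numpy as np

    rng = np.random.default_rng(0)

    def model_moments(J, h, n_sweeps=2000):
        """Estimate <s_i> and <s_i s_j> of a pairwise spin-glass model by
        Metropolis sampling (J symmetric with zero diagonal)."""
        n = len(h)
        s = rng.choice([-1, 1], size=n)
        m = np.zeros(n)
        C = np.zeros((n, n))
        for _ in range(n_sweeps):
            for i in range(n):
                dE = 2 * s[i] * (J[i] @ s + h[i])  # cost of flipping s_i
                if dE < 0 or rng.random() < np.exp(-dE):
                    s[i] = -s[i]
            m += s
            C += np.outer(s, s)
        return m / n_sweeps, C / n_sweeps

    def boltzmann_learning(m_data, C_data, eta=0.1, n_iter=50):
        """Moment matching: adjust (J, h) until the model moments match
        the data moments.  A CNN prediction can replace the zero start."""
        n = len(m_data)
        J, h = np.zeros((n, n)), np.zeros(n)
        for _ in range(n_iter):
            m, C = model_moments(J, h)
            J += eta * (C_data - C)
            np.fill_diagonal(J, 0)
            h += eta * (m_data - m)
        return J, h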
16:10-17:00 | Daw-Wei Wang
Title: Deep Learning in the Large and Small Universes --- Identify Young Stellar Objects in Astronomy and Neuronal Polarity in Drosophila Brain
Abstract: In this talk, I will briefly introduce two of our recent research subjects using Artificial Neural Networks (ANNs) in astronomy and neuroscience. One is the search for Young Stellar Objects (YSOs) from the Spectral Energy Distribution (SED) alone. We show that the most important features, and the underlying physical mechanism, of a YSO appear in the long-wavelength regime, where the observational errors are also the largest and hence were not fully appreciated before. The other is identifying the polarity of neurons in the Drosophila brain. Our machine learning model performs much better than previous works, giving highly accurate predictions even for complex neurons. This paves the way to understanding the neuronal signal flows inside the brain. From these two subjects, we show that ANNs can not only give accurate predictions based on big data without an a priori theory, but can also provide deeper insights for understanding our universe at different scales.
17:00-17:10 | Closing