JavaScript Library: Keras.js
Found this recently.
A JavaScript deep learning library. The image recognition demo looks pretty good for pure JavaScript.
https://transcranial.github.io/keras-js/#/
Related for Apache Spark BigDL:
https://bigdl-project.github.io/master/
Re: JavaScript Library: Keras.js
Interesting paper on neural networks and "Current Molecular/Electron Bonding (CMEB)" theory. It would be cool to throw their molecules/pics into Keras with Nevyn's MBL rendering engine in an x-y-z view and compare their various models' accuracy/specificity matrices against Mathis. It may be very hard to build up a predictable "spectrum" via incremental experimentation without Mathis' papers/insights. Still, this does look like a new long-term "research" gravy train: the effort to keep building up a classic "Molecular Bonding Model" that can reasonably predict outcomes. Looks like they have already hit a few speed bumps. As researchers build more realistic 3-D virtual molecular rendering engines, they will need to compile a lot of examples to match up with Mathis' Charge Field theory. There may arise an opportunity to map their pics/findings back to Mathis' atoms/molecules and Nevyn's MBL. Basically, prove a Mathis-style model first (and cause a quantum revolution! ...jk...).
-----------
From face recognition to phase recognition: Neural network captures atomic-scale rearrangements
May 31, 2018, Brookhaven National Laboratory
From face recognition to phase recognition
Deciphering the changes in the 3-D structure of iron (center) upon heating, from top, clockwise: The in situ x-ray absorption experiment generates an extended x-ray absorption fine structure (EXAFS) spectrum that is fed into a neural network to extract the radial distribution function, unique for each material and atomic arrangement. Credit: Brookhaven National Laboratory
If you want to understand how a material changes from one atomic-level configuration to another, it's not enough to capture snapshots of before-and-after structures. It'd be better to track details of the transition as it happens. Same goes for studying catalysts, materials that speed up chemical reactions by bringing key ingredients together; the crucial action is often triggered by subtle atomic-scale shifts at intermediate stages.
"To understand the structure of these transitional states, we need tools to both measure and identify what happens during the transition," said Anatoly Frenkel, a physicist with a joint appointment at the U.S. Department of Energy's Brookhaven National Laboratory and Stony Brook University.
Frenkel and his collaborators have now developed such a "phase-recognition" tool—or more precisely, a way to extract "hidden" signatures of an unknown structure from measurements made by existing tools. In a paper just published in Physical Review Letters, they describe how they trained a neural network to recognize features in a material's X-ray absorption spectrum that are sensitive to the arrangement of atoms at a very fine scale. The method helped reveal details of the atomic-scale rearrangements iron undergoes during an important but poorly understood phase change.
"This network training is similar to how machine learning is used in facial-recognition technology," Frenkel explained. In that technology, computers analyze thousands of images of faces and learn to recognize key features, or descriptors, and the differences that tell individuals apart. "There is a correlation between some features of the data," Frenkel explained. "In the language of our X-ray data, the correlations exist between the intensity of different regions of the spectra that also have direct relevance to the underlying structure and the corresponding phase."
Network training
To get the neural network ready for "phase recognition"—that is, to be able to recognize the key spectral features—the scientists needed a training set of images.
Janis Timoshenko, a postdoctoral fellow working with Frenkel at Stony Brook and lead author on the paper, tackled that challenge. First, he used molecular dynamics simulations to create 3000 realistic structure models corresponding to different phases of iron and different degrees of disorder.
"In these models, we wanted to account for the dynamic effects, so we define the forces that act between different atoms and we allow the atoms to move around as influenced by these forces," Timoshenko said. Then, using well-established approaches, he used mathematical calculations to derive the X-ray absorption spectra that would be obtained from each of these 3000 structures.
"It's not a problem to simulate a spectrum," Timoshenko said, "it's a problem to understand them in the backwards direction—start with the spectrum to get to the structure—which is why we need the neural network!"
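The "simulate forward, learn the inverse" workflow Timoshenko describes can be sketched in miniature with plain Node.js. The toy "spectrum" function, the network size, and the training settings below are all invented for illustration; they stand in for the paper's physics, which is far more involved:

```javascript
// Toy inverse problem: a known forward model maps a structure parameter t
// to a small "spectrum"; a one-hidden-layer network learns to recover t
// from the spectrum. (Forward model and network are illustrative only.)

// Forward model: parameter t in [0,1] -> 4-point "spectrum"
function forward(t) {
  return [Math.sin(3 * t), Math.cos(2 * t), t * t, Math.exp(-t)];
}

// Training set: 200 (spectrum, parameter) pairs, in the spirit of the
// 3000 simulated structure models in the article
const data = [];
for (let i = 0; i < 200; i++) {
  const t = i / 199;
  data.push({ x: forward(t), y: t });
}

// Network 4 -> 8 -> 1 with tanh hidden units, small random init
const H = 8;
const W1 = Array.from({ length: H }, () =>
  Array.from({ length: 4 }, () => Math.random() * 0.5 - 0.25));
const b1 = new Array(H).fill(0);
const W2 = Array.from({ length: H }, () => Math.random() * 0.5 - 0.25);
let b2 = 0;

function predict(x) {
  const h = W1.map((row, j) =>
    Math.tanh(row.reduce((s, w, k) => s + w * x[k], b1[j])));
  return { h, out: h.reduce((s, v, j) => s + W2[j] * v, b2) };
}

function meanSqError() {
  return data.reduce((s, d) => s + (predict(d.x).out - d.y) ** 2, 0) / data.length;
}

// Plain per-sample gradient descent on squared error
const lr = 0.05;
const before = meanSqError();
for (let epoch = 0; epoch < 2000; epoch++) {
  for (const d of data) {
    const { h, out } = predict(d.x);
    const err = out - d.y; // d(loss)/d(out)
    for (let j = 0; j < H; j++) {
      const dh = err * W2[j] * (1 - h[j] * h[j]); // backprop through tanh
      W2[j] -= lr * err * h[j];
      b1[j] -= lr * dh;
      for (let k = 0; k < 4; k++) W1[j][k] -= lr * dh * d.x[k];
    }
    b2 -= lr * err;
  }
}
const after = meanSqError();
console.log("MSE before:", before.toFixed(4), "after:", after.toFixed(4));
```

The point of the sketch is the direction of the learning: the forward simulation is easy and is run thousands of times to build the training set, and the network is what makes the backwards direction tractable.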
------------
(more at link... https://phys.org/news/2018-05-recognition-phase-neural-network-captures.html)
pubs.acs.org/doi/suppl/10.1021/acs.jpclett.7b02364/suppl_file/jz7b02364_si_001.pdf
------------
Scientists Use Machine Learning to Translate 'Hidden' Information that Reveals Chemistry in Action
New method allows on-the-fly analysis of how catalysts change during reactions, providing crucial information for improving performance
October 10, 2017
https://www.bnl.gov/newsroom/news.php?a=112547
A sketch of the new method that enables fast, "on-the-fly" determination of three-dimensional structure of nanocatalysts. The neural network converts the x-ray absorption spectra into geometric information (such as nanoparticle sizes and shapes) and the structural models are obtained for each spectrum.
UPTON, NY—Chemistry is a complex dance of atoms. Subtle shifts in position and shuffles of electrons break and remake chemical bonds as participants change partners. Catalysts are like molecular matchmakers that make it easier for sometimes-reluctant partners to interact.
Now scientists have a way to capture the details of chemistry choreography as it happens. The method—which relies on computers that have learned to recognize hidden signs of the steps—should help them improve the performance of catalysts to drive reactions toward desired products faster.
The method—developed by an interdisciplinary team of chemists, computational scientists, and physicists at the U.S. Department of Energy’s Brookhaven National Laboratory and Stony Brook University—is described in a new paper published in the Journal of Physical Chemistry Letters. The paper demonstrates how the team used neural networks and machine learning to teach computers to decode previously inaccessible information from x-ray data, and then used that data to decipher 3D nanoscale structures.
Decoding nanoscale structures
“The main challenge in developing catalysts is knowing how they work—so we can design better ones rationally, not by trial-and-error,” said Anatoly Frenkel, leader of the research team who has a joint appointment with Brookhaven Lab’s Chemistry Division and Stony Brook University’s Materials Science Department. “The explanation for how catalysts work is at the level of atoms and very precise measurements of distances between them, which can change as they react. Therefore it is not so important to know the catalysts’ architecture when they are made but more important to follow that as they react.”
...
Trouble is, important reactions—those that create important industrial chemicals such as fertilizers—often take place at high temperatures and under pressure, which complicates measurement techniques. For example, x-rays can reveal some atomic-level structures by causing atoms that absorb their energy to emit electronic waves. As those waves interact with nearby atoms, they reveal their positions in a way that’s similar to how distortions in ripples on the surface of a pond can reveal the presence of rocks. But the ripple pattern gets more complicated and smeared when high heat and pressure introduce disorder into the structure, thus blurring the information the waves can reveal.
(Charge Field flows??? )
So instead of relying on the “ripple pattern” of the x-ray absorption spectrum, Frenkel’s group figured out a way to look into a different part of the spectrum associated with low-energy waves that are less affected by heat and disorder.
“We realized that this part of the x-ray absorption signal contains all the needed information about the environment around the absorbing atoms,” said Janis Timoshenko, a postdoctoral fellow working with Frenkel at Stony Brook and lead author on the paper. “But this information is hidden ‘below the surface’ in the sense that we don’t have an equation to describe it, so it is much harder to interpret. We needed to decode that spectrum but we didn’t have a key.”
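The quantity the network pulls out of the spectrum in the companion article above, a radial distribution function, is at heart just a histogram of pairwise interatomic distances. A minimal sketch, using a made-up eight-atom cubic cluster (positions and bin width are invented for illustration):

```javascript
// Histogram of pairwise distances for a toy atomic cluster: the discrete
// cousin of the radial distribution function mentioned in the figure
// caption above. Positions are corners of a 2.5 A cube (illustrative).

const atoms = [
  [0, 0, 0], [2.5, 0, 0], [0, 2.5, 0], [0, 0, 2.5],
  [2.5, 2.5, 0], [2.5, 0, 2.5], [0, 2.5, 2.5], [2.5, 2.5, 2.5],
];

const binWidth = 0.5, maxR = 6; // 0.5 A bins out to 6 A
const hist = new Array(Math.ceil(maxR / binWidth)).fill(0);

// All unique pairs: C(8,2) = 28 distances
for (let i = 0; i < atoms.length; i++) {
  for (let j = i + 1; j < atoms.length; j++) {
    const d = Math.hypot(
      atoms[i][0] - atoms[j][0],
      atoms[i][1] - atoms[j][1],
      atoms[i][2] - atoms[j][2]
    );
    const bin = Math.floor(d / binWidth);
    if (bin < hist.length) hist[bin]++;
  }
}

hist.forEach((count, b) =>
  console.log(`${(b * binWidth).toFixed(1)}-${((b + 1) * binWidth).toFixed(1)} A: ${count}`)
);
```

For the cube, the three peaks land where geometry says they should: 12 edges at 2.5 A, 12 face diagonals near 3.54 A, and 4 body diagonals near 4.33 A. A real EXAFS-derived distribution is a smooth, thermally broadened version of exactly this kind of peak structure.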
-------
Last edited by Cr6 on Sat Jun 02, 2018 2:55 am; edited 1 time in total
Re: JavaScript Library: Keras.js
Machine Learning - American Chemical Society
https://cdn-pubs.acs.org/doi/pdfplus/10.1021/acs.jpclett.8b00009
https://pubs.acs.org/doi/pdf/10.1021/acs.jpcc.8b00036
Cite This: J. Phys. Chem. Lett. 2018, 9, 569−569
The recent complete rout of top Go players by the self-taught deep-learning neural network AlphaGo has taken the world by storm. The neural network is one example of a larger class of machine learning algorithms and corresponding conceptual models that have penetrated many fields, including our own of physical chemistry. In this Virtual Issue, we collect together 25 examples of machine learning applications that have appeared in The Journal of Physical Chemistry in 2016 and 2017. The problems addressed and approaches used are diverse, and in an effort to highlight common themes, we have organized these 25 into five separate classes. Though this partitioning is not unique (the optimal partitioning is a machine learning problem in its own right!), we hope it is helpful, especially to those new to the area.
The first set of eight papers is representative of models that map some readily observable descriptor or descriptors of a material to a physical property of interest, in the spirit of the quantitative structure-property relationships (QSPR) that have long been of interest. For example, Yao et al. (http://dx.doi.org/10.1021/acs.jpclett.7b01072) report a model in which the bonds in a molecule (the human-inferable descriptors) are mapped to a total energy (the property), and Jinnouchi et al. (http://pubs.acs.org/doi/10.1021/acs.jpclett.7b02010) describe an ambitious approach to relate particle size and composition to catalytic activity. In the second set of two papers, an optimal material composition is identified by relating a feature descriptor, or fingerprint, to a property of interest. For example, Kim et al. (http://dx.doi.org/10.1021/acs.jpcc.6b05068) relate band gap and phonon frequency descriptors to dielectric breakdown strength and, by searching through these descriptors, identify novel perovskite compositions with high breakdown strength.
In a third (and highly popular in this journal!) class, the potential energy surface of a system is represented in terms of fingerprints of the local environment around each atom. Representative of this diverse class is the work of Boes et al. (https://doi.org/10.1021/acs.jpcc.6b12752), who develop a neural network representation of the AuPd alloy and use it to predict composition- and temperature-dependent surface segregation through Monte Carlo models. In the same spirit but a very different context, Kolb et al. (http://dx.doi.org/10.1021/acs.jpca.7b01182) describe a Gaussian process approach for representing the potential energy surface useful for reactive scattering calculations, and Botu et al. make a case for representing forces rather than energies with neural nets. In the fourth class, machine learning techniques are used to tease out information from either experimental or computational data. For example, Timoshenko et al. (http://pubs.acs.org/doi/10.1021/acs.jpclett.7b02364) use neural network analysis to relate observed X-ray absorption near-edge spectra (XANES) to nanoparticle structure and composition. Lastly are examples of machine learning tools "in the loop" to accelerate the development of computational models. Representative of this class is the work of Ulissi et al. (https://doi.org/10.1021/acs.jpclett.6b01254), who use a machine learning approach to reduce the number of structures needed to be computed to arrive at a DFT-based free energy diagram.
This list and partitioning are by no means complete or unique. However, we hope this selection provides a representative snapshot of the field at this point. We expect machine learning and related artificial intelligence algorithms to play an increasingly prominent role in physical chemistry research and in the pages of The Journal of Physical Chemistry.
2018 American Chemical Society