
Artificial Neural Network-Based Approach for Surface Energy Prediction

Written By

Fuming Lai and Shengfu Tong

Submitted: 13 May 2024 Reviewed: 25 June 2024 Published: 22 July 2024

DOI: 10.5772/intechopen.1006093


From the Edited Volume

Recent Advances in Neuromorphic Computing [Working Title]

Dr. Kang Jun Bai and Prof. Yang (Cindy) Yi


Abstract

This chapter explores the use of artificial neural network (ANN) models to predict surface energy values. ANN models are a class of machine learning (ML) algorithms inspired by the way the human brain processes information. The chapter covers the theoretical foundations of ANN models and their application to modeling surface energy, a crucial parameter in many scientific and industrial processes. By training ANN models on relevant datasets, researchers can develop predictive models capable of estimating surface energy values with high accuracy. The chapter discusses the methodology, challenges, and potential benefits of an ANN-based approach to surface energy prediction, offering insights into the intersection of artificial intelligence and materials science.

Keywords

  • ANN models
  • surface energy
  • predictive model
  • machine learning
  • materials science

1. Introduction

The surface energy of a crystal, the excess energy of the surface relative to the bulk arising from unbalanced interatomic forces, is a critical concept in materials science with wide-ranging applications [1, 2, 3, 4]. It is fundamental to nucleation and crystal growth, influences the adhesion and wetting properties of materials, and is directly related to surface tension. Surface energy is also essential in the design of catalysts, where it affects reaction rates by altering the energy landscape at reaction sites [5, 6]. Furthermore, it plays a role in enhancing the mechanical properties of materials through grain boundary formation and is crucial in the synthesis of nanoparticles with specific properties for various applications [7]. Surface energy calculations therefore underpin a multitude of applications across scientific disciplines, providing critical insights into the behavior and properties of materials at the atomic level [8, 9]. Calculating the surface energy of a crystal is a complex and important task, as it requires precise assessment of the unbalanced interatomic forces at the crystal surface [10, 11, 12]. The computation must account not only for atomic distances, temperature, and pressure but also for the diversity of surface structures, such as terraces, kinks, and vacancies. In addition, surface atoms may reconstruct into arrangements distinct from the bulk phase, and adsorbed atoms or molecules can further modify the surface energy.

The evolution of surface energy calculation methods has mirrored the depth and breadth of research in this field, transitioning from early semiempirical models to contemporary first-principles calculations [13]. Each method has found its niche, and its limitations, within the scope of material systems it can accurately represent. In the early stages of surface energy studies, researchers relied heavily on experimental data coupled with semiempirical models to estimate surface properties. The Embedded Atom Method (EAM), developed by Daw and Baskes and later extended by Baskes into the modified EAM [14], was a significant advancement that provided a quantum-mechanically motivated framework for predicting the properties of metals, including surface energy. This method was particularly successful in capturing the many-body effects inherent in metallic bonding. In parallel with EAM, the Empirical Electron Surface Model (EESM) was used to calculate surface energies by considering the electron density associated with dangling bonds at the surface [15]. These semiempirical approaches were instrumental in laying the foundational understanding of surface phenomena in metals and alloys. The transformative shift in surface energy calculations, however, came with the advent of Density Functional Theory (DFT), which emerged as a powerful tool for first-principles calculations [16]. DFT, as implemented in computational packages like VASP [17], allowed the accurate determination of electronic structures and, consequently, surface energies. The method’s ability to handle complex interactions in multicomponent systems and its versatility across a wide range of materials made it the de facto standard for surface energy calculations. DFT has been used extensively to study the surface energy of metals, semiconductors, and insulators. For instance, Lee et al. [18] and Vitos et al. [19] used DFT to establish a comprehensive database of surface energies for low-index surfaces of 60 metals, which has since been a valuable resource for surface science research. Despite its successes, DFT faces challenges, particularly for large systems or when high accuracy is required. To address these issues, researchers have developed approximations and corrections to the exchange-correlation functional, such as the Generalized Gradient Approximation (GGA), which improves the accuracy of DFT calculations [20]. Moreover, the computational cost of DFT has motivated alternative methods, such as the GW approximation [21] for quasiparticle energies and time-dependent DFT (TDDFT) [22, 23] for excited states, which provide information complementary to standard DFT calculations. The development of high-throughput computational methods has further expanded the scope of surface energy calculations. These approaches, such as the one developed by Yalcin and Wolloch [24], allow rapid assessment of surface energies across a multitude of materials and crystal orientations, facilitating the discovery of new materials with desirable surface properties.
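
In practice, first-principles surface energies are commonly extracted with the slab model, γ = (E_slab − N·E_bulk)/(2A), where a slab exposing two free surfaces is compared against N bulk formula units. The short Python sketch below shows this bookkeeping; the function and all numerical values are hypothetical placeholders, not results from the studies cited above.

```python
def surface_energy(e_slab, n_bulk_units, e_bulk_per_unit, area):
    """Slab-model estimate: gamma = (E_slab - N * E_bulk) / (2 * A).

    The factor of 2 accounts for the two free surfaces of a symmetric slab.
    Energies in eV and area in Angstrom^2 give gamma in eV/Angstrom^2.
    """
    return (e_slab - n_bulk_units * e_bulk_per_unit) / (2.0 * area)

# Hypothetical numbers for illustration only:
gamma = surface_energy(e_slab=-340.12, n_bulk_units=8,
                       e_bulk_per_unit=-42.70, area=25.4)
print(f"gamma = {gamma:.4f} eV/A^2")  # multiply by 16.02 to convert to J/m^2
```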

The development from semiempirical models to first-principles calculations has been marked by milestones that have progressively enhanced our ability to predict and understand surface energies [25, 26]. As computational resources grow and theoretical models are refined, the accuracy and predictive power of surface energy calculations will continue to improve, enabling the design of materials with tailored surface properties for a wide array of applications.

Accurate prediction of surface energy is crucial for understanding and designing new materials, yet calculating it involves challenges across theoretical, computational, and experimental domains. Theoretical models often simplify crystal structures and assume ideal conditions, potentially compromising accuracy. Computational methods like DFT can be computationally intensive and rely on approximations that may not capture complex interactions accurately. Experimental techniques such as contact angle measurements are sensitive to sample preparation and environmental conditions, which can introduce variability and affect measurement precision [27, 28]. Surface reconstruction alters surface atomic arrangements, complicating accurate energy determination. Temperature and pressure variations influence surface energy and require careful consideration in calculations. Nanoscale effects can lead to size-dependent deviations from bulk values, challenging accurate prediction. Material-specific considerations and limited data availability for some materials further constrain the reliability of calculated surface energies, underscoring the need for rigorous validation and complementary approaches.

In addition, surface energy values obtained using DFT and similar computational methods are typically derived under idealized vacuum conditions [29, 30]. Such models neglect the molecules, solvents, or impurities present in real environments that can influence surface energy, so computed values may deviate from those measured experimentally. Experimental measurements of surface energy are often conducted in liquids or gases, where molecular interactions and environmental effects significantly affect surface behavior. Theoretical calculations therefore typically require experimental validation and correction to accurately reflect surface energy behavior under realistic conditions.

Recently, ANN-based approaches have garnered attention for their lower computational cost and satisfactory predictive performance. For surface energy prediction, ANN models typically take specific characteristics of materials, such as atomic arrangement, chemical composition, and crystal structure, as inputs [31, 32] and, through training, learn the relationships between these features and surface energy. ANN models can handle complex nonlinear relationships and provide highly accurate surface energy predictions. They can also process large volumes of data and deliver rapid predictions, accelerating the discovery and design of materials.

In this chapter, we initially present an overview of the development and application of ANN models, with a particular focus on their advancements in predicting surface energies. Subsequently, we elaborate on the construction process of ANN predictive models and the assembly of datasets, which are crucial steps to ensure the models’ accuracy in surface energy predictions. We then evaluate and compare the performance of these ANN models to ascertain their precision and reliability in forecasting surface energies. The discussion also encompasses the limitations of current models and explores the potential impact of these limitations on the accuracy of surface energy predictions. Finally, we provide a prospective outlook on future research directions in surface energy prediction models. This includes a discussion on enhancing the predictive accuracy through algorithmic improvements and the expansion of dataset diversity, as well as the potential positive implications of these research efforts on material design and application.

2. Development and application of ANN models

The ANN models, drawing inspiration from biological neural networks, stand as a cornerstone of ML, offering a robust framework for diverse computational tasks [33]. As depicted in Figure 1, the orange neurons constitute the first layer of the network, the input layer, through which input data are propagated. The final layer, shown in blue, is the output layer. The layers between the input and output layers are termed hidden layers; in this example there is a single hidden layer, depicted in red. From fundamental Feedforward Neural Networks (FNN) [34, 35, 36, 37] to intricate architectures like Recurrent Neural Networks (RNN) [38, 39, 40], Convolutional Neural Networks (CNN) [41, 42], Generative Adversarial Networks (GAN) [43, 44], and Autoencoders [45], ANN models offer a spectrum of capabilities tailored to specific data types and tasks. As materials science increasingly relies on predictive modeling to understand and anticipate material properties, the integration of neural network methodologies has become pivotal. Particularly in domains such as predicting surface energy and adsorption energy, the rapid evolution of ML and AI technologies has enabled neural network models to offer precise insights and forecasts.

Figure 1.

ANN architecture.
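
To make the layer structure of Figure 1 concrete, a forward pass through such a network can be sketched in a few lines of Python; the layer sizes, random weights, and tanh activation are illustrative assumptions rather than details taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer, as in Figure 1: input -> hidden -> output.
n_in, n_hidden, n_out = 3, 4, 2  # sizes chosen only for illustration
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)

def forward(x):
    h = np.tanh(W1 @ x + b1)  # hidden layer with tanh activation
    return W2 @ h + b2        # linear output layer

print(forward(np.array([0.2, 0.5, 0.3])))
```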

Understanding and predicting material properties are foundational pursuits within materials science, where the application of ML and artificial intelligence (AI) technologies, particularly neural network models, has emerged as transformative. ANN models, with their adaptability and capacity to discern intricate patterns within data, have become indispensable tools in this field. From elucidating fundamental relationships to enabling precise predictions of properties like surface energy and adsorption energy, ANN models have revolutionized the landscape of material characterization and design. This synergy between advanced computational techniques and materials science underscores the imperative of harnessing neural network methodologies for accelerating discoveries and innovations in material research. The core strength of neural network models lies in their ability to learn from vast amounts of data and extract useful features to predict material properties. Graph neural networks (GNNs) [46], for instance, are particularly suited for addressing materials science problems due to their capacity to process graph-structured data, which aligns well with the atomic structure representation of materials. GNNs can effectively predict electronic properties, surface chemical properties, and catalytic performance of materials by learning the graph representation of atomic structures (Figure 2) [47].

Figure 2.

Scheme of the GNN testing framework and workflow. Adapted with permission from Ref. [47]. Copyright 2021, Springer Nature Publishing Group.
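
As a toy illustration of how a GNN consumes graph-structured atomic data, the sketch below performs two rounds of neighborhood message passing over a small adjacency matrix and pools the node states into a graph-level readout; the graph, feature dimensions, and weights are arbitrary stand-ins, not the architectures benchmarked in Ref. [47].

```python
import numpy as np

# Toy "atomic graph": 4 atoms with adjacency matrix A and per-atom features X.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 8))        # 8 features per atom
W = rng.normal(size=(8, 8))        # shared message-passing weights
w_out = rng.normal(size=8)         # graph-level readout weights

def message_passing_step(A, H, W):
    # Each node sums its neighbors' features, then applies a nonlinear transform.
    return np.tanh((A @ H) @ W)

H = message_passing_step(A, X, W)
H = message_passing_step(A, H, W)  # second round widens each node's context
graph_property = H.mean(axis=0) @ w_out  # mean-pool nodes, then linear readout
print(graph_property)
```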

In the domain of surface energy prediction, neural network models have achieved notable advances. Surface energy, a pivotal parameter in characterizing material surfaces, influences the surface structure, relaxation, roughening, reconstruction, and the equilibrium shape of crystals. Traditional experimental techniques for measuring surface energy are subject to various limitations, including complex experimental setups, limited measurement precision, and high cost. ML models, particularly Gaussian process regression (GPR), offer an efficient and cost-effective alternative for surface energy estimation. Zhang and Xu [48] developed a GPR model that predicts the surface energy of metals from physical parameters such as cohesive energy, lattice parameters, bulk modulus, and vacancy formation energy, enabling rapid estimation of surface energy for a variety of metals. Neural network models are equally instrumental for adsorption energy prediction. Adsorption energy is a critical metric for assessing catalyst activity and is vital for the discovery of novel catalysts. Li et al. [49] introduced a local environment pooling (LEPool) approach to enhance the predictive performance of GNNs for adsorption energy. By combining a neural message passing network with edge updates (NMPEU) and DimeNet++, they achieved high accuracy in predicting CO and H adsorption energies on transition metal catalyst surfaces.
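
A descriptor-based GPR model of the kind developed by Zhang and Xu can be sketched with scikit-learn as follows; the four descriptor columns mirror those named above, but every numerical value is a placeholder for illustration, not data from Ref. [48].

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Columns: cohesive energy (eV), lattice parameter (A), bulk modulus (GPa),
# vacancy formation energy (eV). All rows are illustrative placeholders.
X = np.array([[3.49, 4.09, 100.0, 1.10],
              [3.39, 4.08, 180.0, 0.93],
              [4.28, 3.61, 140.0, 1.28],
              [5.84, 3.92, 230.0, 1.35]])
y = np.array([1.25, 1.50, 1.79, 2.48])  # surface energies in J/m^2 (illustrative)

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X.shape[1]))
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gpr.predict(X[:1], return_std=True)  # prediction with uncertainty
print(mean, std)
```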

These studies demonstrate the effectiveness of ANN models in predicting material properties, yet they also highlight some limitations. For instance, GNN models typically require a substantial amount of training data, and their performance may decline with smaller datasets. Additionally, the generalization ability of these models is a critical issue that needs further research to improve their performance across different datasets. To overcome these challenges, researchers are exploring various methods to refine neural network models. One approach involves hyperparameter optimization, adjusting model parameters to enhance performance on specific datasets. Another is transfer learning, which leverages models trained on one task to improve learning efficiency on a new task. Researchers are also investigating ways to integrate neural network models with physical principles and domain knowledge to enhance interpretability and generalization. The integration of experimental and computational approaches is likewise key to advancing material property prediction: combining experimental data with computational results can lead to more accurate property predictions and guide experimental design. For example, Lai and colleagues [31, 32] used ANN methods in conjunction with the Wulff construction method to predict the surface energies of TiO2 and Cu2O crystals and validated the models’ accuracy with experimental data.
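
To illustrate the transfer-learning idea mentioned above, the PyTorch sketch below freezes the feature-extraction layers of a hypothetical pretrained surface-energy network and fine-tunes only its final layer on a small dataset from a new material system; the architecture, data, and training loop are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical network pretrained on a large surface-energy dataset;
# in practice its weights would be loaded with load_state_dict().
pretrained = nn.Sequential(
    nn.Linear(8, 50), nn.Tanh(),
    nn.Linear(50, 30), nn.Tanh(),
    nn.Linear(30, 1),
)

# Freeze the feature-extraction layers; only the final layer stays trainable.
for param in pretrained[:-1].parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in pretrained.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.MSELoss()

# Fine-tune on a small dataset from the new system (random stand-in here).
x_new, y_new = torch.randn(32, 8), torch.randn(32, 1)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(pretrained(x_new), y_new)
    loss.backward()
    optimizer.step()
```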

The ANN used for machine learning-assisted facet design comprises an input layer, an output layer, and three hidden layers. To predict the surface energies of TiO2, the input layer consists of three neurons corresponding to normalized elemental fractions summing to one. The three hidden layers contain four neurons each, providing sufficient depth for feature extraction and representation, and the output layer, also comprising three neurons, produces normalized surface energies [31]. For predicting the surface energies of Cu2O crystals, the ANN model features three hidden layers of 50, 30, and 20 neurons, respectively (Figure 3). This architecture enables comprehensive data processing and abstraction to effectively predict and analyze surface energies [32].

Figure 3.

(a) Morphological data generation using Wulff Construction for Cu2O with {100} and {111} facets; (b) Feature vectors and instance classes of training dataset; and (c) ANN model training process and configuration for Cu2O crystals. Adapted with permission from Ref. [32], Copyright 2023, MDPI Publications.
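
For concreteness, the two architectures described above can be written out in PyTorch as follows. The tanh activations and the Cu2O input/output dimensions are assumptions (Ref. [31] reports a hyperbolic tangent sigmoid transfer function; Ref. [32] details the exact Cu2O features), so this is a sketch rather than the authors’ implementations.

```python
import torch.nn as nn

# TiO2 model as described above: three inputs (normalized elemental fractions
# summing to one), three hidden layers of four neurons, three outputs.
tio2_net = nn.Sequential(
    nn.Linear(3, 4), nn.Tanh(),
    nn.Linear(4, 4), nn.Tanh(),
    nn.Linear(4, 4), nn.Tanh(),
    nn.Linear(4, 3),
)

# Cu2O model: hidden layers of 50, 30, and 20 neurons; the input and output
# sizes here are assumed for illustration.
cu2o_net = nn.Sequential(
    nn.Linear(4, 50), nn.Tanh(),
    nn.Linear(50, 30), nn.Tanh(),
    nn.Linear(30, 20), nn.Tanh(),
    nn.Linear(20, 1),
)
```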

3. ANN prediction and dataset construction

The construction of databases is a pivotal step in understanding and predicting the properties of materials. These databases typically encompass a wealth of experimental data and theoretical calculations, serving as the foundation on which ML models train and validate their predictions. Palizhati et al. [8] focused on predicting the surface energy of inorganic crystals, particularly complex intermetallic compounds. Existing predictive methods and datasets were predominantly geared toward monometallic crystals, leaving the surface properties of bimetallic and more intricate surfaces largely unpredictable. To address this, the researchers developed a workflow that integrates high-throughput Density Functional Theory (DFT) calculations with an ML framework to predict the cleavage energies of alloys, calculating the cleavage energy of 3033 intermetallic alloys spanning 36 elements and 47 space groups. The high-throughput nature of this workflow not only seeded a comprehensive database of cleavage energies but also enabled the training of a Crystal Graph Convolutional Neural Network (CGCNN), as shown in Figure 4. The CGCNN model accurately predicted cleavage energies and qualitatively reproduced nanoparticle surface distributions, known as Wulff constructions. This workflow provided quantitative insight into uncharted chemical space, predicting which surfaces are relatively stable and thereby aiding the selection of promising candidates for applications such as catalyst screening and nanomaterial synthesis.

Figure 4.

(a) DFT calculation of cleavage energies, and (b) Predictions via CGCNN models. Adapted with permission from Ref. [8]. Copyright 2019, ACS Publications.

In another study, Zhang and Xu [48] developed a Gaussian Process Regression (GPR) model to predict metal surface energy. The model was built on several physical parameters closely correlated with metal surface energy: cohesive energy, lattice parameters, bulk modulus, and vacancy formation energy. The GPR model exhibited high accuracy and stability, offering a rapid, robust, and cost-effective tool for estimating the surface energy of various metals. The dataset included 43 metals with surface energies ranging from 0.10 to 3.68 J m−2, encompassing face-centered cubic (FCC), body-centered cubic (BCC), and hexagonal close-packed (HCP) structures. The GPR model outperformed previous methods such as Support Vector Regression (SVR) and the Stabilized Jellium Model (SJM), demonstrating improved predictive capabilities. Furthermore, Liu et al. [50] constructed a multiscale, multisource dataset of hydroxyapatite nanoparticles (HANPs) with varying morphologies and sizes. The dataset included experimental data such as TEM/SEM images, XRD/crystallinity, reactive oxygen species (ROS), antitumor effects, and zeta potential, as well as computational results for nanoparticles of up to 9768 atoms calculated using DFT, semiempirical DFTB, and force field methods (Figure 5). The researchers applied ML techniques to explore the correlation between surface energy and geometric features of HANPs and developed a graph convolutional neural network model to predict surface energy more accurately without the need for predetermined features. Image segmentation of experimental TEM images allowed the reconstruction of 3D HANP models, which were then used to simulate XRD patterns and crystallinity values that aligned well with experimental data.

Figure 5.

Datasets of HANPs generated at various theoretical levels: DFT, Density Functional Tight Binding (DFTB), and Consistent Valence Force Field (CVFF). Adapted with permission from Ref. [50]. Copyright 2021, Springer Nature Publishing Group.

Lai et al. [51] undertook a series of precise and systematic steps to construct databases of morphology and surface energy, which are crucial for understanding and predicting the crystalline shapes and surface characteristics of materials. This approach yielded 1075 distinct facets across seven crystallographic systems, enabling the formulation of 2251 potential morphologies and 92,033 diverse facet junctions (Figure 6) [31]. The dataset was further enriched by integrating surface energy and area data for these facets, derived from experimentally observed particle morphologies. To ensure the robustness of the ML model, a comprehensive training set was developed, employing an ANN that was rigorously trained and tested on a randomly distributed dataset comprising 1000 cases. The model’s architecture used a hyperbolic tangent sigmoid transfer function across three hidden layers, with an input layer consisting of normalized elemental fractions. The dataset used in the study is accessible through an open access link, ensuring reproducibility and facilitating further research in the domain.

Figure 6.

The database of facet counts, morphologies, and junctions was constructed using the Wulff construction. Adapted with permission from Ref. [31] Copyright 2021, Wiley Publications.
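
The training protocol can be sketched with scikit-learn’s MLPRegressor, assuming a tanh activation and the randomized 8:2 split described in the text; the synthetic features and targets below merely stand in for the facet dataset of Ref. [31].

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the facet dataset: rows of normalized fractions
# (summing to one) mapped to a smooth target plus noise.
rng = np.random.default_rng(42)
X = rng.dirichlet(np.ones(3), size=1000)
y = 1.0 + X @ np.array([0.4, -0.2, 0.6]) + rng.normal(0, 0.02, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # the 8:2 split used in the study

model = MLPRegressor(hidden_layer_sizes=(4, 4, 4), activation="tanh",
                     max_iter=5000, random_state=0).fit(X_train, y_train)
print("test R2:", model.score(X_test, y_test))
```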

4. Performance evaluation and comparison of ANN models

In the realm of ML, error metrics serve as pivotal instruments for quantifying the predictive performance of models. Here are several commonly used error measurement indicators:

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2\tag{1}$$

$$\mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2}\tag{2}$$

$$\mathrm{MAE}=\frac{1}{n}\sum_{i=1}^{n}\left|y_i-\hat{y}_i\right|\tag{3}$$

$$R^2=1-\frac{\sum_{i=1}^{n}\left(y_i-\hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2}\tag{4}$$

where $y_i$ is the $i$th observed value, $\bar{y}$ is the mean value of all observed values, $\hat{y}_i$ denotes the corresponding predicted value, and $n$ is the total number of observations. These metrics provide a direct measure of accuracy by calculating the discrepancies between a model’s predictions and the actual observed values. The selection of appropriate error metrics is contingent upon the characteristics of the data, the nature of the problem at hand, and the specific requirements of the researchers.
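
Eqs. (1)–(4) translate directly into a few lines of Python (equivalent functions also exist in scikit-learn’s sklearn.metrics module); the example values at the end are illustrative only.

```python
import numpy as np

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)              # Eq. (1)

def rmse(y, y_hat):
    return np.sqrt(mse(y, y_hat))                 # Eq. (2)

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))             # Eq. (3)

def r2(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot                  # Eq. (4)

y_true = np.array([1.25, 1.50, 1.79, 2.48])       # illustrative values
y_pred = np.array([1.22, 1.55, 1.74, 2.52])
print(mae(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
```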

The mean squared error (MSE) and the root mean squared error (RMSE) are widely recognized error metrics. The MSE quantifies the average predictive bias of a model by computing the average of the squared differences between predicted and actual values. It assigns a greater weight to larger prediction errors, causing the MSE to increase significantly when the model exhibits substantial error for certain observations. This sensitivity makes the MSE an extremely responsive error metric, particularly suitable for applications with a low tolerance for large errors. However, a drawback of the MSE is that its units are squared, which can make the magnitude of the error less intuitive to interpret.

To address the MSE’s unit discrepancy, the RMSE was introduced. The RMSE is the square root of the MSE and shares the same units as the original data, making the error magnitude more tangible. The RMSE offers a balanced measure of error, taking into account the magnitude of the error without disproportionately penalizing outliers. The mean absolute error (MAE) is another commonly used metric that assesses a model’s predictive performance by calculating the average of the absolute differences between predictions and actual values. Unlike the MSE and RMSE, the MAE assigns equal weight to all errors, regardless of their magnitude. This characteristic renders the MAE more robust when dealing with data that includes outliers or follows a heavy-tailed distribution. Additionally, the MAE’s intuitive nature and ease of interpretation make it highly favored in practical applications. In the study by Palizhati et al. [8], the MAE was utilized to measure the discrepancy between the cleavage energies predicted by the CGCNN model and those obtained from DFT calculations, achieving a value of 0.0071 eV/Å². This indicates that the CGCNN model is capable of predicting cleavage energies with a high degree of accuracy.

The R-squared (R2) is an additional error measurement that complements the MSE, RMSE, and MAE in assessing the performance of predictive models. R2 measures the proportion of the variance in the dependent variable that is predictable from the independent variables, and it is a common metric for assessing the fit of a model. R2 provides a statistical measure of how close the data are to the fitted regression line, and it is a powerful tool for understanding the goodness of fit of a model. In contrast to other metrics that exclusively emphasize the size of the discrepancies between predicted and actual outcomes, R2 also takes into account the variability of the data and the proportion of variance explained by the model.

In practical applications, the choice of an appropriate error metric requires comprehensive consideration of various factors. While the MSE, RMSE, and MAE provide direct measures of prediction error, the R2 metric offers a nuanced view of the model’s ability to explain the response variable. It is a critical component in the comprehensive evaluation of model performance, especially when the goal is to understand the extent to which a model captures the complexity of the data. In materials science, where accurate prediction of surface energy is paramount, R2 serves as a key indicator of model utility, guiding researchers in the selection and refinement of predictive tools that can inform the design and analysis of novel materials. Lai et al. [31] utilized a dataset comprising 1000 cases, randomly divided into training and testing sets with an 8:2 ratio, to evaluate the performance of the ANN model in predicting the surface energy of TiO2. The study found that the values predicted by the ANN model were highly congruent with those calculated using the inverse Wulff construction method, indicating a high degree of accuracy in the ML model’s ability to predict the surface energy of crystal facet junctions. The model demonstrated a low MAE (below 0.02) and RMSE (below 0.03) and a high R2 (above 0.95), signifying strong predictive performance and a reliable model for estimating the surface energies of TiO2 crystal facets (Figure 7(a–c)). In the prediction of Cu2O surface energy, the ANN model exhibited comparable MAE values between the training and testing sets, ranging from 0.2 to 0.3, indicating a low average absolute error in its predictions. Furthermore, the R2 values for both sets were between 0.96 and 0.98, signifying the model’s strong capacity to explain the variance in the target variable (Figure 7(d–f)). These findings suggest that the ANN model is effective in predicting material properties, offering automated feature learning and nonlinear modeling capabilities, which are crucial for a comprehensive understanding of Cu2O crystals and their surface behavior.

Figure 7.

(a) Evaluation of the ANN model’s performance in predicting TiO2 surface energy using a randomized 8:2 split of a 1000-case dataset; comparison of surface energies for the (b) {001}/{101} and (c) {100}/{101} facet junctions calculated by inverse Wulff construction and predicted by ANN. Adapted with permission from Ref. [31], Copyright 2021, Wiley Publications. Surface energy predictions of the 50-facet polyhedron by the ANN model on both training and test sets: (d) γ111/γ100, (e) γ110/γ100, and (f) γ112/γ100. Adapted with permission from Ref. [32], Copyright 2023, MDPI Publications.

5. Conclusion

In summary, ANN models have shown great potential in predicting material properties, especially in the areas of surface energy and adsorption energy. As ML and AI technologies continue to advance, these models are expected to play an increasingly important role in material design and catalyst discovery. However, challenges remain, and future research directions should aim to address these issues to unlock the full potential of these models in material science. One significant challenge is the reliance on extensive training datasets, which can be costly and time-consuming to compile, particularly for less-studied materials. Additionally, the high computational demand associated with these models often necessitates powerful hardware, which may not be accessible to all research groups. Furthermore, the accuracy of surface energy predictions can be compromised by the models’ generalization capabilities, particularly when extrapolating to materials or conditions beyond the scope of the training data. This limitation is exacerbated by the inherent complexity of surface phenomena, which can involve subtle interactions that are not easily captured by current ML frameworks. The interpretability of these models is also a concern, as the ‘black box’ nature of many neural network architectures makes it challenging to understand the underlying reasons for their predictions, which is crucial for scientific validation and trust.

Future research in surface energy prediction models should aim to address these limitations and enhance the models’ predictive accuracy and reliability. There is a pressing need for the development of more efficient algorithms that can operate with reduced computational resources while maintaining or improving predictive performance. Innovations in transfer learning and few-shot learning could enable models to generalize better to new materials with limited data. Incorporating physical and chemical insights into the model development process may also improve the interpretability and reliability of predictions. Furthermore, the integration of multiscale modeling approaches that combine quantum mechanics, molecular dynamics, and ML could lead to a more nuanced understanding of surface phenomena. Finally, fostering open access to high-quality datasets and computational tools will be essential in democratizing the field and accelerating progress. The goal is to create models that are not only accurate but also transparent, efficient, and broadly applicable to a wide range of materials science problems.

References

  1. Wu Z-Z et al. Identification of Cu(100)/Cu(111) interfaces as superior active sites for CO dimerization during CO2 electroreduction. Journal of the American Chemical Society. 2021;144(1):259-269
  2. Wang Y et al. DNA origami single crystals with Wulff shapes. Nature Communications. 2021;12(1):3011
  3. Su H et al. Surface energy engineering of buried interface for highly stable perovskite solar cells with efficiency over 25%. Advanced Materials. 2024;36(2):2306724
  4. Chen X-G, Huang G-Y, Chen X-M, Li X-Z, Zhou Y-K, Zou Y, et al. Optofluidic crystallithography for directed growth of single-crystalline halide perovskites. Nature Communications. 2024;15:3677
  5. Zhang K et al. Surface energy mediated sulfur vacancy of ZnIn2S4 atomic layers for photocatalytic H2O2 production. Advanced Functional Materials. 2023;33(35):2302964
  6. Li H, Jiao Y, Davey K, Qiao SZ. Data-driven machine learning for understanding surface structures of heterogeneous catalysts. Angewandte Chemie International Edition. 2023;62(9):e202216383
  7. Zhu S, Xie K, Lin Q, Cao R, Qiu F. Experimental determination of surface energy for high-energy surface: A review. Advances in Colloid and Interface Science. 2023;315:102905
  8. Palizhati A, Zhong W, Tran K, Back S, Ulissi ZW. Toward predicting intermetallics surface properties with high-throughput DFT and convolutional neural networks. Journal of Chemical Information and Modeling. 2019;59(11):4742-4749
  9. Shrestha A, Gao X, Hicks JC, Paolucci C. Nanoparticle size effects on phase stability for molybdenum and tungsten carbides. Chemistry of Materials. 2021;33(12):4606-4620
  10. Tran R et al. Surface energies of elemental crystals. Scientific Data. 2016;3:160080
  11. Lai F, Chen Y, Guo H. Inverse Wulff construction for surface energies of coexisting and missing surfaces of crystal particles. Journal of Crystal Growth. 2019;508:1-7
  12. Lai F, Chen Y, Guo H. Surface energies of non-centrosymmetric nanocrystals by the inverse Wulff construction method. Physical Chemistry Chemical Physics. 2019;21(30):16486-16496
  13. Barmparis GD, Lodziana Z, Lopez N, Remediakis IN. Nanoparticle shapes by using Wulff constructions and first-principles calculations. Beilstein Journal of Nanotechnology. 2015;6:361-368
  14. Baskes MI. Modified embedded-atom potentials for cubic materials and impurities. Physical Review B. 1992;46(5):2727-2742
  15. Fu B, Liu W, Li Z. Surface energy calculation of alkali metals with the empirical electron surface model. Materials Chemistry and Physics. 2010;123(2-3):658-665
  16. Kohn W, Becke AD, Parr RG. Density functional theory of electronic structure. Journal of Physical Chemistry. 1996;100:12974-12980
  17. Kresse G, Furthmüller J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Physical Review B. 1996;54(16):11169-11186
  18. Lee JY et al. The surface energy and stress of metals. Surface Science. 2018;674:51-68
  19. Vitos L, Ruban AV, Skriver HL, Kollár J. The surface energy of metals. Surface Science. 1998;411(1-2):186-202
  20. Perdew JP, Burke K, Ernzerhof M. Generalized gradient approximation made simple. Physical Review Letters. 1996;77(18):3865-3868
  21. García-González P, Godby RW. GW self-energy calculations for surfaces and interfaces. Computer Physics Communications. 2001;137(1):108-122
  22. Burke K, Werschnik J, Gross EKU. Time-dependent density functional theory: Past, present, and future. Journal of Chemical Physics. 2005;123(6):062206
  23. Adamo C, Jacquemin D. The calculations of excited-state properties with time-dependent density functional theory. Chemical Society Reviews. 2013;42(3):845-856
  24. Yalcin F, Wolloch M. SurfFlow: High-throughput surface energy calculations for arbitrary crystals. Computational Materials Science. 2024;234:112799
  25. Żenkiewicz M. Methods for the calculation of surface free energy of solids. Journal of Achievements in Materials and Manufacturing Engineering. 2007;24(1):137-145
  26. Domińczuk J, Krawczuk A. Comparison of surface free energy calculation methods. Applied Mechanics and Materials. 2015;791:259-265
  27. Kwok DY, Neumann AW. Contact angle measurement and contact angle interpretation. Advances in Colloid and Interface Science. 1999;81:167-249
  28. Hoyt JJ, Asta M, Karma A. Method for computing the anisotropy of the solid-liquid interfacial free energy. Physical Review Letters. 2001;86(24):5530-5533
  29. Ferrer MM, Gouveia AF, Gracia L, Longo E, Andrés J. A 3D platform for the morphology modulation of materials: First principles calculations on the thermodynamic stability and surface structure of metal oxides: Co3O4, α-Fe2O3, and In2O3. Modelling and Simulation in Materials Science and Engineering. 2016;24(2):025007
  30. Lai F, Luo R, Xie Y, Chen Y, Guo H. Modeling thermodynamic stability of morphologies and surfaces of YF3. Surface Science. 2020;700:121674
  31. Lai F, Sun Z, Saji SE, He Y, Yu X, Zhao H, et al. Machine learning-aided crystal facet rational design with ionic liquid controllable synthesis. Small. 2021;17(12):2100024
  32. Shi Y, Wang M, Zhou Z, Zhao M, Hu Y, Yang J, et al. Artificial neural network-based prediction and morphological evolution of Cu2O crystal surface energy. Coatings. 2023;13(9):1609
  33. Abiodun OI, Jantan A, Omolara AE, Dada KV, Mohamed NA, Arshad H. State-of-the-art in artificial neural network applications: A survey. Heliyon. 2018;4(11):e00938
  34. Eldan R, Shamir O. The power of depth for feedforward neural networks. In: 29th Annual Conference on Learning Theory. Vol. 49. New York: PMLR; 2016. pp. 907-940
  35. Bebis G, Georgiopoulos M. Feed-forward neural networks. IEEE Potentials. 1994;13(4):27-31
  36. Sazlı MH. A brief review of feed-forward neural networks. Communications Faculty of Sciences University of Ankara Series A2-A3 Physical Sciences and Engineering. 2006;50(1):11-17
  37. Svozil D, Kvasnicka V, Pospichal J. Introduction to multi-layer feed-forward neural networks. Chemometrics and Intelligent Laboratory Systems. 1997;39(1):43-62
  38. Schuster M, Paliwal KK. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing. 1997;45(11):2673-2681
  39. Medsker LR, Jain LC, editors. Recurrent Neural Networks: Design and Applications. Boca Raton: CRC Press; 2001
  40. Pascanu R, Mikolov T, Bengio Y. On the difficulty of training recurrent neural networks. In: International Conference on Machine Learning. Atlanta: PMLR; 2013. pp. 1310-1318
  41. Gu J et al. Recent advances in convolutional neural networks. Pattern Recognition. 2018;77:354-377
  42. Li Z, Liu F, Yang W, Peng S, Zhou J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems. 2021;33(12):6999-7019
  43. Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA. Generative adversarial networks: An overview. IEEE Signal Processing Magazine. 2018;35(1):53-65
  44. Goodfellow I et al. Generative adversarial networks. Communications of the ACM. 2020;63(11):139-144
  45. Baldi P. Autoencoders, unsupervised learning, and deep architectures. In: Proceedings of ICML Workshop on Unsupervised and Transfer Learning. Washington: JMLR Workshop and Conference Proceedings; 2012. pp. 37-49
  46. Reiser P et al. Graph neural networks for materials science and chemistry. Communications Materials. 2022;3(1):93
  47. Fung V, Zhang J, Juarez E, Sumpter BG. Benchmarking graph neural networks for materials chemistry. npj Computational Materials. 2021;7(1):84
  48. Zhang Y, Xu X. Machine learning modeling of metal surface energy. Materials Chemistry and Physics. 2021;267:124622
  49. Li X, Chiong R, Hu Z, Page AJ. A graph neural network model with local environment pooling for predicting adsorption energies. Computational and Theoretical Chemistry. 2023;1226:114161
  50. Liu Z et al. Machine learning on properties of multiscale multisource hydroxyapatite nanoparticles datasets with different morphologies and sizes. npj Computational Materials. 2021;7(1):142
  51. Lai F et al. Toward predicting surface energy of rutile TiO2 with machine learning. CrystEngComm. 2023;25(2):199-205
