Open access peer-reviewed chapter - ONLINE FIRST

Digital Twins and the Mining Industry

Written By

Milad Ghahramanieisalou and Javad Sattarvand

Submitted: 21 January 2024 Reviewed: 23 February 2024 Published: 03 June 2024

DOI: 10.5772/intechopen.1005162

From the Edited Volume

Technologies in Mining [Working Title], IntechOpen

Edited by Dr. Abhay Soni


Abstract

Safe and efficient assimilation of new technologies into current operations in the mining industry requires adapting to new challenges. Traditional mining techniques and operations will inevitably be adjusted to incorporate new methods and machinery. Various industries, from manufacturing and engineering to social sciences, have embraced the Digital Twins (DT) methodology to study complex systems. The benefits of DT, which encompasses features like a data hub, simulation and analysis tools, and visualization platforms, are substantial because a DT can replicate its physical counterpart even before that counterpart exists. Once the physical twin is constructed, the DT serves as a digital mirror, aiding in ongoing monitoring, improvement, and control. Digital Twins utilize data-driven and physics-based models and advanced analytics to optimize cost, environmental emissions, and resource usage in developing extraction, production, processing, refining, manufacturing, or recycling technologies. They also enable precise control, predictive maintenance, and identification of potential bottlenecks or inefficiencies through simulation, monitoring, and analysis of every step in the supply chain. Utilizing digital twins expedites the development of novel technologies, ensuring their sustainability and competitiveness. Moreover, digital twins could play a role in diversifying commercially viable and environmentally sustainable sources of critical materials, including their recovery from waste streams.

Keywords

  • mining industry
  • digital twins
  • simulation
  • cost savings
  • sustainability
  • critical materials

1. Introduction

The Digital Twins (DT) methodology has garnered significant attention in recent years, finding applications across various industries. DT offers a profound understanding of complex systems, making it especially relevant in the context of emerging technologies in the mining industry. Assimilating these technologies safely, and accommodating the changes they bring to mining techniques, necessitates adjustments in current operations. Predictive simulations play a pivotal role here: they expose unforeseen scenarios early and shift costly modifications from the operational phase back to the design phase, where they are far cheaper to make.

The value of digital twins in business lies in their capacity to provide a comprehensive understanding of processes, facilitate optimal decision-making, and monitor practical actions. Digital Twins offer insights into both real-time and simulated behavior of physical systems, leading to quicker decision-making. As a unified hub for various data formats and sources, DT integrates valuable information to enhance awareness based on internal and external events necessitating appropriate actions. These factors include people (competitors, customers, legislators, or suppliers & supply chain), equipment breakdowns, process failures, and weather events.

DT is a dynamic concept that relies on data availability, resources, and managerial decisions [1, 2]. Notably, Negri et al. [3] contend that DT represents an evolved version of the Virtual Factory (VF). The key distinction lies in DT’s ability to incorporate realistic synchronization, leading to informed decisions regarding current and future production. The data model for DT must encompass system operations, historical data, behavioral patterns, and current states.

Many researchers view DT as a tool for maintenance, diagnostics, and prognostics (health analysis). The ultimate goal is to create a model that faithfully represents the complex system’s behavior, accounting for external factors, human interactions, and design constraints. Improved situational awareness by DT is based on real-time or near real-time data flow from diverse sources, such as web services, information from IoT devices, or smart device machine-borne data. The fundamental concept of DT involves understanding the current situation, projecting the future, and acting accordingly. Augmented by predictive models and analytics, DT becomes prescriptive, allowing for comprehensive decision support and autonomous actions [4].

Embracing DT for “real-time geometry assurance” in manufacturing product design, Söderberg et al. [5] propose its utility in predicting undesirable outcomes and establishing a tangible link to the physical system. This connection, maintained throughout the lifecycle through sensors and actuators, enables managers to monitor the machine’s current state, adjust settings, and reduce downtimes.

DT can potentially optimize mining operations across four key stages: Design, Pre-operation, Operation, and Rehabilitation. Historical data is harnessed for simulations during the design phase to identify system deficiencies and environmental interactions. In the pre-operation stage, DT aids in planning an effective monitoring scheme, determining data acquisition points, and specifying real-time or near real-time data requirements [6, 7].

The actual benefits of DT become increasingly evident during the operation stage. Continuously updated with precise data, DT projects the physical system’s status and serves as a simulation environment for exploring potential future states of the mine [8, 9].

One of DT’s significant strengths lies in integrating readings from all internal and external sensors and modeling the system and its surrounding environment. This capability proves critical in the final stage, rehabilitation, where environmental concerns in the mining industry are rising. DT provides a comprehensive history of mining operations and a platform for evaluating various rehabilitation scenarios [10, 11].

The emergence of DT hinges on the real-time synchronization of field data facilitated by the Internet of Things (IoT). IoT encompasses electronics, software, sensors, and network connectivity that gather and transform data. These “things,” or devices, are intelligent, embedded, networked systems within production facilities. Developing information models (structures ensuring digital data continuity) is crucial for harnessing IoT to its fullest potential [3, 12].

IoT devices facilitate the transfer of information from the real world to the virtual realm (DT). If employed for operational control, DT can return the results of any analysis and simulations on the received data to human decision-makers or employ AI to issue control orders [13, 14, 15, 16].
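The loop described above (field readings flowing from IoT devices into the twin, and analysis results flowing back to decision-makers) can be sketched in a few lines of Python. This is a minimal illustration rather than a real DT platform; the class, its field names, and the 5 mm displacement threshold are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    state: dict = field(default_factory=dict)    # latest mirrored sensor values
    history: list = field(default_factory=list)  # retained for later analysis

    def ingest(self, reading: dict) -> None:
        """Synchronize the twin with a field reading from an IoT device."""
        self.state.update(reading)
        self.history.append(dict(reading))

    def advise(self) -> str:
        """Return a control suggestion for the human decision-maker."""
        if self.state.get("slope_displacement_mm", 0.0) > 5.0:
            return "alert: displacement above threshold"
        return "nominal"

twin = DigitalTwin()
twin.ingest({"slope_displacement_mm": 1.2})
twin.ingest({"slope_displacement_mm": 6.8})
print(twin.advise())  # the twin flags the anomalous second reading
```

In a real deployment, `ingest` would be fed by a message broker, and `advise` would be replaced by the analytics and AI models discussed above.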

The business impact of DT spans increased revenue, reduced costs, improved customer and employee experience, compliance with rules, and risk reduction. DT acts as a catalyst for digitalization projects, initially aiming to improve efficiency but later evolving into monetizing new digital assets. From a societal standpoint, DT contributes to cost and time savings, job creation, new skills development, reduced traffic congestion, and a safer working environment. Environmentally, it leads to lower CO2 emissions and improved monitoring of wastes and tailings [4].

DT also changes business processes and technological advancements, with initiatives like Asset Administrative Shell (AAS) and Digital Twins Definition Language (DTDL) focusing on DT’s technical definition and instantiation [4]. These are essential initiatives since the existing literature offers various (sometimes contradictory) definitions of digital twins. DT’s definition and purpose depend significantly on the specific applications and vary from case to case. However, after thoroughly examining the existing academic and industrial documents, a few DT definitions were found to best apply to the mining industry (Table 1).

Definition | Ref.
Digital twins (Product Lifecycle Management (PLM), Mirrored Spaces Models, and Information Mirroring Models) are defined as a set of virtual information constructs that fully describe a potential or actual physical manufactured product from the micro (atomic level) to the macro (geometrical level). | [17]
An integrated multi-physics, multi-scale, probabilistic simulation of a vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its flying twin. It is ultra-realistic and may consider one or more critical and interdependent vehicle systems. | [3]
A coupled model of the actual machine that operates in the cloud platform and simulates the health condition with integrated knowledge from both data-driven analytical algorithms and other available physical knowledge. | [18]
Virtual substitutes of real-world objects consisting of virtual representations and communication capabilities make up smart objects acting as intelligent nodes inside the Internet of Things and services. | [19]
The simulation of the physical object itself to predict the future states of the system. | [20]
An integrated multi-physics, multi-scale, probabilistic simulation of an as-built system, enabled by Digital Thread, that uses the best available models, sensor information, and input data to mirror and predict activities/performance over the life of its corresponding physical twin. | [21]
Digital twins are virtual representations of physical assets enabled through data and simulators for real-time prediction, optimization, monitoring, controlling, and improved decision-making. | [22]

Table 1.

DT definitions in the literature that suit the mining industry.

However one decides to define it, the most crucial key to the DT concept is a synergistic multi-way coupling between the physical asset, data collection, computational models, and the decision-making process [23, 24, 25, 26].

According to M. Grieves & Vickers [17], DT could be used in all of the lifecycle phases of the physical asset. To realize the evolving nature of the DT model and its full utilization, the DT must be created in the design phase, used in the production of its physical twin, provide support during its operation, and store all information through the disposal stage [27].

For this purpose, the terms DTP (DT Prototype) and DTI (DT Instance) are defined to detail DT’s application in various sections of lifecycle management. Like any other prototype, DTP is used in the design phase (no physical asset exists) as a testing platform to ensure that the proposed design meets all the requirements on tolerance limits and safety features while analyzing it in different scenarios [17].

During the operation stage, any physical system shows “predicted and unpredicted” behaviors; some of them are “desirable,” and others are “undesirable.” DTP could help point out the UUs (“unpredicted undesirables”) in the design process, enabling modifications to the system. Also, the general types of these UU behaviors could be identified, which helps incorporate protective measures into the system. Unlike physical prototypes, DTP failures are not costly and are not a “one and done” situation [17, 28].

On the other hand, a DTI (DT Instance) models its physical twin’s as-built condition throughout its lifecycle (from production to operation and disposal). The aim is to monitor the state of the product for diagnostic purposes and provide predictive insights into a range of possible future states. At this stage, the DT also continuously evaluates the results of DTP simulations and DTI predictions to evolve itself [17, 29].

As M. Grieves & Vickers [17] pointed out, the final disposal phase has not received as much attention as it should. Systems may retire, but their information can be retained and reused later on a similar project at minimal cost. In terms of sustainability, DT offers a view of the entire history of the system so that proper disposal can be planned without harming the environment.


2. Digital twins’ potential within the mining industry

Historically, the mining industry has always collected large amounts of data; however, there is a significant lack of effort and resources to analyze them. One of the critical obstacles to data analysis is the lack of communication and integration between the wide range of mining operations [30].

Exploring the correlations of seemingly independent processes and variables through data-driven models will enable managers and engineers to create a holistic approach to ensure optimized production. Admittedly, the quality of recorded data (past or present) has a significant role in the efficacy of this approach. However, a more pressing issue is the availability of representative data. Furthermore, data quality is a very fluid concept depending on specific points of view (noise, corruption, bias, etc.) [31, 32].

This section analyzes DT’s potential in mining engineering based on DT’s applications in various engineering and manufacturing disciplines. For instance, fleet management, an already complicated issue, has become even more elaborate with the introduction of autonomous or semi-autonomous machines to the mine site. This integration introduces problems such as unique health and safety issues and regulations, resource efficiency, human-machine interaction, worker training, maintenance optimization, and many more [15].

Given the enormous expense and risk of a pilot project, DT can take its place, acting as a prototype in which realistic tests and simulations are carried out at close to zero cost. DT includes a data hub, simulation and analysis tools, and visualization platforms to enable appropriate designs and monitoring plans focusing on unknown areas. A few examples of DT’s applications in mining include fleet management, mine-to-mill optimization (especially D&B), and geotechnical digital twins (gDT) [33].

DT can also speed up, or entirely replace, physical-world trials when developing regulations; testing the efficiency of the mine layout and design; sizing restricted areas and their access points; studying human-machine interactions; identifying and managing risks; and examining the efficacy of control and monitoring systems.

By examining the related literature, the potential application of DT in the mining industry is divided into 1) Geotechnical, 2) Drilling and Blasting, and 3) Fleet Management.

2.1 Geotechnical DT

The challenges of rockfall, slope instability, tailings dam failure, etc., and their associated risks have haunted the mining industry for quite some time, causing fatalities, serious injuries, and financial losses. Following established empirical techniques is the most efficient (probably the only) approach to studying these problems, but such techniques have their disadvantages, chief among them the thorough and resource-exhausting geological studies they require [34].

Recent technological developments and their introduction into mining engineering applications open the door to updating the studies. Such renovations would mainly focus on automating geological mapping of the rock surface using unmanned aerial systems and big data analysis. Also, efforts will be taken to model the potential trajectory of the fallen rocks in a semantic 3D model based on machine learning algorithms [35].

At UNR’s Mining Automation Lab, UAV photogrammetry and LiDAR were used to gather point cloud data to build a 3D model of the rock slope and extract its geological features. Furthermore, a trajectory estimation model of rockfalls, developed at the same lab, uses the mass and origin of rockfalls to calculate the impact characteristics and simulate a rockfall’s energy changes during its fall [36, 37].

Both these developments rely on a combination of physics-based and data-driven models, necessitating continuous data flow. Moreover, the mine site is a dynamic environment, and changes happen daily; therefore, the information about the rock surface must be updated at timely and financially efficient intervals.

The empirical methods used for generations are our best bet at calculating the rockfall probability; however, the possibility of improving these techniques is also explored by using machine learning to extract correlations between rockfalls and the factors that affect them. One major challenge is the availability of historical data on rockfall cases.

Geotechnical digital twins of the mine site, capable of combining a high-fidelity 3D model of the site with semantic data received from monitoring, lab tests (including coefficients of restitution), and simulations, will present all this information in a single unified interface for interactive decision-making and developing a rockfall risk map.

2.1.1 The framework of “gDT”

Fan et al. [38] have explored the applicability of DT in disaster management. In their case, DT is used as a unifying platform for all the crisis information; to implement AI in situation analysis, decision-making (including resource allocation), and the cooperation of different parties; and to better understand the interactive effects of decisions and actions in disaster management. Figure 1 shows their ideology for the “disaster city DT”; the first step is data acquisition. Since disasters affect the data-gathering devices too, the data scarcity of disaster situations must be addressed through AI and remote sensing technologies (as well as crowdsourcing and social sensing). Data come from different sources; therefore, the next step is to integrate them (using AI algorithms like knowledge graphs and network embedding) to draw as much information (through ML) as possible. DT can help with the training and cooperation of responders in disasters; this capability is realized through “serious gaming environments” that also provide visualization. Analyzing the interaction of responders and the information they need helps with resource and task allocation. The iterative process allows the system to learn, grow, and provide predictive simulations.

Figure 1.

Disaster City digital twins components [38].

Typical sources of disaster information include remote sensing technologies such as satellites and UAVs, which provide images and point clouds through photogrammetry or LiDAR. UAVs are particularly useful in disaster situations as they are easy to deploy and require little infrastructure support [38].

Data integration is probably the most challenging part of the DT system. Fan et al. [38] propose using knowledge graphs (as heterogeneous information networks) to address the issue by defining central identities and network embedding. Data integration effort would lead to an enriched deep learning algorithm for DT to characterize the contents of images and sensory information while providing predictive simulations.

Networked analysis (using the Bayesian network) within DT helps visualize each actor’s knowledge, tasks, and relationship with others (priority) to provide them with required additional data, resources, and cooperation opportunities [38].

Investigating a smart city’s resiliency in case of natural and human-induced disasters, Losier et al. [39] argue that DT coupled with immersive technologies would enable disaster management first to examine the city’s preparedness in case of an emergency using simulations and second to analyze the effectiveness of preventive or mitigating strategies, plans, and technologies.

The financial losses and fatalities of natural disasters have increased in the past two decades. Critical infrastructure (transportation, power, and water supply systems, hospitals, etc.) not designed for extreme conditions is most vulnerable in disaster situations. An example would be power plants built near shorelines for water access [39].

Losier et al. [39] report that predictive simulations, mitigating technologies, and strategies are the most effective tools in managing any disaster event. For instance, practical forecasting efforts would result in a 35% reduction in annual flood damages, or highly resilient critical infrastructures (e.g., airports and power plants) would seriously limit or even prevent the consequences of an extreme event.

Focusing on the loose bits and pieces in a construction site that could become airborne during extreme winds, Kamari & Ham [40] have also examined DT’s application in disaster preparedness. The as-is condition of the construction site based on the point cloud data generated from UAV photographs helps identify and locate potential health and safety hazards. Threat identification is crucial as it is the prerequisite for mitigating efforts and training workers in simulated environments.

Point cloud (PC) segmentation clusters objects together, and an AI algorithm determines whether an object has the potential to become airborne. The semantic segmentation of PC data is either data-driven (relying on trained datasets and supervised learning) or model-driven (based on hand-engineered cost functions such as shape-fitting or region-growing algorithms). Model-driven methods are computationally simple but do not produce robust results in the presence of geometric variation and noisy PC data [40].

Kamari & Ham [40] use the SfM (Structure-from-Motion) framework in combination with Multi-View Stereo to generate the site’s 3D point cloud. In the next step, the primary wooden material and its boundaries are identified in 2D images. Based on the information from the 2D images, semantic segmentation (Convolutional Neural Networks) is performed on the location of identified objects using the camera’s intrinsic (focal length, distortion, etc.) and extrinsic (orientation and location) parameters. One shortcoming of this system is its inability to identify occluded objects (due to shading), which can be solved by developing a depth map from the different camera locations.

The RANSAC (Random Sample Consensus) algorithm is used for ground registration (plane fitting) based on the segmented point cloud data. The point cloud data is then transferred to a Cartesian system so that the z-axis is parallel to the projection direction. A bounding box encloses all the pixels on the XY plane, and the average height of all the points representing each pixel is calculated. To compute the volume of an object, the square size of the pixels in the bounding box is multiplied by the summation of the average height values [40].
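The volume step described above can be sketched as follows, assuming ground registration has already been done and the points belong to one segmented object. The grid resolution and the synthetic box of points are illustrative assumptions, not values from ref. [40].

```python
import numpy as np

def object_volume(points: np.ndarray, pixel: float = 0.1) -> float:
    """Volume of one segmented object: rasterize onto the XY plane and
    multiply the pixel area by the sum of per-pixel mean heights.
    points: (N, 3) array in a frame where z is the projection axis."""
    xy = np.floor(points[:, :2] / pixel).astype(int)
    heights: dict = {}
    for (i, j), z in zip(map(tuple, xy), points[:, 2]):
        heights.setdefault((i, j), []).append(z)
    mean_heights = [np.mean(zs) for zs in heights.values()]
    return pixel ** 2 * float(np.sum(mean_heights))

# A flat 1 m x 1 m slab of points at a height of 0.5 m (synthetic data)
xs, ys = np.meshgrid(np.linspace(0.05, 0.95, 10), np.linspace(0.05, 0.95, 10))
pts = np.column_stack([xs.ravel(), ys.ravel(), np.full(100, 0.5)])
print(round(object_volume(pts), 3))  # -> 0.5 (cubic meters)
```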

To calculate the risks associated with every identified object (or pile of objects), Kamari & Ham [40] use kinetic energy and critical wind speed (min. wind speed to lift objects into the air) based on object shape, size, and characteristics. Therefore, the wind speed is critical to decide whether an object would become airborne; at any given wind speed, objects are assigned different risk levels that determine priorities for preemptive measures to control loose objects.
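One simple way to estimate a critical wind speed is to balance the aerodynamic force 0.5·ρ·v²·A·C against the object's weight m·g; the exact formulation in ref. [40] may differ, and the drag coefficient and object properties below are illustrative assumptions.

```python
import math

def critical_wind_speed(mass_kg, area_m2, drag_coeff=1.2, rho=1.225):
    """Wind speed (m/s) at which aerodynamic force balances the weight:
    0.5 * rho * v^2 * A * C = m * g  ->  v = sqrt(2 m g / (rho A C))."""
    g = 9.81
    return math.sqrt(2 * mass_kg * g / (rho * area_m2 * drag_coeff))

def kinetic_energy(mass_kg, speed_ms):
    """Kinetic energy (J) of the object if carried at the given speed."""
    return 0.5 * mass_kg * speed_ms ** 2

# A 2 kg plywood sheet with 0.5 m^2 exposed area (illustrative values)
v_crit = critical_wind_speed(2.0, 0.5)
print(f"critical wind speed: {v_crit:.1f} m/s")
print(f"kinetic energy at v_crit: {kinetic_energy(2.0, v_crit):.1f} J")
```

Ranking objects by their critical wind speed and kinetic energy gives the risk levels that set priorities for securing loose items.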

Many mine sites are already using radars and LiDARs in addition to more traditional in-situ instrumentation to monitor the structural stability of the mine to predict instability-prone areas.

DT provides a realistic 3D model of the mine environment. Beyond being a powerful visualization tool with accurate measurements (slope height, angle, and bench width), the model carries semantic data about the geological profile (strata and features such as faults and joints) obtained from probe holes and automated surface mapping. By integrating all the data from monitoring sensors (both in-situ and remote), it maintains up-to-the-minute knowledge of the structural health of the mine. One of the most valuable features of DT (considering the empirical nature of stability analysis in mining) is its ability to absorb historical data from the site under study and from similar projects, enabling predictive simulations and dynamic decision-making.

Considering the similarities of modeling challenges and the lack of publications directly related to the mining industry, the literature on 3D modeling efforts in civil engineering projects is examined [41, 42].

An instance would be bridge modeling for later use in operation and maintenance (O&M). There are multiple challenges in creating a high-fidelity 3D model of an existing bridge. If the model is based on point cloud data, existing techniques can cluster the points belonging to each distinct object, but fitting a 3D shape to those clusters is not automated, and, because of the irregular shapes of the objects, it consumes about 95% of the whole modeling process [43].

Furthermore, the existing methods cannot quantify the spatial accuracy of resulting standardized descriptive models (i.e., Industry Foundation Classes). R. Lu & Brilakis [43] propose a slicing-based technique that generates the “geometric DT” of an existing reinforced bridge. Assuming proper object detection, the method would take labeled point clusters as input, extract geometric features, detect shapes, and fit an IFC object to each of them (features and shapes). Also, the Level of Detail (LoD) determines the precision of the model based on its application and requirements.

The generated 3D model is transformed into a point cloud to quantify its accuracy, and the result is compared to the original point cloud data using “cloud-to-cloud” distance-based measurements. Implementing this method on the point cloud data of 10 bridges yields an average modeling distance of 7.05 cm (manual methods achieve 7.69 cm) while being much faster than current practice, with a modeling time of 37.8 s [43].
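A cloud-to-cloud comparison of this kind reduces to a nearest-neighbour distance computation. The sketch below uses a brute-force NumPy implementation on synthetic clouds; real bridge-scale clouds would need a spatial index such as a k-d tree, and the clouds here are assumptions made for illustration.

```python
import numpy as np

def cloud_to_cloud(reference: np.ndarray, generated: np.ndarray) -> float:
    """Mean nearest-neighbour distance from each reference point to the
    generated cloud (brute force; fine for small clouds only)."""
    d = np.linalg.norm(reference[:, None, :] - generated[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

rng = np.random.default_rng(0)
original = rng.uniform(0, 10, size=(200, 3))  # scanned cloud (synthetic)
model = original + 0.05                       # a model offset by 5 cm per axis
print(f"mean C2C distance: {cloud_to_cloud(original, model):.3f} m")
```

Symmetrizing the metric (averaging the distance in both directions) is common in practice, since either cloud may contain regions the other misses.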

Another example of DT’s application in bridge maintenance is the Bridge Maintenance System (BMS) introduced by Shim et al. [44], which uses a combination of UAV photogrammetry (for lateral and top surfaces) and Laser Scanning Cloud Data (for the bottom surface). BMS introduces the possibility of combining different techniques for gathering heterogeneous information and data from each physical asset or system component, modeling them, and combining them in the DT environment. They have also used image processing for crack detection and have demonstrated how the 3D model could be updated after each analysis to represent the up-to-minute as-is condition of the bridge.

H. Zhang et al. [45] have proposed a DT-based monitoring system for geological hazards. The evolution of the prediction methods for such hazards has led to a dynamic system capable of “real-time” analysis and prediction (using a Back Propagation Neural Network to calculate the probability of a hazard). Combining data from GIS, GPS, and various remote sensing technologies (indicating the status of the topography) with historical data (landslides, subsidence, collapses, and meteorology) provides a data visualization tool (based on GIS) and a predictive simulator of the behavior of the slope. Another advantage is that monitoring shortcomings would be identified and addressed using this system. This system would not only warn all the vehicles and people near unstable slopes (through the internet and the internet of vehicles) but also provide the opportunity to conduct mitigation and control strategies (Figure 2).

Figure 2.

The DT method is used for geological disaster warnings. Inspired by ref. [45].

As mentioned above, data fed to the DT includes environmental data, historical disasters and related information, and the vehicles in the area (through a monitored checkpoint) [45].

The visualization part of the DT is also used to guide and warn all the devices and vehicles in the area and the first responders. Based on this information, it would be possible to design appropriate monitoring plans focusing on unstable areas [45].
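The back-propagation network at the heart of such a warning system can be illustrated with a toy example: a one-hidden-layer network mapping two normalized monitoring features to a hazard probability. The data, architecture, learning rate, and feature choices are all illustrative assumptions, not details from ref. [45].

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(400, 2))           # e.g., normalized rainfall, displacement rate
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(float)  # synthetic "hazard occurred" label

# One hidden layer, trained with plain batch back-propagation
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                   # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()           # predicted hazard probability
    grad_out = (p - y)[:, None] / len(X)       # dLoss/dlogit for cross-entropy
    dh = grad_out @ W2.T * (1 - h ** 2)        # back-propagate through tanh
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(0)
    W1 -= lr * (X.T @ dh);       b1 -= lr * dh.sum(0)

test_pts = np.array([[0.9, 0.8], [0.1, 0.1]])  # wet & fast-moving vs. dry & stable
probs = sigmoid(np.tanh(test_pts @ W1 + b1) @ W2 + b2).ravel()
print(np.round(probs, 2))
```

In the system of ref. [45], such a probability would trigger warnings to nearby vehicles and people and feed the GIS-based visualization.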

2.1.2 Risk analysis capabilities of “gDT”

Emergencies and accidents are an inseparable part of the mine site, and DT could help organize first and secondary response efforts. “Smart City Digital Twins (SCDT)” is an idea developed by Ford & Wolf [46] for disaster management. They divide the disaster experience into four phases: mitigation and preparation (shortly before and during a disaster), response (within days and weeks), and recovery (months or years after the disaster). The experience is unique to every community and depends on the severity and type of disaster, demanding specific plans that consider each community’s characteristics.

As Ford & Wolf [46] point out, a disaster would destroy individual components and their interaction with each other. DT must be able to gather and combine information from the Smart City sensors first to visualize the current condition to help with emergency response and second to predict disaster evolution and the effectiveness of the response strategy.

Figure 3 shows DT’s iterative method to help with response optimization. Data is collected, response strategies are planned and tested, and decisions are made (based on the simulations), which leads to actions. The actions would affect the community; therefore, the model is updated, new plans are made, and the iteration continues until the desired state is achieved [46].

Figure 3.

The iterative response model is used by SCDT [46].

The most crucial difference between traditional disaster management and the proposed SCDT system is the closed loop of data, decisions, actions, and condition updates. Also, the proposed method can model the disrupted system’s components and interactions. To realize the true potential of SCDT, the system must be able to conduct near real-time data acquisition, analysis, and simulations to provide timely feedback for decision-making [46].
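The closed loop of data, simulated strategies, decisions, actions, and condition updates can be sketched as a simple iterative controller. The toy dynamics and the candidate strategies below are assumptions for illustration only.

```python
def simulate(condition: float, action: float) -> float:
    """Twin's predicted community condition after an action (toy dynamics:
    each action restores half its magnitude, capped at full service)."""
    return min(1.0, condition + 0.5 * action)

def scdt_response_loop(condition: float, target: float = 0.95) -> int:
    """Iterate sense -> simulate -> decide -> act -> update until the
    community condition reaches the target; return the iteration count."""
    iterations = 0
    while condition < target:
        candidates = [0.1, 0.2, 0.3]             # candidate response strategies
        best = max(candidates, key=lambda a: simulate(condition, a))
        condition = simulate(condition, best)    # act, then update the twin
        iterations += 1
    return iterations

# A disrupted community starting at 20% of normal service level
print(scdt_response_loop(0.2))  # -> 5 iterations to recover
```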

In addition to all the benefits of SCDT in disaster management, Ford & Wolf [46] raise another compelling reason for developing SCDT for disaster management. They argue that as disaster management requires specifically defined information and is not concerned with the fidelity of the model (thus reducing the modeling time), it would be a great starting point and testbed for developing SCDT to its full potential: city planning and management.

The SCDT system has three types of components: parts that are the subject of monitoring and prediction efforts (shaded areas), sensors (data acquisition), and visualization and prediction models (Figure 4). After each iteration, the effect of the decision on the community’s “condition” (the availability of resources after a disruption) and “features” (overall infrastructure systems capacity) is assessed [46].

Figure 4.

Schematic of the SCDT system and its interaction with the community [46].

The monitoring data are essential in determining the potential hazards and the efficiency of mitigation measures installed at the site. Having all the data in a centralized space will enable further study of the risk factors and their effect individually and collectively.

2.2 Drilling and blasting DT

Mine-to-mill optimization depends highly on the efficiency of drilling and blasting (D&B) practices as it affects the downstream mineral beneficiation process. The research team at the University of Nevada’s Mining Automation lab investigates the potential of taking advantage of recent technological developments and their introduction to mining engineering, such as exploring the application of the Digital Twins (DT) methodology in D&B analysis. The innate complexity of D&B practices requires an equally universal analysis approach, considering the number of variables affecting it. The existing analysis techniques are based on simplified physics-based equations or data-driven models. However, DT can merge them to offer the universality and accuracy of data-driven models and the interpretability of physics-based models. Data is collected in several stages of D&B, including but not limited to measure-while-drilling data imported from smart drills, fragmentation data, crushers and mills performance, blast site characteristics, charging and timing information, blast design and geology, hole-logging, and laser-scanned profiles, drone images/videos, and operational KPIs. In a DT framework, all the data collected during D&B operations and downstream are linked under a common platform. Through data-driven modeling that depends on AI and machine learning, operators can begin to see correlations between different parameters that were unnoticeable using other modeling methods. Integrating these models into existing empirical or physical simulators would also help address the black-box approach of data-driven models. Once the DT model is formed, D&B designs will be analyzed in real time, and predictions on their performance (fragmentation, back break, fly rock, and air overpressure) will be made based on physics laws and historical data. 
In the day-to-day decision-making process, a D&B engineer could rely on the DT to evaluate the performance of the D&B program at their mine and the evolution of their designs over time. Various stakeholders can benefit from the DT as it becomes a learning tool that helps guide operations toward more consistent, predictable, and repeatable D&B results.
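As a minimal sketch of such a hybrid model, the snippet below pairs the classical Kuznetsov (Kuz-Ram) equation for mean fragment size with a linear residual correction fitted to historical blast outcomes. The design tuples, the choice of a linear correction, and the function names are illustrative assumptions, not details from the chapter.

```python
import numpy as np

def kuz_ram_x50(rock_factor, powder_factor, charge_kg, rws=100.0):
    """Kuznetsov equation: predicted mean fragment size X50 (cm) from the
    rock factor, powder factor (kg/m^3), charge per hole (kg), and the
    explosive's relative weight strength (ANFO = 100)."""
    return (rock_factor * powder_factor ** -0.8 * charge_kg ** (1 / 6)
            * (115.0 / rws) ** (19 / 20))

class HybridFragmentationModel:
    """Physics prediction plus a linear residual correction learned from
    historical blasts -- a stand-in for the DT's AI/ML component."""

    def fit(self, designs, measured_x50):
        # Residual = what the physics model misses on past blasts.
        physics = np.array([kuz_ram_x50(*d) for d in designs])
        residuals = np.asarray(measured_x50) - physics
        X = np.column_stack([np.ones(len(physics)), physics])
        self.coef, *_ = np.linalg.lstsq(X, residuals, rcond=None)

    def predict(self, design):
        # Interpretable physics backbone, data-driven correction on top.
        p = kuz_ram_x50(*design)
        return p + self.coef[0] + self.coef[1] * p
```

Because the physics term carries the interpretability, the learned correction stays small and auditable, which is the point of merging the two model families.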

2.2.1 The framework of a drilling and blasting DT

A drilling and blasting DT must be able to capture blast information and analyze it using the already existing key performance indicators (KPIs) to search for the factors affecting the outcomes by delving into the unobserved physics of the process.

The first step in realizing the DT is to understand the potential data sources; rock characteristics and bench design plans, drill rigs and supporting equipment, explosive material, and personnel are among the essential components controlling a D&B session.

The drilling and blasting DT will be a hub for a high-fidelity 3D model, including the rock mass's geological features and drill holes, offering a powerful simulation tool that provides a holistic overview of the blast study and analyses.

Simulations will be based on a combination of data-driven and physics-based models to reduce uncertainties while improving the interpretability of the results. Importing historical data into the system and complementing it with state-of-the-art technologies makes it possible to establish an objective approach capable of producing reproducible results.

The type of rock and minerals, clay content, water content, structural features such as faults and joints, etc., must be included in the high-fidelity 3D model of the mine environment used in a DT. The current methods for acquiring this information must be analyzed further to understand their compatibility with each other and with the DT methodology. Geological and geotechnical information and resource estimation data are produced and stored digitally using block models. These block models can be tied to the digital twin so that it updates dynamically as more information becomes available. Based on this information, any design scenario is stored as CAD files that can be used by planning software and easily transferred to smart drill rigs in the field.

Drill rigs will be essential to a drill and blast digital twin, both from a data acquisition perspective and for controlling the machines themselves. Smart drill rigs need input data (drill hole design patterns) from the DT to work correctly, and at the same time, sensors installed on the machine will collect and send valuable information back to the DT. There is also a significant amount of geotechnical data that the drill rig can provide. The location of the drill head from GPS transmitters (with connected ground stations for accuracy), rotary speed and torque, compressed air pressure and flow, vertical and horizontal vibrations in the drill string and mast, drill level, and mast angle with the vertical are just a few examples of the machine data [47].

Measure-while-drilling data, such as current drilled depth, feed force exerted, and rotary torque, which are common these days, also help track drilling progress in real time, which fits the definition of a DT [48]. The DT can use these measurements to determine secondary parameters such as rate of penetration and blastability index. UNR's Mine Automation Lab has also produced a system that uses drone imagery to analyze the locations of drill holes and compare them to the as-designed CAD file so that engineers can devise mitigating measures in loading and sequencing. This system's advantage over GPS is that it works even on the deepest benches and does not need any ground stations.
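For illustration, two such secondary parameters can be derived directly from raw MWD logs. The function names and units below are assumptions, and Teale's classical drilling specific energy is used here as a stand-in for a blastability-style index.

```python
import math

def rate_of_penetration(depths_m, times_min):
    """Incremental rate of penetration (m/min) from cumulative
    measure-while-drilling depth and time logs."""
    return [(depths_m[i + 1] - depths_m[i]) / (times_min[i + 1] - times_min[i])
            for i in range(len(depths_m) - 1)]

def teale_specific_energy(wob_n, hole_area_m2, rpm, torque_nm, rop_m_min):
    """Teale's drilling specific energy (J/m^3): a thrust term plus a
    rotary term; higher values generally indicate harder ground."""
    rop_m_s = rop_m_min / 60.0
    thrust = wob_n / hole_area_m2
    rotary = (2.0 * math.pi * (rpm / 60.0) * torque_nm) / (hole_area_m2 * rop_m_s)
    return thrust + rotary
```

Streaming these derived values into the DT is what enables the per-hole, per-bench comparisons described above.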

Loading drill holes with explosives probably needs more attention regarding digitization; it can be analyzed in terms of both the explosives themselves and the loading process. The type and amount of explosives used depend on the bench and rock conditions and on the type of blast and its purpose. Explosive consumption for every bench is well documented through blast reports and magazine inventory records [49]. This information must be digitized and stored within the DT. Safety measures are always the priority at every D&B stage, and even more so during loading. Safety records are also archived to ensure personnel safety and demonstrate adherence to regulations [48]. These records can be in analog or digital form, but they are well maintained, and their complete digitization could bring tremendous value to DT modeling and simulations.

During the loading process, one of the significant steps is to check for "hot holes" to avoid charging drill holes with high temperatures (above 131°F). Remote moisture measurement and surface temperature analysis using UAVs could help [49]. Verifying each drill hole's depth and water level is a crucial step in designing the loading profile (the stemming column's length, the type and density of explosives, and air decking). All this information is recorded digitally on the powder truck and by the blaster [48].
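A minimal sketch of how a DT could automate these loading checks is shown below. The 131°F hot-hole threshold comes from the text; the 0.7 × burden stemming rule is a common industry heuristic used here only as an illustrative assumption, as are the hole IDs and field names.

```python
HOT_HOLE_LIMIT_F = 131.0  # threshold cited in the text

def flag_hot_holes(surface_temps_f, limit_f=HOT_HOLE_LIMIT_F):
    """Return IDs of holes whose UAV-measured surface temperature exceeds
    the hot-hole limit and must not be charged without further checks."""
    return sorted(h for h, t in surface_temps_f.items() if t > limit_f)

def loading_profile(hole_depth_m, water_depth_from_collar_m, burden_m):
    """Hypothetical loading profile: stemming length taken as 0.7x burden
    (a rule of thumb, not a value from the chapter); any charge sitting
    below the water level is flagged as needing water-resistant product."""
    stemming_m = 0.7 * burden_m
    charge_m = max(hole_depth_m - stemming_m, 0.0)
    wet_m = max(hole_depth_m - water_depth_from_collar_m, 0.0)
    return {"stemming_m": stemming_m,
            "charge_m": charge_m,
            "wet_charge_m": min(wet_m, charge_m)}
```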

Several instruments, such as high-speed cameras and seismographs, measure specific parameters (burden, blast vibrations, flyrock, and air overpressure) during blasting and can determine how well the blast has been executed [47]. Other secondary parameters, such as the powder factor, can be calculated within the DT to further analyze a blast with respect to the explosives used.

After the blast, trusted KPIs within the industry are used to analyze the whole process and the efficacy of blast design.

Rock fragmentation is probably the most telling performance indicator, but the current methods to measure and determine this number are not always as efficient as hoped.

Several techniques can extract fragmentation data much more efficiently. Among these are photographic methods to extract particle sizes in the muck pile, crusher monitoring, secondary blasting/breakage costs, and shovel monitoring. Crusher energy consumption, the size of the crushed product, the size and strength of the feed material, and crusher throughput are collected from cameras and sensors installed in the machinery. As for the shovel, diggability is inferred primarily from the crowd armature voltage and current, the hoist armature voltage and current, the dipper trip relay, and the crowd/propel relay [47, 48]. Together, this information can help assess the fragmentation outcome of a blast inside the DT.

In the proposed framework, data flows from the physical twin and its components to the DT, where it is stored, processed, and analyzed. Based on the results of those analyses, control orders are issued to steer the process in the real world. To comply with industry needs, performance analysis will be based mainly on KPIs widely used by field engineers and managers worldwide.

As discussed in the previous section, dividing KPIs by the corresponding stage of a D&B session makes the DT's analysis of the process straightforward and makes it easier to find the source of potential shortcomings. For instance, drilling performance could be analyzed using collar deviation, and down-the-hole deviation could help engineers track the as-drilled status of drill holes and make the necessary adjustments before the blast and for future designs. To better understand the reasons behind these errors, parameters such as rate of penetration, drill bit life, gallons per foot drilled, gallons per engine hour, engine hours per compressor hour, and feet drilled per day will help distinguish between the ground's and the machine's effects on deviations from the as-designed plan.
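As an illustration of the collar-deviation KPI, the sketch below compares as-designed collar coordinates (e.g., from the CAD file) with as-drilled positions (e.g., from the drone-imagery system). The function names and the 0.5 m tolerance are assumptions for the example.

```python
import math

def collar_deviation_m(designed_xy, drilled_xy):
    """Horizontal distance between the as-designed and as-drilled collar."""
    return math.hypot(drilled_xy[0] - designed_xy[0],
                      drilled_xy[1] - designed_xy[1])

def deviation_report(designed, drilled, tolerance_m=0.5):
    """Per-hole deviations plus the holes outside tolerance, which would
    trigger adjustments to loading and sequencing before the blast."""
    devs = {h: collar_deviation_m(designed[h], drilled[h]) for h in designed}
    out_of_spec = sorted(h for h, d in devs.items() if d > tolerance_m)
    return devs, out_of_spec
```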

2.3 Fleet management

2.3.1 Human-machine interaction

One of the main areas of DT applicability is examining human-machine (human-robot) interactions. Malik & Bilberg [50] have presented a framework to manage this collaboration on a shop floor. They identify the challenges of designing "human-robot collaborative (HRC)" systems as defining health and safety measures (including "collision detection"), programming the robots, and repeating these steps every time the system is reconfigured. The DT's application in this case is not much different from other areas: it offers a safe, virtual environment to conduct tests and validate the system at the design stage. It is, however, much more crucial here, considering that testing human-machine systems in real life would expose humans to unpredictable hazards. During operation, the DT again offers real-time monitoring of the physical twin and predictive simulations to identify potential mishaps. The authors used 3D CAD objects to build the digital twin and imported them into the Tecnomatix Process software. These objects (including human models created using the Process Simulate human package) are configured based on the shop floor design and assigned specific tasks. The outcomes of the DT include 1) optimized task allocation for humans and robots so that downtimes and fatigue are as low as possible; 2) an efficient shop floor configuration based on the results of four tests, namely a "collision analysis" for safety and productivity purposes, a "reach test" to find optimum locations, a "placement test" that revises the results of the first two tests, and a "vision test" to understand the human model's POV and remove obstacles; and 3) offline programming and robot visualizations, reducing testing time in the real world [51, 52].

Malik & Brem [53] present a DT framework for combining the efforts of humans and machines in the workplace. DT could help optimize the shop floor design by ensuring optimized integration, fast reconfiguration, and safety measures. Figures 5 and 6 show DT’s application in an HRC environment and its evolution as the project moves from design to operation and maintenance stages.

Figure 5.

DT’s application in an HRC environment [53].

Figure 6.

The evolution of DT as the project evolves [53].

Designing such collaborative environments must consider the optimum and safest locations for robots, equipment, and humans to guarantee access, reduce cycle time, and avoid collisions. A DT could provide a virtual environment in which various design scenarios can be tested; moreover, it could help in the development phase, when a "mixed reality environment" would allow for the gradual realization of the most optimal design [53].

Malik & Brem [53] also explore the advantages of DT in the operational phase of HRC environments. First, human and robot abilities, skills, and characteristics must be identified; second, safety measures and their associated limitations are considered; and third, dynamic task allocation is performed to minimize the system's idle times. Robot programming is the most challenging and time-consuming part of any robot application; with a DT, robots can be programmed in the virtual environment, and once the desired operation is achieved, the results are transferred to the physical world. Real-time simulations can then be performed, since the digital twin is already designed using a robot-understandable language. As in any other implementation of DT, monitoring and performance analysis are also crucial.

2.3.2 Maintenance optimization

Ayani et al. [54] have used DT to reduce the "commissioning time" of machine overhauls and repairs. They argue that as machines age, their repair and renovation become more challenging because manufacturers stop producing essential parts or providing technical details. There is also the fact that, in the age of Industry 4.0, machines have to support connectivity and data-sharing technologies; old devices need several renovations to support them. This idea has great potential in the mining industry, as many mines still use old machines in their fleets, and many new operations begin with such machines because they require less capital. But as new mining technologies are introduced, these machines must be updated to be compatible with newer, more sophisticated ones and to support digital connectivity.

As Ayani et al. [54] point out, machines will be out of operation during repairs, commissioning, and verification, whether they need repair or are just being renovated. Several factors affect this time: machine complexity, resources (time, human, parts), and technical support. Obsolete or missing technical data is the most problematic hindrance, especially if the company conducting the reconditioning is not the manufacturer. Using a DT, a significant portion of the work could be done offline without taking the machines out of operation.

While acknowledging the high cost of building a DT, Ayani et al. [54] state that in the long run it pays off by reducing the real commissioning time, the energy and resources required, and the associated risks. A DT can also be extended to monitor the physical system's whole life cycle. Even new machines must be validated before they begin operation, and these on-site tests can be risky, as failures could delay the procedure or induce unforeseen costs. By introducing "virtual commissioning," Ayani et al. [54] propose that a substantial portion of the verification tests could happen in a digital space, given a representative model of the machine and its environment. They use emulation software, Simumatik3D, to build a digital replica of the whole system. This software replicates different parts individually and then connects them; it has libraries of essential "electric, pneumatic, and hydraulic" components, allowing the construction of a model that reflects its physical counterpart's geometry, physics, and movements. It also supports integrating industrial logic systems to control the digital model.

Ayani et al.'s [54] case study is a reconditioning project at a foundry. They had 4 weeks to complete the repair and renovation process, including the tests, commissioning, and reassembly of the new PLC (Programmable Logic Controller) and HMI (Human Machine Interface) for the system. The chart below outlines their strategy to take full advantage of DT in that project.

The first step in the algorithm Ayani et al. [54] used to incorporate DT in their renovation project is studying the machine, its components, their interactions, and all existing documents in order to understand the machine and build its digital model. The second stage is to design and test, in the virtual space, the new parts and software intended to be installed on the system. Then, all these renovations and repairs are integrated into the system, and their functions are tested as a unit during virtual commissioning. Up to this point, the machine has remained in operation with little (if any) disturbance. The physical repairs occur in the last phase, followed by the actual commissioning and testing. Ayani et al. [54] report that DT and virtual commissioning shortened the time required for this section of the process by 60%.

Guivarch et al. [55] have implemented DT to collect additional data about the loads applied to a helicopter's main rotor. The number of sensors and the instrumentation needed to gather the necessary data mid-operation would be enormous, which makes DT's application an advantage. A limited set of sensors is attached to the rotor to acquire and send data during flight; this information is fed into the DT and its "multibody simulation" system to predict the loads on all other parts and joints.

Guivarch et al. [55] define DT as a closed loop between simulations and operational data. They test a laboratory rotor assembly under dynamic and static loadings to train the model. Several sensors (far more than in flight) are attached to this assembly to acquire the load data. Only the data acquired from the sensors used in flight are inputs to the DT; the other sensors verify the results and help refine the model and sensor positions. The validity of the results depends on the modeling technique and its level of detail.

Okita et al. [56] argue that a change in environment causes a machine to update its operation and settings accordingly. Therefore, the initial design parameters might not be applicable after a certain amount of service time, or if the system experiences an extreme condition requiring drastic changes. They emphasize economic and safety variables, as these might be affected by natural disasters, competition, market fluctuations, etc.

The proposed “DTAS” (Digital Twins of Artifact Systems) would enable designers and users of the system to share information during the life cycle of a product or system. This way, they could address these variations and implement necessary system design and functionality changes. A monitoring system (at different scales), a detailed model of the system to reflect the monitoring data, and predictive analysis tools (to test every change before implementing it) are all parts of the DTAS [56].

The benefits of such a system include efficient monitoring plans, identifying weak points of the system in case of a disaster (by conducting simulations), and designing (and testing) mitigation plans in case of an emergency by replicating the disaster in virtual space. Another advantage of using DT is that system builders could avoid over-design and offer flexibility so parts of the system or its operation could be changed if necessary (DT would monitor and predict if there is a need for change) [56].

"DTAS" helps obtain up-to-date technical information and validate new components such as PLCs and HMIs. One of its most essential capabilities is providing a shared, unified information hub for all parties involved. Each party will have its own needs and priorities, which requires different types of models to be included in the DT (or one unified model) [56].

To investigate the impacts of economic factors on the mining industry's maintenance efforts, Savolainen & Urbani [57] use a combination of Discrete Event Simulation (DES) for maintenance and System Dynamics (SD) for "managerial cash flow (CF)" to "co-simulate" maintenance decisions and their financial implications. Without a DT, this co-simulation would not be possible, as the two models are fundamentally different and require data transfer between them.

The economic variables in the Savolainen & Urbani [57] model are the monthly metal price and the up-to-the-minute maintenance state of the mine fleet, which together determine the operation's profitability. The goal is to use the DT as an optimization tool (decision-making based on a set of heuristic rules) for maintenance activities. The maintenance of a single component depends on its "time to failure" (TTF) distribution and the cost of corrective maintenance, but at the same time, a multi-component system can be managed as one unit (since components are interdependent).

There are two types of maintenance strategies, "condition-based (CBM)" and "time-based (TBM)," and both include two activities: "corrective" (CM) and "preventive" (PM), the latter justified by costs that are potentially lower than those of a system failure. The DT would also include simulation tools to estimate the machine's failure time and suggest preventive maintenance [57].

Savolainen & Urbani [57] divide the mine systems into "server-queues" (shovels, dump sites, and the workshop) and "agents" (trucks) that are free to move between the "FIFO servers" (first in, first out). In addition to being servers, shovels are subject to aging and can fail unexpectedly; in that case, they are transferred to the workshop with the highest priority. Each time a truck completes an event (between two servers), the model decides whether to send it to the workshop or to another server. For the shovels, this inspection happens after each truck loading to determine whether they need to be sent to preventive maintenance. Ultimately, the maintenance policy depends on the allowable time trucks (agents) and shovels may work without any preventive maintenance. Determining these time thresholds (a function of scale and mean failure intervals) amounts to minimizing the cost function associated with the operation (maintenance and failure) while satisfying a bounding condition, the minimum production target.
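The threshold search described above can be sketched as a simple Monte Carlo over Weibull times to failure: for each candidate PM interval, simulate whether each cycle ends in a cheap preventive replacement or an expensive failure. The distribution, cost values, and single-component scope are illustrative assumptions, not the authors' actual model.

```python
import random

def expected_cost(pm_interval_h, shape, scale_h, cm_cost, pm_cost,
                  horizon_h, trials=2000, seed=0):
    """Monte Carlo estimate of maintenance cost over a horizon for one
    component with Weibull time-to-failure: replace preventively at the
    PM interval, or correctively (at higher cost) if it fails first."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t = cost = 0.0
        while t < horizon_h:
            ttf = rng.weibullvariate(scale_h, shape)  # alpha=scale, beta=shape
            if ttf < pm_interval_h:
                t += ttf
                cost += cm_cost          # ran to failure: corrective
            else:
                t += pm_interval_h
                cost += pm_cost          # preventive replacement
        total += cost
    return total / trials

def best_pm_interval(candidates_h, **kwargs):
    """Pick the candidate threshold with the lowest simulated cost."""
    return min(candidates_h, key=lambda c: expected_cost(c, **kwargs))
```

With a wear-out failure mode (Weibull shape > 1), a moderate PM interval should beat a run-to-failure policy, which is the trade-off the DT optimizes.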

The results of the maintenance module are transferred into a CF model that also includes the metal price (simulated using a specific economic model to represent market uncertainty). While it is possible to put both simulations into one software package, there are practical challenges: mine managers are hesitant to share sensitive data, and developing such software is expensive [57].

To examine the conformity of the model to the logical trade-offs between PM and CM costs, Savolainen & Urbani [57] also conducted a sensitivity analysis focused on the available resources (trucks, shovels, and workshops), which yielded complicated results because different configurations might yield the exact same cost. However, as Savolainen & Urbani [57] point out, several other aspects are worth considering, chief among them the capital investment requirement, the required human resources, and the integrity of the assembled system.

Savolainen & Urbani [57] conducted two experiments to examine the validity of the whole DT approach. The first is a fleet design based on three different maintenance policies ("max preventive," "max corrective," and "balanced"). The results show that the highest profits and mill utilization are obtained for different fleet configurations under a "run-to-failure" maintenance policy: the optimal fleet is 8–9 trucks and 1–2 shovels, resulting in over 90% mill utilization. In the next step, they used cost distributions to reflect the uncertainties in the costs of CM and PM (5 and 1.1 times the assumed values, respectively) and reran the simulations. The results are similar to the first version: no PM, nine trucks, one shovel, and 98% mill utilization. The authors argue that this is due to the limited capacity of the workshops; logically, however, the best maintenance policy has to find the optimal maintenance intervals that keep the number of CMs to a minimum.

The other experiment conducted by Savolainen & Urbani [57] seeks the optimal PM intervals by running the simulations for different numbers of PM events per week, ranging from 1 PM event every 8 weeks to 1 PM event per week. The results indicate that nine trucks, one shovel, and 1 PM per week yield the highest profits, with 91.6% mill utilization. However, a closer examination of the results shows that managerial judgment must still be applied, as another option with fewer trucks and slightly longer PM intervals yields profits in the range of the most profitable option.

Another important finding of this optimization simulation is the optimal number of trucks per shovel based on the workshops available and the PM intervals. The DT helps combine a DES optimization and a CF model to find the best fleet design and maintenance policy for each mine based on the available resources and the mineral value. However, issues must still be addressed, including the scalability of complex models and data, data integration and transfer between the modeling disciplines, and verification of the results [57].

2.3.3 Autonomous vehicles (AVs)

As for autonomous vehicles, the existing literature has a positive outlook on applying DT technology to their control and maintenance. Almeaibed et al. [58] mention the applicability of DT in data transformation, data processing, and analysis for AVs (defined as vehicles that can sense their surroundings and move without human intervention) and intelligent manufacturing systems. But they also acknowledge the vulnerability of such systems to malicious attacks and propose a protective method to assure data security within the digital twin paradigm (Figure 7).

Figure 7.

DT supports autonomous vehicles’ safety and security [58].

The argument is that if a high-fidelity virtual model represents the physical system's past, current, and future states, it becomes plausible to identify data anomalies from the apparent deviation between the virtual and real-world objects. According to the researchers, the five steps of AV design are "perception" (environmental sensing using LiDAR and radar), "localization" (the exact coordinates of the vehicle), "planning" (deciding the course of action based on the first two steps), "control" (execution of the plans), and "system management" (an HMI providing the user with necessary information, such as malfunctions within the system) [58].

These steps must be thoroughly examined, and DT can provide a safe environment for such examinations. When describing the elements of the DT for AVs, the authors mention an initial state in which the DT holds all the manufacturing data; when the system becomes "operational," the sensors begin collecting data and feeding the digital twin, which examines the soundness of the information received and analyzes it. Then either the DT makes an automatic decision (the highest level of autonomy) or reports the results back for human decision-making. The biggest challenge in realizing this automatic response is creating a system capable of receiving vast amounts of data (at high levels of depth and detail) in real time (or near real time), analyzing it, and reporting the results [58].

Regarding the cybersecurity of AVs, the biggest challenge is the enormity of the system, which comprises many sensors and connection networks, each of which could be a potential attack point. Dependency on cloud computing, the use of diverse open-source programming languages without validation, and the growing hesitation of manufacturers to share resources and technologies are among the notable concerns in AV cybersecurity [58].

As examples of possible attacks, Almeaibed et al. [58] point to attacks on individual sensors, on internal networks (passwords and keys), and on external networks (a "vehicle to everything" attack affecting any system that provides a connection to the cloud, which could lead to "distributed denial of service" attacks), as well as "spoofing attacks," for instance on GPS, which give the attacker the ability to manipulate the vehicle by sending erroneous data instead of accurate GPS information. The DT's role here would be to provide a testbed for different attack scenarios and failure modes to identify system vulnerabilities and behavior under malicious actions. Understanding the system's shortcomings helps provide security and safety countermeasures during the design phase. One possible identification method is to detect one or two sensors sending irregular information compared to the other monitoring instruments.
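One way to sketch this irregular-sensor detection idea is to compare each reading against the twin's prediction, scaled by that sensor's historical residual spread. The 3-sigma rule, sensor names, and data layout are assumptions for illustration.

```python
import statistics

def flag_spoofed_sensors(residual_history, twin_pred, readings, k=3.0):
    """Flag sensors whose current reading deviates from the DT prediction
    by more than k standard deviations of that sensor's historical
    residuals (reading minus prediction)."""
    flagged = []
    for name, reading in readings.items():
        sigma = statistics.pstdev(residual_history[name]) or 1e-9
        if abs(reading - twin_pred[name]) > k * sigma:
            flagged.append(name)
    return sorted(flagged)
```

A single sensor disagreeing with both the twin and its peers is exactly the spoofing signature the text describes.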

Energy consumption, especially for electric vehicles, is another important focal point in fleet management. Zhang et al. [59] have proposed a machine learning-based "energy consumption model" that is trained using real-world data and can simulate future events. As multiple correlated factors (internal, external, and human) affect energy consumption, a DT offers the chance to take each of them into account for a more holistic overview of the vehicle's state. As mentioned earlier, the model is trained using historical data, and expected behavior is established; if the machine starts deviating from the norm, the system is triggered to check different sensors and analyze them to find the source of the potential problem.

As machines nowadays have multiple sensors reading the vehicle’s current state, this information, coupled with environmental data (temperature, wind, traffic, road condition, etc.), is collected and transmitted to a cloud server via wireless transmission (5G). The aggregated data would be analyzed and compared to historical data to update the digital model and identify anomalies to make necessary adjustments to the physical twin. All the analyses, simulations, and results are visualized using a semantic 3D model to make decision-making much more manageable. Each time the model makes a prediction, the results are compared to the real world to examine the accuracy of the simulations, and the results of such comparisons are used to evolve and optimize the model [59].

Zhang et al.'s study [59] used data obtained from the vehicle, encompassing parameters such as the "state of charge (SOC)," the "instantaneous output voltage and current of the battery," and the battery temperature range. Readings were taken at a constant temperature over 80 "instances" to define a "compensation coefficient," which quantifies the difference between the physical and digital twins. By calculating the compensation coefficient at different temperatures, they could establish the relationship between the two; knowing that correlation makes it possible to simulate the vehicle's future state and estimate the system's errors. While they acknowledge that other factors (such as wind direction, slopes, and road conditions) affect the measurements and errors, this is an example of the correcting process necessary to optimize the model.
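A hypothetical sketch of this compensation-coefficient workflow follows. The study does not give the exact functional form, so the ratio definition and the linear temperature trend used here are assumptions.

```python
import numpy as np

def compensation_coefficients(physical, digital):
    """Per-instance coefficient relating physical measurements to the
    twin's predictions (assumed here to be a simple ratio; the exact
    definition in the study may differ)."""
    return np.asarray(physical, float) / np.asarray(digital, float)

def fit_temperature_trend(temps_c, coeffs):
    """Linear trend coeff ~ a*T + b across battery temperatures."""
    a, b = np.polyfit(temps_c, coeffs, 1)
    return a, b

def corrected_prediction(twin_value, temp_c, a, b):
    """Apply the temperature-dependent compensation to a raw twin output."""
    return twin_value * (a * temp_c + b)
```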

Some scenarios require a focused study of crucial parts of a vehicle. For example, monitoring an autonomous vehicle's braking system is a significant health and safety measure. Magargle et al. [60] have used DT for heat monitoring and predictive maintenance of an automotive braking system. They use physics-based models with assigned failure modes and train them (using ML algorithms) to simulate the future state of "degradable" components. Using reduced-order models for parts of the braking system makes it possible to detect the effect of each element. By comparing observed behavior to normal rates, Magargle et al. [60] define a "wear rate" to predict maintenance time and to account for faulty conditions (such as when the ABS is disconnected). The DT in this case (as in many others) is an integrated space that enables receiving data from the asset and monitoring and controlling it through the inherent HMI.
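The wear-rate idea can be sketched as follows; the inspection-log format and the linear extrapolation to a service limit are illustrative assumptions, not Magargle et al.'s reduced-order models.

```python
def wear_rate_mm_per_h(thickness_mm, hours):
    """Average pad-thickness loss per operating hour from inspection logs
    (thickness_mm and hours are parallel, chronologically ordered lists)."""
    return (thickness_mm[0] - thickness_mm[-1]) / (hours[-1] - hours[0])

def hours_to_maintenance(current_mm, minimum_mm, rate_mm_per_h):
    """Remaining operating hours before the pad reaches its service limit,
    assuming the wear rate stays roughly constant."""
    if rate_mm_per_h <= 0:
        raise ValueError("wear rate must be positive")
    return max((current_mm - minimum_mm) / rate_mm_per_h, 0.0)
```

A DT would recompute this rate continuously from sensor data and flag a rate spike (e.g., a disconnected ABS) as a faulty condition rather than normal wear.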

Another potential application for DT is the health monitoring of machines and equipment and of their specific parts of interest. Tuegel et al. [61] have used DT to study the structural evolution of an aircraft over its life cycle. They introduce a vision (concept) that, if realized, would make it possible to build an "ultrahigh-fidelity model" generating 1 petabyte of data in a one-hour simulated flight; if this simulation is repeated for 1000 hours, the amount of data available for predictive analysis and for finding the "unpredicted undesirables" would be enormous. Notably, all these analyses happen before the real aircraft's first flight. When the aircraft becomes operational, the same model is used for monitoring, risk analysis, and controlling its physical twin.

Though modeling and computation limitations only allow a portion of Tuegel et al.'s [61] concept to become operational, it still represents enormous progress. As with any simulation, one important issue is that errors and simplifications will affect the results. Still, because a DT is a dynamic system, conducting tests and training it (using data-driven models and ML algorithms) enables it to quantify the errors and give a reliable operational prediction.

The main advantage of DT technology in machine health monitoring is the direct relationship between the twins: one receives the data, stores it, processes it, conducts studies, and finally presents the data as information back to the other, while storing each step of the structural evolution of the system and its components and updating itself (thus avoiding the worst-case-scenario simulations that lead to over-design). The DT also offers the opportunity to use various modeling methods and data types for each part of the same machine and integrates all of them in a unified space (Figure 8) [61].

Figure 8.

Airplane life cycle management schematic using DT [61].

2.3.4 Road maintenance

Road design and maintenance are vital operations at the mine site; even minor defects can severely damage giant machines carrying heavy loads and disrupt their optimal operation [62].

Machl et al. [63] argue that agricultural machines are becoming heavier and faster and that the distances they must travel are growing with farm size. This development has many similarities to conditions at mine sites. To identify existing roads for modernization and to design a needs-based core road network, they use a DT of the cultivated landscape, which contains the necessary spatiotemporal data and analytical tools for monitoring and assessing the impact of each design scenario. The DT is designed to perform state-wide monitoring and analysis of all farm-field transportation roads and their relations. In this case, the main challenge is the distributed and heterogeneous data spread across stakeholders.

Compared to conventional landscape models, a DT requires a broader scope of information, including semantic information about topographic objects (such as agricultural parcels, their crop rotation, farms, and the cultivated area connected to a specific road segment) and the evolution of the landscape over time. Semantic enrichment allows assigning additional information to individual objects, facilitating an integrated view of the cultivated landscape as a complex system. The source of semantic data could be external information systems, real-time sensor readings, or the output of complex analytical methods. Machl et al. [63] introduce methods for estimating agricultural transportation paths, describing the geometry of agricultural parcels, and detecting, analyzing, and documenting changes in the cultivated landscape at the level of individual elements.
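Semantic enrichment of this kind can be sketched as a simple data structure. The schema below is hypothetical (class, field, and source names are invented for illustration, not taken from Machl et al.’s system); it shows how information from different sources attaches to a single topographic object:

```python
from dataclasses import dataclass, field

@dataclass
class RoadSegment:
    """A topographic object enriched with semantic attributes from multiple sources."""
    segment_id: str
    length_m: float
    attributes: dict = field(default_factory=dict)  # crop rotation, connected farms, ...

    def enrich(self, source: str, data: dict):
        """Attach information from an external system, sensor feed, or analysis result."""
        self.attributes[source] = data

# Example: one segment accumulating data from a parcel registry and a traffic model.
seg = RoadSegment("R-042", 850.0)
seg.enrich("parcel_registry", {"connected_parcels": 12, "cultivated_ha": 96.5})
seg.enrich("traffic_model", {"machine_passes_per_year": 430})
```

Keying each attribute set by its source keeps provenance explicit, which matters when the data are distributed and heterogeneous across stakeholders.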

Steyn & Broekman [64] focused on asphalt pavement surface monitoring of the University of Pretoria’s road network using DT and simultaneous localization and mapping with optical and mobile sensors. Their effort consists of micro- and macro-twinning for monitoring road surface temperature and pavement conditions, respectively. They used LiDAR, UAVs, and AI-based traffic-counting systems to quantify road geometry and infrastructure utilization. They also used photogrammetric reconstruction technologies (based on neural networks), proprietary environmental condition sensors, and commercial temperature sensors to acquire the surface texture, environmental conditions, and surface temperature at four locations with different surface materials. The combination of advanced environmental monitoring data, physical data, and surface temperature data provides management information that can assist in maintaining such roads, enabling early remedial action before further deterioration and allowing more appropriate maintenance budget allocations.

Steyn & Broekman [64] also quantified the surface texture and the road’s performance. Pavement roughness, defined as the irregularities of a pavement surface measured over a fixed distance between two points in space, affects long-term vehicle maintenance, operating costs, and fuel consumption. Measuring pavement surface texture is now largely automated, thanks to comparatively accurate laser-based scanning systems.
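As a simple illustration of a profile-based roughness measure (a toy proxy, not the standardized index such as IRI used in road engineering practice), the root-mean-square deviation of sampled elevations from their mean line can be computed as:

```python
import math

def rms_roughness(elevations_mm):
    """Root-mean-square deviation of a sampled profile from its mean line (mm).

    A simple roughness proxy: zero for a perfectly smooth surface, growing
    with the amplitude of the irregularities over the measured distance.
    """
    mean = sum(elevations_mm) / len(elevations_mm)
    return math.sqrt(sum((e - mean) ** 2 for e in elevations_mm) / len(elevations_mm))
```

Laser-scanned profiles feed exactly this kind of computation, sampled at fixed intervals along the measured distance.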

Ormándi et al. [65] consider a scenario in which the details of a vehicle’s suspension are not provided or not accessible. Using a combination of virtual and real sensors to compare real-life and simulated measurements iteratively, they estimate the suspension characteristics. A car model is built in IPG CarMaker using the software’s default settings. A “meta-heuristic optimization tool such as genetic algorithm (GA)” is then used to compare the real-life sensor readings with the virtual measurements (available in the IPG environment) based on a defined cost function. The measurements include the vertical acceleration, velocity, and location coordinates of the car when traveling over a speed bump. Based on the GA results, the virtual model’s suspension parameters are updated until an acceptable threshold is reached.
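The iterative GA loop can be sketched as follows. Everything here is a toy stand-in under stated assumptions: a hypothetical one-parameter “suspension” model and an invented cost function rather than the IPG CarMaker setup. It shows the pattern: compare virtual and measured traces, select, cross over, mutate, and repeat until the virtual parameters reproduce the measurements:

```python
import random

def simulate_accel(damping, bump=1.0, steps=5):
    """Toy stand-in for the virtual vehicle model: damped response to a speed bump."""
    return [bump * (1 - damping) ** k for k in range(steps)]

def cost(candidate, measured):
    """Sum of squared differences between virtual and real sensor traces."""
    sim = simulate_accel(candidate)
    return sum((s - m) ** 2 for s, m in zip(sim, measured))

def ga_estimate(measured, pop_size=20, generations=40, seed=1):
    """Estimate the damping parameter by selection, crossover, and mutation."""
    random.seed(seed)
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: cost(c, measured))
        parents = pop[: pop_size // 2]            # selection (elitist: parents survive)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                 # crossover: average of two parents
            child += random.gauss(0.0, 0.02)      # mutation: small Gaussian perturbation
            children.append(min(max(child, 0.0), 1.0))
        pop = parents + children
    return min(pop, key=lambda c: cost(c, measured))

measured = simulate_accel(0.3)   # pretend these are the real sensor readings
estimate = ga_estimate(measured)
```

With real data, `simulate_accel` would be replaced by a call into the simulation environment and `measured` by the logged acceleration, velocity, and position traces from the speed-bump test.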

For further validation, Ormándi et al. [65] used OpenCRG to create a highly detailed (laser-scanned) road surface and imported it into the IPG CarMaker environment. The results show acceptable agreement with real-life measurements. However, since IPG CarMaker is incapable of real-time synchronization of the car and its model (the exact position of the car cannot be set), they used Unity3D for another round of tests. Importing the OpenCRG files into the Unity3D environment is challenging but achievable; there, the real-time connection between the car and its model is achieved through the location readings. Building on the suspension parameters calculated by the GA, the simulations in the Unity3D environment are also comparable to the actual sensor readings.


3. Conclusion

The Digital Twins (DT) approach holds immense promise for the mining industry. It enables real-time monitoring and modeling of various mining processes, from geotechnical aspects to drilling and blasting, fleet management, and energy efficiency. This technology empowers mining operators to optimize production, reduce delays, and enhance worker safety (Figures 9 and 10).

Figure 9.

Schematic of the proposed mining DT.

Figure 10.

Details of the relationship between physical and digital twins (D&B DT).

Root cause analysis facilitated by DT’s data-driven models provides a powerful tool for identifying and rectifying operational shortcomings. However, implementing DT in mining faces challenges like data management, cybersecurity, integration, communication, and personnel training. Successful adoption will require collaboration among mining companies, technology providers, and stakeholders.

Ongoing research must focus on refining the DT framework, improving data acquisition and storage methods, and enhancing joint extraction algorithms. Additionally, advancements in surface temperature and moisture content acquisition systems using RGB images will benefit bench characterization and slope stability studies.

The ideal DT candidate addresses a specific business challenge or explores a new opportunity; typical candidates involve improving asset performance, reducing downtime, or increasing production or throughput. Assessing DT readiness requires ranking business impact against technical feasibility, considering factors like infrastructure connectivity, data access, appetite for change, and organizational maturity. Technical readiness is assessed through factors like Operational Technology (OT) complexity, IT complexity, analytics, systems complexity, and project readiness. High business impact and high technical readiness are prerequisites for assessing the Minimum Viable Product (MVP). Implementing the DT methodology requires technological capabilities, talent, collaborations, partnerships, alliances, and an organizational structure conducive to agility and innovation. Tools like the Oracle IoT DT framework, Azure DT, IBM DT Exchange, Ansys Twin Builder, and GE’s Predix platform are used for building DTs. The development process must be agile and structured and consider both technical and business impacts. Key considerations include defining the problems addressed by the DT, identifying beneficiaries, stating the unique value proposition (UVP), determining top capabilities, addressing external challenges, creating a business case, specifying key metrics, planning for integration, and assessing costs [4].
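The impact-versus-readiness ranking can be illustrated with a minimal scoring sketch. The candidate names and scores below are invented, and real assessments weight OT/IT complexity, data access, and the other factors separately rather than collapsing them into one readiness number:

```python
def rank_dt_candidates(candidates):
    """Rank DT candidates by business impact x technical readiness (1-5 scales).

    A hypothetical scoring scheme: high scores on both axes are prerequisites
    for moving a candidate toward an MVP assessment.
    """
    return sorted(candidates, key=lambda c: c["impact"] * c["readiness"], reverse=True)

candidates = [
    {"name": "haul-truck health DT", "impact": 5, "readiness": 4},
    {"name": "blast-design DT", "impact": 4, "readiness": 2},
    {"name": "conveyor energy DT", "impact": 3, "readiness": 5},
]
ranked = rank_dt_candidates(candidates)
```

The top-ranked candidate then becomes the subject of the MVP discussion, while low scorers are deferred until their data access or infrastructure improves.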

Ultimately, the potential of Digital Twins technology to revolutionize mining, improve efficiency, reduce costs, and minimize environmental impact cannot be overstated. By embracing DT, mining companies can bolster competitiveness and embrace a sustainable future. Continued research and development efforts are essential to unlock the digital twins’ full potential in the mining industry.

References

  1. Jiang F, Ma L, Broyd T, Chen K. Digital twin and its implementations in the civil engineering sector. Automation in Construction. 2021;130(July):103838. DOI: 10.1016/j.autcon.2021.103838
  2. Zheng X, Lu J, Kiritsis D. The emergence of cognitive digital twin: Vision, challenges and opportunities. International Journal of Production Research. 2022;60(24):7610-7632. DOI: 10.1080/00207543.2021.2014591
  3. Negri E, Fumagalli L, Macchi M. A review of the roles of digital twin in CPS-based production systems. Procedia Manufacturing. 2017;11(June):939-948. DOI: 10.1016/j.promfg.2017.07.198
  4. Nath SV, van Schalkwyk P, Isaacs D. Building Industrial Digital Twins: Design, Develop, and Deploy Digital Twin Solutions for Real-World Industries Using Azure Digital Twins. Packt Publishing; 2021
  5. Söderberg R, Wärmefjord K, Carlson JS, Lindkvist L. Toward a digital twin for real-time geometry assurance in individualized production. CIRP Annals - Manufacturing Technology. 2017;66(1):137-140. DOI: 10.1016/j.cirp.2017.04.038
  6. Duan JG, Ma TY, Zhang QL, Liu Z, Qin JY. Design and application of digital twin system for the blade-rotor test rig. Journal of Intelligent Manufacturing. 2023;34(2):753-769. DOI: 10.1007/S10845-021-01824-W
  7. Kor M, Yitmen I, Alizadehsalehi S. An investigation for integration of deep learning and digital twins towards construction 4.0. Smart and Sustainable Built Environment. 2023;12(3):461-487. DOI: 10.1108/SASBE-08-2021-0148
  8. Ma C, Gui H, Liu J. Self learning-empowered thermal error control method of precision machine tools based on digital twin. Journal of Intelligent Manufacturing. 2023;34(2):695-717. DOI: 10.1007/S10845-021-01821-Z
  9. Ghahramanieisalou M, Sattarvand J. Applications of Digital Twin Technology in Productivity Optimization of Mining Operations. In: APCOM. Rapid City, SD, USA: SME; 2023. pp. 1-17
  10. Alizadehsalehi S, Yitmen I. Digital twin-based progress monitoring management model through reality capture to extended reality technologies (DRX). Smart and Sustainable Built Environment. 2023;12(1):200-236. DOI: 10.1108/SASBE-01-2021-0016/FULL/HTML
  11. Ramu SP, Boopalan P, Pham QV, Maddikunta PKR, Huynh-The T, Alazab M, et al. Federated learning enabled digital twins for smart cities: Concepts, recent advances, and future directions. Sustainable Cities and Society. 2022;79:103663. DOI: 10.1016/J.SCS.2021.103663
  12. Perno M, Hvam L, Haug A. Implementation of digital twins in the process industry: A systematic literature review of enablers and barriers. Computers in Industry. 2022;134:103558. DOI: 10.1016/J.COMPIND.2021.103558
  13. Agrawal A, Fischer M, Singh V. Digital twin: From concept to practice. Journal of Management in Engineering. 2019;38(3):06022001
  14. Li L, Lei B, Mao C. Digital twin in smart manufacturing. Journal of Industrial Information Integration. 2022;26:100289. DOI: 10.1016/J.JII.2021.100289
  15. Autiosalo J, Ala-Laurinaho R, Mattila J, Valtonen M, Peltoranta V, Tammi K. Towards integrated digital twins for industrial products: Case study on an overhead crane. Applied Sciences. 2021;11:683. DOI: 10.3390/app11020683
  16. Botín-Sanabria DM, Mihaita S, Peimbert-García RE, Ramírez-Moreno MA, Ramírez-Mendoza RA, Lozoya-Santos J d. Digital twin technology challenges and applications: A comprehensive review. Remote Sensing. 2022;14:1335. DOI: 10.3390/RS14061335
  17. Grieves M, Vickers J. Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems. In: Kahlen J, Flumerfelt S, Alves A, editors. Transdisciplinary Perspectives on Complex Systems. Cham: Springer; 2017. DOI: 10.1007/978-3-319-38756-7_4
  18. Lee J, Lapira E, Yang S, Kao A. Predictive manufacturing system-trends of next-generation production systems. IFAC Proceedings Volumes. 2013;46(46):150-156. DOI: 10.3182/20130522-3-BR-4036.00107
  19. Schluse M, Rossmann J. From simulation to experimentable digital twins: Simulation-based development and operation of complex technical systems. In: 2016 IEEE International Symposium on Systems Engineering (ISSE). Edinburgh, UK; 2016. pp. 1-6. DOI: 10.1109/SysEng.2016.7753162
  20. Gabor T, Belzner L, Kiermeier M, Beck MT, Neitz A. A simulation-based architecture for smart cyber-physical systems. In: 2016 IEEE International Conference on Autonomic Computing (ICAC), Wuerzburg, Germany. 2016. pp. 374-379. DOI: 10.1109/ICAC.2016.29
  21. Kraft EM. The US air force digital thread/digital twin – Life cycle integration and use of computational and experimental knowledge. In: 54th AIAA Aerospace Sciences Meeting. AIAA 2016-0897. Jan 2016. pp. 1-22. DOI: 10.2514/6.2016-0897
  22. Rasheed A, San O, Kvamsdal T. Digital twin: Values, challenges and enablers from a modeling perspective. IEEE Access. 2020;8:21980-22012. DOI: 10.1109/ACCESS.2020.2970143
  23. Kamble SS, Gunasekaran A, Parekh H, Mani V, Belhadi A, Sharma R. Digital twin for sustainable manufacturing supply chains: Current trends, future perspectives, and an implementation framework. Technological Forecasting and Social Change. 2022;176:121448. DOI: 10.1016/J.TECHFORE.2021.121448
  24. Kapteyn MG, Knezevic DJ, Huynh DBP, Tran M, Willcox KE. Data-driven physics-based digital twins via a library of component-based reduced-order models. International Journal for Numerical Methods in Engineering. 2019;2020:1-18. DOI: 10.1002/nme.6423
  25. Kapteyn M, Willcox KE. Predictive digital twins: Where dynamic data-driven learning meets physics-based modeling. In: Darema F, Blasch E, Ravela S, Aved A, editors. Dynamic Data Driven Applications Systems. DDDAS 2020. Lecture Notes in Computer Science. Vol. 12312. Cham: Springer; 2020. pp. 1-10. DOI: 10.1007/978-3-030-61725-7_1
  26. Wang Y, Xu R, Zhou C, Kang X, Chen Z. Digital twin and cloud-side-end collaboration for intelligent battery management system. Journal of Manufacturing Systems. 2022;62:124-134. DOI: 10.1016/J.JMSY.2021.11.006
  27. Kane GC, Palmer D, Philips Nguyen A, Kiron D, Buckley N. Strategy, Not Technology, Drives Digital Transformation. No. 57181. MIT Sloan Management Review & Deloitte; 14 Jul 2015. Available from: http://sloanreview.mit.edu/projects/strategy-drives-digital-transformation/
  28. Grieves MW. Virtually intelligent product systems: Digital and physical twins. Complex Systems Engineering: Theory and Practice. 2019;256:175-200
  29. Grieves M. Digital Twin: Manufacturing Excellence through Virtual Factory Replication. White Paper. 2014;1:1-7
  30. Read J, Stacey P. Guidelines for Open Pit Slope Design. Clayton: SCIRO Publishing; 2010. DOI: 10.1071/9780643101104
  31. Choi SH, Park KB, Roh DH, Lee JY, Mohammed M, Ghasemi Y, et al. An integrated mixed reality system for safety-aware human-robot collaboration using deep learning and digital twin generation. Robotics and Computer-Integrated Manufacturing. 2022;73:102258. DOI: 10.1016/J.RCIM.2021.102258
  32. Aheleroff S, Xu X, Zhong RY, Lu Y. Digital twin as a service (DTaaS) in industry 4.0: An architecture reference model. Advanced Engineering Informatics. 2021;47(August 2020):101225. DOI: 10.1016/j.aei.2020.101225
  33. Zhao L, Bi Z, Hawbani A, Yu K, Zhang Y, Guizani M. ELITE: An intelligent digital twin-based hierarchical routing scheme for Softwarized vehicular networks. IEEE Transactions on Mobile Computing. 1 Sep 2023;22(9):5231-5247. DOI: 10.1109/TMC.2022.3179254
  34. Norris JE, Stokes A, Mickovski SB, Cammeraat E, van Beek R, Nicoll BC, et al. Slope Stability and Erosion Control: Ecotechnological Solutions. Dordrecht: Springer; 2008. DOI: 10.1007/978-1-4020-6676-4
  35. Wyllie DC. Rock Fall Engineering. 1st ed. CRC Press; 2015. DOI: 10.1201/b17470
  36. Battulwar R. Automatic Extraction of Joint Characteristics from Rock Mass Surface Point Cloud Using Deep Learning. Reno: University of Nevada; 2021
  37. Peik B. A New Three-Dimensional Rockfall Trajectory Simulator for Open-Pit Mines. Reno: University of Nevada; 2020
  38. Fan C, Zhang C, Yahja A, Mostafavi A. Disaster City digital twin: A vision for integrating artificial and human intelligence for disaster management. International Journal of Information Management. 2021;56(December 2019):102049. DOI: 10.1016/j.ijinfomgt.2019.102049
  39. Losier LM, Fernandes R, Tabarro P, Braunschweig F. The Importance of Digital Twins for Resilient Infrastructure. Vol. 102019. Available from: https://cdn2.webdamdb.com/md_A6HafPVAhHf0.jpg.pdf
  40. Kamari M, Ham Y. AI-based risk assessment for construction site disaster preparedness through deep learning-based digital twinning. Automation in Construction. 2022;134(July 2021):104091. DOI: 10.1016/j.autcon.2021.104091
  41. Joshi DR, Eustes AW III, Rostami J. Testing lunar regolith characterization algorithms with simulated subsurface samples and digital twin data. In: 52nd Lunar and Planetary Science Conference. 2021 (LPI Contrib. No. 2548)
  42. Gomez Llerena JA, Ghahramanieisalou M, Sattarvand J. Tackling geotechnical risks in tailings dams using high-resolution UAV imaging and advanced image processing. In: Geo-Risk 2023: Innovation in Data and Analysis Methods, Arlington, Virginia, USA. ASCE; 2023. pp. 220-228. DOI: 10.1061/9780784484975.024
  43. Lu R, Brilakis I. Digital twinning of existing reinforced concrete bridges from labelled point clusters. Automation in Construction. 2019;105(February):102837. DOI: 10.1016/j.autcon.2019.102837
  44. Shim CS, Dang NS, Lon S, Jeon CH. Development of a bridge maintenance system for prestressed concrete bridges using 3D digital twin model. Structure and Infrastructure Engineering. 2019;15(10):1319-1332. DOI: 10.1080/15732479.2019.1620789
  45. Zhang H, Wang R, Wang C. Monitoring and warning for digital twin-driven mountain geological disaster. In: 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China. 2019. pp. 502-507. DOI: 10.1109/ICMA.2019.8816292
  46. Ford DN, Wolf CM. Smart cities with digital twin systems for disaster management. Journal of Management in Engineering. 2020;36(4):04020027. DOI: 10.1061/(asce)me.1943-5479.0000779
  47. Gokhale BV. Rotary Drilling and Blasting in Large Surface Mines. 1st ed. CRC Press; 2010. DOI: 10.1201/b10972
  48. Steir JF. ISEE Blasters’ Handbook. 18th ed. International Society of Explosives Engineers (ISEE); 2011. Available from: https://books.google.com/books?id=Jr-gmgEACAAJ
  49. Hustrulid WA. Blasting Principles for Open Pit Mining. Vol. 1. A. A. Balkema; 1999. Available from: https://books.google.com/books?id=bc2YXfXvmh4C
  50. Malik AA, Bilberg A. Digital twins of human robot collaboration in a production setting. Procedia Manufacturing. 2018;17:278-285. DOI: 10.1016/j.promfg.2018.10.047
  51. Kousi N, Gkournelos C, Aivaliotis S, Lotsaris K, Bavelos AC, Baris P, et al. Digital twin for designing and reconfiguring human-robot collaborative assembly lines. Applied Sciences. 2021;11:4620. DOI: 10.3390/app11104620
  52. Stączek P, Pizoń J, Danilczuk W, Gola A. A digital twin approach for the improvement of an autonomous mobile robots (AMR’s) operating environment—A case study. Sensors. 2021;21:7830. DOI: 10.3390/s21237830
  53. Malik AA, Brem A. Man, Machine and Work in a Digital Twin Setup: A Case Study. 2020. Available from: http://arxiv.org/abs/2006.08760
  54. Ayani M, Ganebäck M, Ng AHC. Digital twin: Applying emulation for machine reconditioning. Procedia CIRP. 2018;72:243-248. DOI: 10.1016/j.procir.2018.03.139
  55. Guivarch D, Mermoz E, Marino Y, Sartor M. Creation of helicopter dynamic systems digital twin using multi-body simulations. CIRP Annals. 2019;68(1):133-136. DOI: 10.1016/j.cirp.2019.04.041
  56. Okita T, Kawabata T, Murayama H, Nishino N, Aichi M. A new concept of digital twin of artifact systems: Synthesizing monitoring/inspections, physical/numerical models, and social system models. Procedia CIRP. 2019;79:667-672. DOI: 10.1016/j.procir.2019.02.048
  57. Savolainen J, Urbani M. Maintenance optimization for a multi-unit system with digital twin simulation: Example from the mining industry. Journal of Intelligent Manufacturing. 2021;32(7):1953-1973. DOI: 10.1007/s10845-021-01740-z
  58. Almeaibed S, Al-Rubaye S, Tsourdos A, Avdelidis NP. Digital twin analysis to promote safety and security in autonomous vehicles. IEEE Communications Standards Magazine. 2021;5(1):40-46. DOI: 10.1109/MCOMSTD.011.2100004
  59. Zhang Z, Zou Y, Zhou T, Zhang X, Xu Z. Energy consumption prediction of electric vehicles based on digital twin technology. World Electric Vehicle Journal. 2021;12(4):1-13. DOI: 10.3390/wevj12040160
  60. Magargle R, Johnson L, Mandloi P, Davoudabadi P, Kesarkar O, Krishnaswamy S, et al. A simulation-based digital twin for model-driven health monitoring and predictive maintenance of an automotive braking system. In: Proceedings of the 12th International Modelica Conference, Prague, Czech Republic. 15-17 May 2017. DOI: 10.3384/ecp1713235
  61. Tuegel EJ, Ingraffea AR, Eason TG, Spottswood SM. Reengineering aircraft structural life prediction using a digital twin. International Journal of Aerospace Engineering. 2011;2011:14. Article ID 154798. DOI: 10.1155/2011/154798
  62. Marais WJ, Thompson RJ, Visser AT. Managing mine road maintenance interventions using mine truck on-board data. In: The International Conference on Surface Mining 2008 ‘Challenges, Technology, Systems and Solutions’. Johannesburg, South Africa: The Southern African Institute of Mining and Metallurgy; 2008. pp. 3-14
  63. Machl T, Donaubauer A, Kolbe TH. Planning agricultural core road networks based on a digital twin of the cultivated landscape. Journal of Digital Landscape Architecture. 2019;2019(4):316-327. DOI: 10.14627/537663034
  64. Steyn WJ, Broekman A. Development of a digital twin of a local road network: A case study. Journal of Testing and Evaluation. 2021;51(1):20210043. DOI: 10.1520/jte20210043
  65. Ormándi T, Varga B, Tettamanti T. Estimating vehicle suspension characteristics for digital twin creation with genetic algorithm. Periodica Polytechnica Transportation Engineering. 2021;49(3):231-241. DOI: 10.3311/PPTR.18576
