3D parallel surface-borehole TEM forward modeling with multiple meshes

Chong Liu, LiZhen Cheng, Bahman Abbassi
Université du Québec en Abitibi-Témiscamingue, 445 boul. de l'Université, Rouyn-Noranda, QC J9X5E4, Canada

Article history: Received 18 March 2019; Received in revised form 15 November 2019; Accepted 27 November 2019; Available online 28 November 2019

Abstract

This paper presents high-performance computing schemes developed for surface-borehole TEM forward modeling. The parallelization starts from the Message Passing Interface (MPI), which allows individual processors to exchange messages with other processors. Open Multi-Processing (OpenMP) is then used to avoid the time consumption resulting from communication between processors. To combine their advantages, a hybrid MPI/OpenMP parallel program is developed, which reduces memory usage, load imbalance, and communication costs. The 3D forward modeling supports multiple meshes, so that the different frequency ranges used to convert the frequency domain to the time domain can have different meshes and model dimensions; the computational cost is therefore reduced by compressing the number of mesh elements. In the tests, computing performance is enhanced by a speedup of about 13 times while the resolution of the 3D model is improved.

© 2019 Elsevier B.V. All rights reserved.

Keywords: Surface-borehole TEM forward modeling; Edge-based finite element; Parallelization; Multiple meshes; Computing performance

1. Introduction

With the requirements of deep mineral exploration, efficient time-domain electromagnetic (TEM) measurement techniques in drilling have been developed for prospecting deep metallic deposits. In a TEM survey, a transmitter loop emits a primary EM field using a specific source waveform. The propagating primary field interacts with rocks and generates a secondary EM field around underground conductors.
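The hybrid MPI/OpenMP scheme summarized in the abstract can be pictured as a two-level work partition: frequencies distributed across MPI ranks (processes), and, within each rank, survey stations split across OpenMP-style threads. The following minimal sketch illustrates only that partitioning logic; the names (`hybrid_plan`, `n_ranks`, `n_threads`) and the example frequency list are assumptions for illustration, not identifiers or values from the authors' implementation.

```python
# Sketch of a hybrid two-level work partition: frequencies go to MPI
# ranks (coarse level); within each rank, survey stations are split
# across OpenMP-style threads (fine level). Illustrative only.

def partition(items, n_parts):
    """Split `items` into n_parts contiguous, nearly equal chunks."""
    base, extra = divmod(len(items), n_parts)
    chunks, start = [], 0
    for p in range(n_parts):
        size = base + (1 if p < extra else 0)  # spread the remainder
        chunks.append(items[start:start + size])
        start += size
    return chunks

def hybrid_plan(frequencies, stations, n_ranks, n_threads):
    """Return, per MPI rank, its frequency subset and per-thread station chunks."""
    plan = {}
    for rank, freqs in enumerate(partition(frequencies, n_ranks)):
        plan[rank] = {
            "frequencies": freqs,                    # coarse MPI level
            "threads": partition(stations, n_threads),  # fine OpenMP level
        }
    return plan

if __name__ == "__main__":
    freqs = [1, 3, 10, 30, 100, 300, 1000]  # Hz, hypothetical decades
    stations = list(range(20))              # 20 hypothetical survey stations
    for rank, work in hybrid_plan(freqs, stations, n_ranks=3, n_threads=4).items():
        print(rank, work["frequencies"], [len(c) for c in work["threads"]])
```

Splitting the coarse-grained frequency loop across processes and the fine-grained station loop across shared-memory threads is one common way such a hybrid design limits both inter-process communication and per-process memory, consistent with the benefits the abstract lists.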
Three-dimensional (3D) numerical modeling of TEM data aims to simulate this induction phenomenon and reconstruct a physical property model in the form of a 3D conductivity distribution. Surface EM measurements integrated with borehole data sets allow acquiring information about the 3D distribution of the electrical conductivity of the subsurface. The key question is how to extract useful geological information from these EM observations. This is a form of the inverse problem, in which one aims to recover underground physical properties (conductivities) from surface EM measurements. Conventionally, the data inversion is based on iterative forward modeling through least-squares methods to reduce the misfit between measured and simulated EM data. Therefore, a fast and accurate 3D forward modeling algorithm helps to develop an efficient 3D inversion code. For 3D forward modeling, finite element methods (FEM) can take into account complex geological environments by discretizing the earth model into polyhedrons. Node-based finite element methods (Jin, 2002; Um et al., 2012) and edge-based finite element methods (Nédélec, 1980; Graglia et al., 1997; Midtgård, 1997; Li, 2002; Ilic and Notaros, 2003; Sugeng and Raiche, 2004; Sun and Nie, 2008; Da Silva et al., 2012) are popular numerical methods in EM forward modeling because they discretize complicated topography and irregular shapes well and achieve high accuracy. However, the more complex the underground environment, the more time-consuming the FEM calculations become. This cost arises from several factors: the large number of cells in the FEM mesh, the multiple frequencies used in the forward modeling, the assembly of the stiffness matrix for solving the secondary field, the solution of large matrix equations, and the computation of the secondary field at a large number of survey stations.
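The iterative inversion loop described above, in which forward modeling is run repeatedly to reduce a least-squares misfit between measured and simulated data, can be sketched on a toy problem. Here a small linear operator `G` stands in for the (far more expensive) 3D TEM forward simulation, and a plain gradient-descent update stands in for whatever least-squares scheme a production code would use; none of this is the paper's actual algorithm.

```python
import numpy as np

# Toy least-squares inversion driven by repeated forward modeling.
# G stands in for the 3D TEM forward operator, m for conductivities,
# d_obs for observed data. Schematic only, not the paper's method.

def forward(G, m):
    """Forward modeling: predict data from model parameters."""
    return G @ m

def invert(G, d_obs, n_iter=500, step=0.1):
    """Gradient descent on the misfit ||d_obs - G m||^2."""
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        residual = d_obs - forward(G, m)  # measured minus simulated
        m += step * G.T @ residual        # step against the gradient
    return m

# A small, well-conditioned test operator (illustrative values).
G = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.3],
              [0.0, 0.2, 1.0]])
m_true = np.array([1.0, -0.5, 2.0])   # "true" subsurface model
d_obs = forward(G, m_true)            # synthetic noise-free data
m_est = invert(G, d_obs)
```

The point of the sketch is the cost structure: every iteration calls `forward` at least once, so in 3D the expense of a single FEM forward solve multiplies across all inversion iterations, which is why the paper targets fast forward modeling.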
Computational cost (time and memory) can be efficiently reduced by reducing the number of mesh elements, but this strategy leads to lower resolution in the EM data simulation and consequently lower-quality physical property models during inverse modeling. Parallel computation is an alternative solution and has been successfully applied in 3D marine controlled-source EM data simulation (Puzyrev et al., 2013; Cai et al., 2015; Reyes et al., 2015), 2D/3D magnetotelluric (MT) forward modeling and inversion (Tan et al., 2006; Wang et al., 2015), 3D long-offset transient electromagnetic field simulation (Commer and Newman, 2004), and 3D airborne TEM data inversion (Haber and Schwarzbach, 2014). Two available platforms for parallel programming are the graphics processing unit (GPU) and the central processing unit (CPU) (Grama et al., 2003; Wilkinson and Allen, 2004). GPU has more than hundreds of

Journal of Applied Geophysics 172 (2020) 103916
Corresponding author. E-mail addresses: chong.liu@uqat.ca (C. Liu), lizhen.cheng@uqat.ca (L. Cheng), bahman.abbassi@uqat.ca (B. Abbassi).
https://doi.org/10.1016/j.jappgeo.2019.103916
0926-9851/© 2019 Elsevier B.V. All rights reserved.