Technobius, 2026, 6(1), 0097, DOI: https://doi.org/10.54355/tbus/6.1.2026.0097

Technobius, https://technobius.kz/, e-ISSN 2789-7338

Review

A review of the current state of 3D subsurface cartography for geotechnical ground characterization

 

Nurgul Alibekova1,2, Aleksej Aniskin3, Kairat Mukhambetkaliev4, Indira Makasheva1,2,*

 

1Solid Research Group, LLP, Astana, Kazakhstan

2Department of Civil Engineering, L.N. Gumilyov Eurasian National University, Astana, Kazakhstan

3Department of Civil Engineering, University North, Varaždin, Croatia

4JSC “Kazakhstan Road Research Institute”, Astana, Kazakhstan

*Correspondence: indira.yergaliyeva@gmail.com

 

 

Abstract. Three-dimensional subsurface mapping is increasingly used to turn ground investigation evidence into continuous 3D models of soil layers and geotechnical properties that support decisions in ground characterization. This review summarizes the current state of 3D mapping for geotechnical purposes using a citation-guided selection based on Litmaps, resulting in a curated set of 27 studies. The literature is organized into a taxonomy aligned with the Litmaps clustering and grouped into six approach families: implicit or potential-field structural modeling, voxel models and voxel workflows, uncertainty-oriented stochastic and inversion or segmentation approaches, automation and urban-scale pipelines, geophysics-driven 3D modeling, and interoperability and engineering delivery. The comparative analysis shows that practical outcomes depend not only on the final model type but also on workflow design, including data harmonization, modeling rules, and update procedures. Credibility depends on validation methods matched to the target variable and on uncertainty outputs that show where the model is strongly supported by data and where it relies on extrapolation. The review also highlights model comparability as a key operational issue, since mismatches between neighboring models can arise from inconsistent conceptual rules, discretization differences, uneven data density, and asymmetric auxiliary constraints. The paper concludes with an applicability framework, including a decision flow and compact criteria, to guide selection, evaluation, and reporting of decision-grade 3D subsurface mapping products for geotechnical ground characterization.

Keywords: 3D subsurface mapping, geotechnical ground characterization, voxel models, implicit structural modeling, uncertainty and validation.

 

1. Introduction

 

Urban development increasingly depends on reliable knowledge of the shallow subsurface, yet the ground remains one of the least directly observable components of the built environment. Standard site-investigation outputs (borehole logs, laboratory tests, and a limited number of interpretive cross-sections) are indispensable, but they are not, by themselves, designed to represent spatial continuity and variability in 3D, nor to communicate uncertainty as an explicit “map product”. Geotechnical properties are well known to exhibit substantial natural variability; random-field perspectives emphasize that this variability can be concisely characterized by parameters such as the coefficient of variation and correlation structure (scale of fluctuation), which implies that sparse point sampling may miss meaningful lateral changes [1]. The need for improved subsurface characterization is amplified in urban settings where subsurface conditions evolve under interacting natural and anthropogenic drivers. For instance, leakage from underground water and drainage pipelines is widely recognized as a contributor to soil erosion, voiding, sinkhole formation, and ground-collapse risk, motivating stronger monitoring and mapping practices for near-surface strata [2]. In parallel, the broader geotechnical literature has increasingly framed modeling as only one part of decision-making, emphasizing that the principal obstacle to effective application is often the lack of explicit, transparent accounting for uncertainty (covering soil properties, stratification, and model performance) rather than the absence of computational tools [3]. Against this background, 3D subsurface cartography for geotechnical purposes can be defined as the set of methods that transform heterogeneous subsurface observations (e.g., boreholes and complementary information) into spatially continuous, explorable, and visualizable 3D representations of soil stratigraphy, together with uncertainty where possible.
In this sense, “cartography” is not merely a 3D visualization; it is a structured representation that supports repeatable cartographic operations: slicing cross-sections, extracting virtual boreholes, generating volumes and surfaces, and communicating confidence in a form that is interpretable by multiple stakeholders. Early developments in attributed 3D geological modeling explicitly highlighted the value of 3D models of the shallow subsurface as an organizing framework for disparate data and interpretations, bridging conceptual models and decision-support needs [4]. However, the move from 2D documentation to 3D cartographic products introduces distinct methodological and practical challenges. Subsurface datasets are typically sparse, clustered, and inconsistent in classification and quality; resulting 3D models are sensitive to interpretive assumptions and parameter choices. More fundamentally, geological and geotechnical models contain both aleatory components (natural variability) and epistemic components (limited knowledge and interpretation uncertainty). Evidence-based treatments of geological model confidence and uncertainty emphasize that credibility depends on making these uncertainties visible and traceable, rather than embedding them implicitly in a single deterministic model [5]. This becomes especially important when 3D subsurface maps must be compared or integrated across neighboring sites (e.g., urban blocks or infrastructure corridors), where inconsistent conceptual assumptions and boundary conditions can lead to discontinuities and mismatches that undermine trust in the mapped products. Over the last two decades, 3D subsurface modeling has expanded from specialist geological modeling toward broader urban and engineering applications. 
Studies coupling 3D geological models to urban underground-space evaluation illustrate how 3D representations can directly influence planning-oriented assessments, while also implicitly stressing the dependence of such applications on the quality and credibility of the underlying models [6]. Yet, despite rapid progress, the relevant literature remains distributed across engineering geology, geophysics, GIS, and computational modeling communities. As a result, readers seeking to use 3D subsurface maps for geotechnical characterization face practical questions that are not consistently answered: what constitutes a “decision-grade” 3D subsurface cartographic product, which approach families are best suited for which data regimes and scales, how uncertainty should be evaluated and communicated, and how interoperability and updateability should be treated when models become living datasets rather than static figures.

This review therefore consolidates the current state of 3D subsurface cartography for geotechnical purposes and synthesizes how modeling choices translate into the credibility, usability, and applicability of subsurface information. The objectives of this review are to: (1) develop a taxonomy of major approach families used for 3D subsurface mapping in geotechnical contexts and compare their representations and inference strategies; (2) synthesize how different approaches transform available observations into 3D stratigraphic and/or property models; (3) critically analyze validation practices and uncertainty treatment, emphasizing how credibility is established and communicated; (4) examine comparability and integration challenges, including the practical issue of mismatches between neighboring 3D models along shared boundaries; and (5) translate the evidence into practical guidance (an applicability matrix and reporting checklist) to support rigorous selection, evaluation, and communication of 3D subsurface cartographic products for geotechnical subsurface characterization.

 

2. Methods

 

This review was designed as a targeted synthesis of the current state of 3D subsurface cartography for geotechnical purposes, with emphasis on subsurface information products and their applicability for ground characterization. The methodological workflow follows a transparent evidence synthesis logic for identifying, screening, and including sources, consistent with established reporting guidance for systematic reviews [7].

The literature set was constructed using Litmaps as a citation-guided discovery environment, where relationships among candidate papers were explored through forward and backward citation links while simultaneously viewing publication recency and citation prominence in the network (Figure 1). This supported both coverage of influential foundational contributions and identification of more recent clusters that represent active research directions.

 

Figure 1 – Litmaps citation network of the 27 selected papers, showing citation links and the axes of publication recency and citation count

 

Starting from seed papers relevant to 3D subsurface mapping, the Litmaps network was iteratively expanded by adding candidate studies connected through citation links. The expansion was continued until additional inclusions became repetitive or fell outside the defined scope. A two-stage screening process was applied. First, titles and abstracts were screened to remove studies clearly unrelated to 3D subsurface cartography for geotechnical characterization. Second, full texts were screened to confirm that each retained study contributed directly to at least one of the following dimensions: representation of the subsurface in 3D, inference engines used to estimate layers and/or properties, uncertainty and validation practice, data fusion, automation and scalability, or interoperability and delivery of models for downstream use.

To enable consistent cross-paper comparison across the engineering geology, geophysics, GIS, and computational modeling communities, each included paper was coded using a structured extraction form. The extracted information captured the conditioning data used for model building, including boreholes and logs, laboratory or in situ measurements, geological maps and cross-sections, geophysical constraints, and prior conceptual rules, together with the target variables represented in the model, such as stratigraphic interfaces, lithological classes, and continuous property fields. The coding also recorded the 3D representation type, the inference engine used for estimation or classification, how discontinuities such as faults or unconformities were handled when relevant, the validation strategy and reported metrics, the presence and form of uncertainty quantification and communication, and the final deliverables, including file formats, GIS or web visualization, database structures, and pathways for BIM- or FE-oriented use. In addition, explicit limitations and future-work statements were extracted verbatim and later grouped into recurring themes to support a gap analysis grounded in author-reported evidence.
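The structured extraction form described above can be pictured as a simple record type. The following Python sketch is purely illustrative: the field names paraphrase the coded dimensions and do not represent an actual schema used in this review.

```python
# Hypothetical encoding of the extraction form as a record type.
# Field names paraphrase the coded dimensions; this is not an
# actual schema from the review, only an illustration.
from dataclasses import dataclass, field

@dataclass
class PaperRecord:
    ref_id: int                     # reference number, e.g. [13]
    approach_family: str            # one primary tag (Section 2)
    conditioning_data: list = field(default_factory=list)
    target_variables: list = field(default_factory=list)
    representation: str = ""        # e.g. "regular 3D grid"
    inference_engine: str = ""      # e.g. "interpolation + rules"
    validation: str = ""            # strategy and reported metrics
    uncertainty_output: str = ""    # form of uncertainty, if any
    deliverables: list = field(default_factory=list)
    limitations: str = ""           # author-reported, verbatim

# Example record for a voxel-workflow paper (values illustrative)
rec = PaperRecord(ref_id=13, approach_family="voxel",
                  conditioning_data=["boreholes", "logs"],
                  representation="regular 3D grid")
```

Structuring the coding this way makes the later cross-paper comparison a matter of filtering and grouping records rather than re-reading papers.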

To align the review structure with the Litmaps [8] visualization (Figure 1), each paper was assigned one primary tag corresponding to one of six approach families: geophysics-driven 3D modeling that is AEM- or resistivity-led (teal-green legend), interoperability and engineering delivery including GIS, BIM, and FE-ready models (purple legend), automation and urban-scale pipelines (red legend), uncertainty-oriented approaches including stochastic modeling and inversion or segmentation (blue legend), voxel models and voxel workflows (green legend), and implicit or potential-field structural modeling (orange legend). The "one paper, one primary tag" rule was used to preserve the readability of the Litmaps figure, while cross-cutting characteristics were captured through the extracted coding fields and are discussed comparatively in the synthesis.

The synthesis proceeds by first constructing a taxonomy that compares representations and inference engines across the six approach families, then analyzing workflow level differences in data regimes, automation level, scalability, and delivery, and finally evaluating credibility through validation and uncertainty practice with specific attention to comparability across neighboring models and boundary mismatch issues. Recurring limitations and future work themes across the 27 papers are then consolidated into an applicability matrix and a reporting checklist intended to support transparent selection, evaluation, and communication of 3D subsurface cartographic products for geotechnical subsurface characterization.

 

3. Taxonomy of 3D subsurface cartography approaches

 

This section establishes the taxonomy used in this review and links it directly to the six approach families defined in the Litmaps-based tagging scheme presented in Figure 1. The purpose of the taxonomy is not only to classify methods by their technical implementation, but also to compare them in terms of the type of subsurface information they produce and the extent to which those products are suitable for geotechnical ground characterization. Across the 27 reviewed papers, the strongest distinctions are observed in the way the subsurface is represented in 3D, the inference engine used to estimate geometry and/or properties, and the intended delivery format for downstream use. These distinctions are evident across the implicit structural modeling line (for example [9], [10], [11], [12]), the voxel-model line (for example [13], [14], [15], [16], [17], [18], [19]), uncertainty and stochastic approaches (for example [20], [21], [22], [23], [24]), automation and urban-scale pipelines (for example [25], [26], [27], [28], [29]), geophysics-driven modeling (for example [30], [31], [32]), and interoperability or engineering-delivery focused studies (for example [33], [34], [35]).

To make the comparison explicit and reusable in the later sections, the taxonomy is summarized in Table 1, which organizes the reviewed literature by approach family, dominant representation, common inference logic, typical outputs, and principal geotechnical relevance.

 

Table 1 – Taxonomy of cartography approaches and their geotechnical information products

No. | Approach family | Typical representation | Dominant inference / modeling logic | Typical input data regime | Typical output products | Main geotechnical relevance
1 | Implicit or potential-field structural modeling [9], [10], [11], [12] | Continuous scalar or vector fields, implicit surfaces, structural surfaces | Structural interpolation constrained by interfaces, orientations, stratigraphic rules, and topological relations | Borehole contacts, interpreted interfaces, orientations, geological rules, sometimes auxiliary sections | Stratigraphic surfaces, solids, structural frameworks, exportable meshes | Strong for coherent layer geometry and structural continuity; useful when stratigraphic architecture is the primary target
2 | Voxel models and voxel workflows [13], [14], [15], [16], [17], [18], [19] | Regular 3D grids (voxels), categorical and/or continuous volumetric fields | Interpolation, classification, rule-based assignment, gridding, voxel population and refinement | Boreholes and logs, property values, lithology classes, open borehole databases, optional geophysical constraints | Lithology volumes, property volumes, 3D queries, cross-sections, statistics, web-ready blocks | Strong for volumetric querying, spatial statistics, and practical GIS workflows; useful for multi-attribute subsurface mapping
3 | Uncertainty, stochastic, inversion and segmentation approaches [20], [21], [22], [23], [24] | Multiple realizations, probabilistic class fields, uncertainty volumes, stochastic structures | Geostatistical estimation, stochastic simulation, Bayesian or likelihood-based inference, inversion and segmentation frameworks | Boreholes, geophysics, priors, training labels, model constraints, synthetic benchmarks | Uncertainty maps or volumes, probability of class or interface, ensembles, confidence indicators | Essential for credibility assessment, risk-aware interpretation, and transparent communication of model confidence
4 | Automation and urban-scale pipelines [25], [26], [27], [28], [29] | Mixed representations, often workflow-centric and conversion-based | Automated processing chains, database-driven model generation, rule-guided reconstruction, GIS geoprocessing | Large borehole archives, heterogeneous logs, urban datasets, standardized inputs, workflow scripts | Scalable 3D urban subsurface models, repeatable pipelines, batch products, updateable datasets | Important for operational deployment at city or corridor scale and for repeatable mapping under practical constraints
5 | Interoperability and engineering delivery (GIS-BIM, FE-ready) [33], [34], [35] | Exchange-ready 3D ground models, converted analytical grids and meshes, integrated information structures | Model translation, schema alignment, geometric conversion, data exchange and delivery workflows | Existing geological or geotechnical models plus target platform requirements (GIS, BIM, FE) | FE-ready models, GIS/BIM integrated datasets, transferable ground information objects | Critical for practical use of subsurface models in engineering workflows and digital project environments
6 | Geophysics-driven 3D modeling (AEM- or resistivity-led) [30], [31], [32] | Voxel or layered geophysical-geological models, interpreted geophysical volumes | Geophysical inversion and interpretation, cognitive geological interpretation, integration of geophysics with boreholes and prior geology | AEM, resistivity, geophysical sections or volumes, calibration boreholes, geological interpretation | Large-area 3D geological models with stronger areal coverage than borehole-only workflows | Particularly valuable for extending spatial coverage and constraining regional to corridor-scale subsurface structure

 

To complement the taxonomy, Figure 2 presents representative examples of 3D subsurface products associated with the approach families. The purpose of this gallery is not to compare visual quality across studies, but to illustrate the different forms of outputs that are ultimately delivered to users, including structural surfaces, voxel products, uncertainty representations, and engineering-oriented models.

 

a) Approach family No. 1 [12]; b) Approach family No. 2 [13]; c) Approach family No. 3 [22]; d) Approach family No. 4 [29]; e) Approach family No. 5 [35]; f) Approach family No. 6 [32]

Figure 2 – Representative gallery of typical 3D subsurface products across the approach families

 

The taxonomy in Table 1, together with the representative output gallery in Figure 2, shows that the reviewed studies are not separated by a single methodological boundary. Instead, they differ across three intersecting dimensions. The first dimension is the representation of the subsurface, which ranges from surface-based structural frameworks to full volumetric discretization. The second dimension is the inference strategy, which ranges from deterministic and rule-based interpolation to explicitly probabilistic and stochastic formulations. The third dimension is the intended destination of the model, which ranges from scientific interpretation and visualization to engineering delivery through GIS, BIM, or finite-element compatible formats. This is why some papers sit close together in the Litmaps network while still serving different roles in practice, for example [27] and [28] in automation-oriented workflows, or [34] and [35] in interoperability and engineering delivery.

A key implication of this taxonomy is that no single approach family is universally optimal for geotechnical subsurface characterization.

Implicit structural models are often advantageous when the principal task is reconstruction of stratigraphic architecture with strong structural coherence ([9], [10], [11], [12]), whereas voxel workflows are often more suitable when the objective includes volumetric statistics, grid-based spatial querying, and property field analysis across many attributes ([13], [17], [18], [19]).
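The grid-based spatial querying that makes voxel workflows attractive can be illustrated with a minimal sketch. The following Python example uses invented data and a deliberately simple dictionary-based voxel store (not drawn from any reviewed study) to extract a "virtual borehole", one of the standard cartographic operations mentioned in the Introduction.

```python
# Minimal illustration (invented data): a voxel model as a dict
# mapping (ix, iy, iz) grid indices to lithology class codes.
# A "virtual borehole" is the vertical stack of classes at one
# plan position, merged into depth intervals.

def virtual_borehole(voxels, ix, iy, nz, dz, z_top=0.0):
    """Return (depth_top, depth_bottom, class) intervals at column
    (ix, iy), merging vertically adjacent voxels of the same class."""
    intervals = []
    for iz in range(nz):
        cls = voxels.get((ix, iy, iz))      # None where no data
        top = z_top + iz * dz
        if intervals and intervals[-1][2] == cls:
            # extend the previous interval downward
            intervals[-1] = (intervals[-1][0], top + dz, cls)
        else:
            intervals.append((top, top + dz, cls))
    return intervals

# Toy single-column model: 2 m of fill over 3 m of clay (0.5 m voxels)
model = {}
for iz in range(10):
    model[(0, 0, iz)] = "fill" if iz < 4 else "clay"

log = virtual_borehole(model, 0, 0, nz=10, dz=0.5)
# log -> [(0.0, 2.0, "fill"), (2.0, 5.0, "clay")]
```

Real voxel platforms add coordinate transforms, multiple attributes per cell, and confidence flags, but the query logic reduces to this kind of column extraction.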

Uncertainty-oriented approaches are indispensable when confidence reporting and risk-aware interpretation are central ([20], [22], [23], [24]), but they may require higher methodological overhead and clearer communication protocols for practical adoption.

Geophysics-driven models can substantially improve areal continuity in settings where boreholes are sparse ([30], [31], [32]), while interoperability-focused studies highlight that even technically strong 3D models can remain underused unless they are deliverable in formats and schemas compatible with engineering workflows ([33], [34], [35]).

The taxonomy also clarifies the role of automation as a separate family rather than merely a feature of other methods. In the reviewed corpus, automation and urban-scale pipeline studies emphasize repeatability, throughput, and updateability under real-world data heterogeneity ([25], [26], [27], [28], [29]). This makes them especially relevant for municipal databases, corridor projects, and iterative mapping contexts where new boreholes or revised logs are added over time. At the same time, automation does not eliminate the need for human geological judgement. Instead, it shifts expert intervention toward rule definition, quality control, and interpretation of ambiguous or conflicting evidence. The taxonomy developed in this section serves as the analytical backbone for the next three sections.

 

4. From data to model: workflows, automation, and scalability

 

This section examines how the reviewed studies move from heterogeneous subsurface observations to usable 3D cartographic products, with emphasis on workflow logic, automation, and scalability.

The reviewed corpus shows that the practical difference between methods is often not only the final representation, but also the sequence of transformations applied to the data before modeling begins. In the workflow-oriented literature, this sequence is treated as a first-class methodological component rather than a hidden preprocessing step [25]. In urban-scale studies, the workflow itself becomes the main contribution because repeatability and updateability are necessary when large borehole archives are involved [28]. In GIS-centered implementations, the workflow is explicitly operationalized as a geoprocessing chain so that model generation can be rerun after parameter or data updates [29].

Before presenting the detailed comparison, the general workflow pattern identified across the reviewed studies is summarized in Figure 3 below. This workflow is a synthesis produced in this review and is intended to provide a common reference for comparing otherwise different modeling traditions.

 

Figure 3 – Generalized data-to-model workflow for 3D subsurface cartography

 

A recurring workflow requirement is the transformation of heterogeneous records into a consistent internal representation before any interpolation or reconstruction is performed. In migration-oriented modeling work, conversion from legacy 2D geological materials into a structured 3D workflow is presented as a central technical challenge because the original data were not prepared for direct 3D reconstruction [26]. A similar conversion focus is visible in studies that reconstruct 3D geological objects from 2D interpretation elements, where the sequence of preprocessing and geometric organization strongly influences the quality of later model generation [25]. In practical voxel-building studies using open borehole archives, data standardization is emphasized because the consistency of lithology coding directly controls the reliability of voxel class assignment [18].
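The harmonization step described above can be sketched in a few lines. The mapping table below is hypothetical (invented driller's terms and a made-up controlled vocabulary), but it illustrates why explicit, reviewable rules are preferable to silent coercion of unrecognized codes.

```python
# Hypothetical harmonization table: raw lithology terms, as they
# might appear in a mixed borehole archive, mapped onto a small
# controlled vocabulary. Terms and codes are invented.
HARMONIZE = {
    "clay": "CLAY", "silty clay": "CLAY", "cl": "CLAY",
    "sand": "SAND", "fine sand": "SAND", "sp": "SAND",
    "made ground": "FILL", "fill": "FILL",
}

def standardize(raw_log):
    """Map raw lithology strings to controlled codes. Unknown terms
    are kept but flagged for review rather than silently dropped."""
    out, unknown = [], []
    for depth, term in raw_log:
        code = HARMONIZE.get(term.strip().lower())
        if code is None:
            code = "UNCLASSIFIED"
            unknown.append(term)
        out.append((depth, code))
    return out, unknown

log, flagged = standardize([(1.0, "Silty Clay"), (3.5, "SP"), (6.0, "peat")])
# log -> [(1.0, "CLAY"), (3.5, "SAND"), (6.0, "UNCLASSIFIED")]; flagged -> ["peat"]
```

Keeping the flagged list as a first-class output mirrors the review's observation that coding consistency directly controls the reliability of later voxel class assignment.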

The reviewed papers also show that workflow design is tightly coupled with the intended model representation. In implicit structural modeling, the workflow is typically organized around interfaces, orientations, and stratigraphic relationships because these are the principal constraints used to reconstruct continuous geological surfaces [9]. In structurally complex settings, the workflow remains strongly interpretation-led even when computational tools are advanced, because the ordering and consistency of geological constraints determine whether the resulting implicit framework is geologically plausible [10]. Recent implicit urban workflows move toward more automation, but they still retain rule-based geological guidance to preserve stratigraphic coherence when processing dense drilling records [11]. The potential-field extension proposed for unconformable interfaces further demonstrates that workflow changes are sometimes required specifically to handle difficult contact relationships that standard implicit implementations represent poorly [12].

In voxel workflows, the workflow emphasis shifts from surface reconstruction to volumetric discretization, category allocation, and multi-attribute storage. National and regional voxel model initiatives show that the workflow must support repeated integration of new observations and revised interpretations over time, which makes database structure and rule consistency as important as the interpolation step itself [13]. Comparative voxel model evolution studies also indicate that later generation workflows improve usability by refining update procedures and improving the organization of voxel attributes rather than only changing the final visualization [15]. Practical shallow subsurface voxel modeling studies demonstrate that voxel workflows can be tailored to engineering use by aligning grid construction and attribute assignment with user-oriented map products [14]. Later developments in the same line reinforce the importance of uncertainty-aware voxel update procedures when models are reused for planning and subsurface management [17]. Compression and representation studies show that voxel workflow performance can also depend on storage and rendering choices, which affects how feasible large 3D models are in routine use [16]. In engineering delivery contexts, voxel-based ground information can be structured to support downstream numerical analysis, which requires workflow planning from the beginning rather than post hoc conversion [19].
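The category-allocation step in voxel workflows can be reduced to a caricature for illustration. The sketch below populates a single depth slice by copying the class of the nearest borehole in plan; this nearest-neighbor rule is an assumed stand-in for the interpolation and classification schemes actually used in the reviewed voxel studies, chosen only because it is short and deterministic.

```python
# Simplified stand-in (assumed, not from any reviewed study):
# assign each grid cell the lithology class of the nearest
# borehole in plan, for one depth slice.
import math

boreholes = {            # (x, y) -> class observed at this depth slice
    (0.0, 0.0): "clay",
    (10.0, 0.0): "sand",
}

def nearest_class(x, y):
    """Plan position of the closest borehole to (x, y)."""
    return min(boreholes,
               key=lambda p: math.hypot(p[0] - x, p[1] - y))

def populate(nx, ny, dx):
    """Fill an nx-by-ny cell slice (spacing dx) with nearest classes."""
    grid = {}
    for i in range(nx):
        for j in range(ny):
            x, y = i * dx, j * dx
            grid[(i, j)] = boreholes[nearest_class(x, y)]
    return grid

grid = populate(nx=6, ny=1, dx=2.0)   # cells at x = 0, 2, 4, 6, 8, 10
# cells nearer x = 0 become "clay"; cells nearer x = 10 become "sand"
```

Production workflows replace the nearest-neighbor rule with geostatistical or rule-based assignment, but the structure (loop over cells, consult conditioning data, write an attribute) is the same, which is why assignment consistency dominates voxel model quality.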

Automation appears in the reviewed corpus as a spectrum rather than a binary property. Some studies automate only repetitive preprocessing tasks while preserving manual interpretation at key stratigraphic decisions [26]. Other studies automate larger portions of the pipeline, including data ingestion, classification, and model construction, especially when the target is urban-scale production from many boreholes [28]. A GIS geoprocessing workflow can be highly automated in execution while still requiring expert selection of interpolation settings and classification logic, which highlights that automation shifts effort toward parameterization and quality control rather than removing expert judgement [29]. Hybrid modeling and rendering workflows further show that automation may be introduced for performance reasons, such as splitting regular voxel structures to improve display efficiency and web usability [27].

The data regime strongly influences which workflow architecture is feasible. Geophysics-led studies demonstrate that large-area coverage can be improved when airborne or ground geophysical information is integrated with boreholes, which changes the workflow from sparse point interpolation toward iterative interpretation and integration of areal or volumetric geophysical evidence [30]. AEM-based cognitive voxel modeling explicitly shows a workflow in which geophysical models are interpreted jointly with boreholes to construct geological volumes at scales that would be difficult to support with boreholes alone [31]. In offshore or infrastructure-oriented large-area modeling, multi-source integration is used to compensate for uneven direct investigation density, which again makes the workflow itself a critical methodological contribution [32].

Uncertainty-oriented methods also reshape the workflow because validation and uncertainty outputs are not appended only at the end, but influence model-building choices. Empirical Bayesian kriging is relevant in this context because it embeds uncertainty estimation within the interpolation process and therefore affects how property mapping workflows are configured and evaluated [20]. Hidden Markov random field style geological segmentation introduces a workflow in which probabilistic labeling and spatial context are integral to model construction, rather than optional post-processing [21]. Multi-source uncertainty analysis in 3D structural modeling shows that workflow stages must preserve source-specific uncertainty information if the final confidence assessment is to remain meaningful [22]. Synthetic-geology-based uncertainty studies further demonstrate that benchmark design and controlled experiments can be used as workflow components for testing how modeling choices influence uncertainty in outputs [23]. Recent geostatistical lithological modeling frameworks also indicate that predictive uncertainty is central to the model product and should be reflected in the workflow from data preparation onward [24].
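Empirical Bayesian kriging itself is too involved for a short example, but the underlying principle (an interpolation workflow should carry its own error estimate rather than appending validation at the end) can be sketched with a much simpler pair: inverse-distance weighting plus leave-one-out cross-validation. The data below are invented.

```python
# Not EBK: a deliberately simple inverse-distance interpolator with
# leave-one-out cross-validation, illustrating how an interpolation
# workflow can carry its own error estimate. Data are invented.
import math

pts = [((0.0, 0.0), 10.0), ((10.0, 0.0), 20.0),
       ((0.0, 10.0), 14.0), ((10.0, 10.0), 24.0)]

def idw(x, y, data, power=2.0):
    """Inverse-distance-weighted estimate at (x, y)."""
    num = den = 0.0
    for (px, py), v in data:
        d = math.hypot(px - x, py - y)
        if d == 0.0:
            return v                     # exact hit on a data point
        w = d ** -power
        num += w * v
        den += w
    return num / den

def loo_rmse(data):
    """Leave-one-out RMSE: predict each point from all the others."""
    errs = [(idw(px, py, [q for q in data if q[0] != (px, py)]) - v) ** 2
            for (px, py), v in data]
    return math.sqrt(sum(errs) / len(errs))

rmse = loo_rmse(pts)   # a single workflow-level error summary
```

The point of the sketch is structural: because the error estimate is computed inside the workflow, any change to the interpolation settings is immediately reflected in a comparable credibility number, which is the behavior the uncertainty-oriented studies formalize far more rigorously.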

Scalability is another major differentiator across the reviewed papers. In site-scale workflows, the primary objective is often a detailed local representation, which allows more manual intervention and tuning if data density is sufficient [10]. In city-scale or regional workflows, throughput, standardization, and updateability become dominant design constraints because the number of boreholes and the heterogeneity of records make bespoke modeling impractical [13]. Urban archive-based pipelines illustrate that the uneven spatial distribution of boreholes requires workflow logic that can detect data density contrasts and prevent overconfident extrapolation into poorly constrained zones [28]. GIS-based voxel workflows at site scale provide an intermediate case in which methods are detailed enough for geotechnical property mapping, but still structured in a way that supports reproducibility and later transfer to larger deployments [29].
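The data-density logic mentioned for urban archive pipelines can be illustrated with a simple distance-to-nearest-borehole mask. The threshold and coordinates below are assumptions made for the example, not values taken from the reviewed studies.

```python
# Assumed illustration: flag grid cells farther than a threshold
# from any borehole so that extrapolation-dominated zones can be
# reported separately instead of presented as equally reliable.
import math

boreholes = [(0.0, 0.0)]        # plan positions of boreholes (invented)

def confidence_mask(nx, ny, dx, max_dist):
    """True where a cell lies within max_dist of some borehole."""
    mask = {}
    for i in range(nx):
        for j in range(ny):
            x, y = i * dx, j * dx
            d = min(math.hypot(bx - x, by - y) for bx, by in boreholes)
            mask[(i, j)] = d <= max_dist   # True = data-supported
    return mask

mask = confidence_mask(nx=4, ny=1, dx=10.0, max_dist=15.0)
# cells at x = 0 and 10 are data-supported; x = 20 and 30 are flagged
```

Even this crude mask prevents a pipeline from presenting poorly constrained cells with the same apparent confidence as well-sampled ones, which is the failure mode the urban-scale studies warn against.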

To summarize the workflow comparison in compact form for use in subsequent sections, Table 2 condenses the main distinctions in data regime, automation emphasis, and scaling behavior across the six taxonomy families.

 

Table 2 – Workflow and scalability patterns across the approach families

Approach family* | Dominant workflow bottleneck | Typical automation emphasis | Typical scale tendency | Main risk if workflow is weak
1 | Constraint consistency and stratigraphic rule definition | Partial automation with expert-guided interpretation | Site to district | Geologically implausible continuity or interface conflicts
2 | Data harmonization and voxel assignment consistency | Batch gridding, classification, attribute updating | Site to regional | False continuity and class artifacts in sparsely constrained cells
3 | Preserving uncertainty information through model steps | Integrated probabilistic estimation and testing | Site to regional | Confidence outputs that do not reflect true workflow assumptions
4 | Standardization of heterogeneous archives | Data ingestion, repeatable processing chains, batch generation | District to city | High throughput with low interpretive quality control
5 | Schema conversion and geometry translation | Format conversion and delivery pipelines | Site to corridor / project system | Technically valid models becoming unusable in target platforms
6 | Joint interpretation of geophysics and borehole constraints | Semi-automated inversion plus interpretation workflows | District to regional / corridor | Overinterpretation of geophysical patterns without sufficient calibration

* Enumeration of the approach family corresponds to Table 1.

 

The scalability patterns in Table 2 indicate that workflow robustness is a prerequisite for applicability. A method that performs well in a small expert-controlled demonstration can become unreliable when transferred to a municipal archive if data harmonization and automated quality control are not built into the pipeline [28]. Conversely, a highly automated pipeline may produce visually complete 3D outputs that are difficult to trust if the workflow does not preserve explicit uncertainty or validation checkpoints [22]. This tradeoff explains why several recent studies combine automation with stronger model checking or uncertainty-aware components rather than optimizing only processing speed [24].

The workflow perspective also clarifies that interoperability is not only an end-stage delivery issue. In FE-conversion studies, the possibility of generating analysis-ready ground models depends on earlier workflow decisions regarding geometric consistency and property assignment [34]. In broader geological information modeling and exchange studies, model updateability and data exchange are treated as design requirements that influence how the workflow is structured from the start [35]. Conceptual engineering-geology discussions similarly stress that the practical value of 3D ground models depends on fitness for purpose, which in workflow terms means aligning the modeling sequence with the intended decisions and users rather than producing a generic 3D model [33].

To support the reader visually, Figure 3 provides an infographic that positions the six approach families along two operational axes that recur in the reviewed corpus, namely automation intensity and dependence on expert interpretation. This figure is not a ranking, but a comparative orientation tool that complements the taxonomy in Table 1.

 

Figure 3 – Workflow automation intensity and dependence on expert interpretation (positions are schematic and indicate dominant workflow tendencies, not absolute properties of each family)

 

Overall, the reviewed literature shows that data-to-model workflows are central to the performance, credibility, and scalability of 3D subsurface cartography. The workflow determines how heterogeneous evidence is transformed, how much expert judgement is embedded or formalized, how uncertainty can be preserved, and whether the resulting model can be updated and delivered for practical use. This is why the next section shifts from workflow mechanics to credibility, focusing on validation, uncertainty, and comparability, including the boundary-mismatch problem, which becomes critical when multiple 3D models are used together.

 

5. Credibility: validation, uncertainty, and comparability

 

The credibility of a 3D subsurface cartographic product depends on whether it can be defended as an evidence-based representation rather than a visually plausible reconstruction. Within the reviewed corpus, credibility is established through three interconnected elements: validation of predictive behavior against observations, explicit treatment of uncertainty, and comparability of outputs when models are reused, updated, or combined across adjacent domains. In workflow-driven property mapping, credibility is often established through the repeatable evaluation of interpolation behavior and the systematic reporting of model performance across the 3D extent [29]. In structural reconstruction, credibility is closely tied to the consistency of constraints and the ability of the model to honor interface and orientation information without producing implausible artifacts [9]. In geophysics-led modeling, credibility is strongly influenced by how calibration boreholes anchor interpretations of geophysical volumes, because geophysical patterns alone can be ambiguous without ground truth constraints [31].

Validation practices differ by target variable and representation. When the mapped variable is continuous, validation is frequently expressed as predictive performance of an estimator at withheld locations, which aligns naturally with cross-validation logic in geostatistical workflows [20]. When the target is categorical, such as lithology or stratigraphic class, the validation problem becomes one of classification reliability and boundary correctness rather than numeric error alone, which motivates segmentation or probabilistic labeling formulations that incorporate spatial context [21]. In multi-source structural modeling, validation extends beyond point-wise error and includes the internal consistency of the reconstructed framework under varying combinations of input sources, because different data types may constrain different aspects of the geometry [22]. A complementary approach to validation appears in benchmark-oriented work where synthetic geology is used to test how uncertainty grows from modeling choices under controlled conditions, which provides a structured way to assess methodological sensitivity [23].
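The cross-validation logic for continuous targets can be illustrated with a minimal leave-one-out sketch. Inverse-distance weighting stands in here for the geostatistical estimators of the cited studies, and the sample data are synthetic; only the withhold-and-predict structure is the point.

```python
import math

def idw_predict(target_xy, samples, power=2.0):
    """Inverse-distance-weighted estimate at target_xy from (x, y, value) samples."""
    num = den = 0.0
    for x, y, v in samples:
        d = math.hypot(target_xy[0] - x, target_xy[1] - y)
        if d == 0.0:
            return v  # exact hit: return the observed value
        w = d ** -power
        num += w * v
        den += w
    return num / den

def loo_rmse(samples):
    """Leave-one-out RMSE: each sample is withheld and predicted from the rest."""
    errors = []
    for i, (x, y, v) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        errors.append(idw_predict((x, y), rest) - v)
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Toy borehole-derived property values (e.g. one soil parameter at one depth slice)
samples = [(0, 0, 10.0), (100, 0, 12.0), (0, 100, 11.0), (100, 100, 13.0)]
score = loo_rmse(samples)
```

For categorical targets the same withholding scheme applies, but the per-location error would be replaced by a classification score such as class agreement or a probabilistic loss.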

Uncertainty treatment is a second pillar of credibility because 3D subsurface mapping necessarily extrapolates beyond direct observation. In EBK-based approaches, uncertainty is not only a post-processing annotation but an intrinsic output associated with the estimation process, which enables uncertainty-aware interpretation of property fields [20]. In predictive geostatistical frameworks for lithological modeling, uncertainty can be treated as a primary deliverable that complements the most likely class assignment and supports decision-making under incomplete knowledge [24]. In large-scale voxel workflows, uncertainty is often addressed indirectly through model generation logic and update procedures, which can influence perceived reliability even when explicit probabilistic volumes are not produced [17]. In national or regional ground model programs, uncertainty and credibility are also linked to the governance of data updates and model revisions because a map changing over time must retain traceable provenance of inputs and assumptions [13].
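Why kriging-type estimators yield uncertainty as an intrinsic output can be shown with a small simple-kriging sketch under an assumed exponential covariance: every prediction comes paired with an estimation variance that shrinks near data and grows toward the sill far from it. The covariance parameters and observations are illustrative assumptions, and this is not the EBK procedure of the cited studies.

```python
import numpy as np

def simple_kriging(obs_xy, obs_val, target_xy, mean, sill=1.0, rng=200.0):
    """Simple-kriging prediction and variance under C(h) = sill * exp(-h / rng)."""
    d = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    n = len(obs_xy)
    # Covariance among observations and between target and observations
    C = np.array([[sill * np.exp(-d(obs_xy[i], obs_xy[j]) / rng) for j in range(n)]
                  for i in range(n)])
    c0 = np.array([sill * np.exp(-d(target_xy, obs_xy[i]) / rng) for i in range(n)])
    w = np.linalg.solve(C, c0)
    pred = mean + w @ (np.asarray(obs_val) - mean)
    var = sill - w @ c0  # estimation variance: small near data, near sill far away
    return pred, var

obs_xy = [(0.0, 0.0), (100.0, 0.0)]
obs_val = [10.0, 12.0]
_, var_near = simple_kriging(obs_xy, obs_val, (10.0, 0.0), mean=11.0)
_, var_far = simple_kriging(obs_xy, obs_val, (1000.0, 1000.0), mean=11.0)
```

The variance surface produced this way is exactly the kind of intrinsic uncertainty product the reviewed EBK-based approaches deliver alongside the property field itself.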

Comparability is the third pillar and becomes critical once 3D subsurface cartography moves from single-case demonstrations to operational use. Comparability includes reproducibility of results under reruns, consistency under updates, and coherence when neighboring models are combined. In urban-scale production, uneven borehole density and heterogeneous logging practices create spatially varying confidence, which means that comparability across a city is partly a quality-control problem rather than only a modeling problem [28]. In voxel models intended for web delivery, representation choices such as voxel splitting and hierarchical organization can improve usability but also introduce discretization-dependent boundary behaviors that affect how adjacent tiles or model blocks align [27]. In interoperability and delivery contexts, comparability is affected by the conversion process itself because translation to FE-ready form can modify geometry and property allocation, which requires explicit checks to avoid introducing artificial discontinuities [34].

The reviewed literature implies that mismatches along shared boundaries arise through several recurring mechanisms. One mechanism is inconsistent conceptual constraints across models, such as different stratigraphic rules or horizon definitions, which can lead to incompatible surfaces even when both models honor local data [9]. A second mechanism is discretization mismatch, where different voxel sizes, grid origins, or conversion rules produce seams that are numerical artifacts rather than geological features [27]. A third mechanism is heterogeneity of uncertainty across the boundary, where one side is well constrained by dense investigation and the other is not, causing one model to extrapolate with unwarranted confidence relative to its neighbor [28]. A fourth mechanism is data-fusion asymmetry, where one model uses geophysical constraints and the adjacent model does not, which can produce different continuity assumptions in areas with sparse boreholes [30]. These mechanisms are actionable because they can be translated into seam-focused checks and reporting practices that complement traditional validation.
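The seam-focused checks suggested by these mechanisms can be sketched as a comparison of a shared horizon along the boundary between two adjacent models. The planar toy "models", the boundary trace, and the 0.5 m tolerance below are hypothetical stand-ins for exported horizon surfaces and project-specific acceptance criteria.

```python
def seam_offsets(model_a, model_b, boundary_pts):
    """Per-point elevation difference of a shared horizon along the seam."""
    return [model_a(p) - model_b(p) for p in boundary_pts]

def flag_seam(offsets, tol_m=0.5):
    """Return indices of boundary points where the mismatch exceeds tol_m."""
    return [i for i, d in enumerate(offsets) if abs(d) > tol_m]

# Two toy "models": planar horizon surfaces that disagree in dip,
# mimicking inconsistent continuity assumptions across a boundary.
model_a = lambda p: -5.0 - 0.01 * p[1]   # base elevation from model A (m)
model_b = lambda p: -5.2 - 0.002 * p[1]  # base elevation from model B (m)
boundary = [(0.0, y) for y in range(0, 500, 100)]  # shared boundary trace

offsets = seam_offsets(model_a, model_b, boundary)
bad = flag_seam(offsets)
```

Reporting the flagged fraction and the offset distribution along each seam gives a compact comparability metric that complements point-wise validation.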

Figure 4 summarizes the credibility logic synthesized from the reviewed corpus. The scheme highlights where validation and uncertainty should be inserted into the workflow and where seam comparability checks are necessary when multiple models are intended to be mosaicked or compared.

 

Figure 4 – Scheme of credibility building in 3D subsurface cartography

 

To support a compact comparison that can be used in the next section, Table 3 lists representative credibility features from selected studies. The intent is not to rank methods, but to show how validation, uncertainty, and seam comparability are treated across different approach families.

 

Table 3 – Representative credibility practices across the reviewed approach families

Paper | Primary credibility focus | Validation approach emphasized | Uncertainty product emphasized | Comparability implication
[20] | Estimator behavior and uncertainty | Cross-validation as estimator evaluation | Prediction uncertainty associated with EBK | Enables uncertainty-aware comparison between areas
[21] | Categorical model reliability | Spatially contextual labeling for classes | Probabilistic classification behavior | Supports boundary interpretation as uncertainty zones
[22] | Multi-source confidence assessment | Sensitivity to input source combinations | Explicit uncertainty assessment under multi-source inputs | Highlights risk of inconsistent constraints across neighbors
[23] | Method sensitivity under controlled truth | Synthetic benchmark testing | Quantified uncertainty driven by modeling choices | Provides basis for comparing methods on shared benchmarks
[28] | Operational quality control at urban scale | Handling uneven borehole distributions | Confidence varies with data density | Seams often reflect differing constraint strength across districts
[34] | Delivery credibility after conversion | Consistency during FE-ready conversion | Implicit uncertainty introduced by translation | Seams can be introduced by conversion rules if unchecked

 

Overall, credibility in 3D subsurface cartography emerges as an interplay between modeling choices and verification culture. Methods that provide explicit uncertainty products make it easier to communicate where the model is informative and where it is speculative, which strengthens interpretability for geotechnical ground characterization [24]. Workflows that scale to city or regional settings must treat data harmonization and quality control as credibility mechanisms because the model inherits biases and inconsistencies from the archive [18]. Approaches designed for practical delivery must include comparability checks at the interface between modeling and downstream platforms because translation can otherwise compromise the integrity of mapped layers and properties [35]. These findings motivate the translation of the reviewed evidence into applicability guidance and reporting requirements for decision-grade 3D subsurface cartographic products.

 

6. Applicability to geotechnical subsurface decision support

 

This section translates the reviewed evidence into practical guidance on when a 3D subsurface cartographic product can be treated as decision-grade for geotechnical ground characterization. In this review, applicability is defined as the fitness of the subsurface information product for its intended purpose: the model must be consistent with the available evidence, transparent about uncertainty, and deliverable in a form that supports engineering use. [33] frames the practical value of 3D ground models in terms of suitability for the intended decisions rather than visual completeness, which motivates an applicability lens that evaluates products by their constraints, outputs, and limitations rather than by their geometric sophistication.

A first applicability condition concerns the dominant target of mapping. When the central goal is stratigraphic architecture, implicit structural modeling is often used because it is built around interfaces, orientations, and stratigraphic consistency constraints rather than cell-by-cell assignment [9]. When the central goal is multi-attribute volumes, voxel workflows are frequently chosen because they support volumetric querying, property statistics, and consistent grid-based extraction of information products [13]. When a project requires decision support under incomplete and heterogeneous evidence, approaches that treat uncertainty as a primary output become more directly applicable because they can communicate where the model is informative and where it is speculative [24].

A second applicability condition concerns spatial coverage and data regime. In regions where boreholes are sparse or uneven, geophysics-driven modeling can improve areal continuity by providing volumetric constraints that are then calibrated against boreholes [31]. In urban settings with dense but heterogeneous borehole archives, applicability depends strongly on whether the workflow includes systematic ingestion and quality control because uneven distribution can otherwise produce misleading extrapolation patterns [28]. In practical implementations based on open borehole logs, applicability increases when the workflow emphasizes consistent coding and pragmatic modeling choices that match the quality and granularity of available logs [18].

A third applicability condition is credibility, which is inseparable from validation and uncertainty communication. [20] shows that geostatistical estimation can provide uncertainty outputs together with predictions, which supports uncertainty-aware interpretation of continuous property fields. [21] demonstrates that categorical subsurface modeling benefits from probabilistic labeling ideas where spatial context is used to improve class assignment reliability, which is particularly relevant when the map product is a lithology volume. [23] argues that controlled benchmark settings using synthetic geology can reveal how modeling choices propagate into uncertainty, which supports method selection and reporting for decision-grade mapping.

A fourth applicability condition is comparability and integration. [27] highlights that performance-oriented voxel structuring can be essential for delivery and usability, but discretization choices can influence how seams and boundaries appear when models are tiled or merged. [34] shows that conversion to analysis-ready formats is not a neutral step because translation can alter geometry and property allocation if constraints are not preserved, which affects whether a subsurface model remains decision-grade after delivery. [35] emphasizes the importance of interoperable ground information structures and exchange logic, which becomes a practical applicability requirement when multiple stakeholders and platforms must reuse the same subsurface product.

Figure 5 provides a review-generated decision flow that operationalizes these conditions as a minimal applicability screening procedure. The scheme is designed to be applied before adopting a 3D subsurface cartographic product in a geotechnical decision-support context.

 

Figure 5 – Applicability screening flow for 3D subsurface cartographic products

 

To make applicability guidance concrete, Table 4 summarizes representative use-case patterns that recur in the reviewed corpus. Each row is grounded in one paper that exemplifies the decision context and the associated workflow implications.

 

Table 4 – Use-case patterns for decision-grade 3D subsurface mapping

Decision-support need | Dominant data regime | Recommended approach family | Minimum credibility evidence expected | Representative source
Coherent stratigraphic architecture for site-scale interpretation | Interfaces, orientations, stratigraphic constraints, limited property targets | Implicit or potential-field structural modeling | Plausibility and constraint-consistency checks that demonstrate honored inputs | [9]
Multi-attribute volumetric subsurface map for querying and statistics | Boreholes and logs with multiple attributes in a consistent coding scheme | Voxel models and voxel workflows | Demonstrated robustness of voxel assignment rules and traceable update logic | [13]
Urban-scale production from large heterogeneous borehole archives | Dense but uneven and heterogeneous borehole distribution | Automation and urban-scale pipelines | Quality-control logic that accounts for data-density contrasts and prevents overconfident extrapolation | [28]
Extending spatial coverage beyond borehole-only constraints | Geophysical models with calibration boreholes | Geophysics-driven 3D modeling | Explicit calibration logic tying geophysics interpretation to borehole control | [31]
Decision-support where uncertainty must be communicated explicitly | Mixed constraints with incomplete coverage | Uncertainty-focused geostatistical framework | Uncertainty products paired with predictions that can be interpreted as confidence indicators | [24]
Analysis-ready subsurface product for downstream numerical modeling | Existing model plus requirement for conversion to FE-ready form | Interoperability and engineering delivery | Verification that conversion preserves geometry and property allocation constraints | [34]

 

Across the corpus, the strongest determinant of applicability is not the label of the method family but whether the produced 3D map satisfies a small set of decision-grade conditions. [10] shows that structurally complex cases demand careful constraint management because even powerful modeling tools can produce misleading continuity if structural rules are not enforced. [17] indicates that voxel models used for planning and management benefit when uncertainty and model revision behavior are addressed as operational issues rather than treated as a one-time model build. [29] demonstrates that a reproducible GIS workflow strengthens applicability by making model generation repeatable under new parameter settings or updated datasets, which is important when the subsurface map is treated as a living product.

Based on these patterns, the following applicability checklist can be used as a compact reporting and adoption guide, and it will be expanded into a formal reporting checklist in the final synthesis. [32] illustrates that for large-area ground model applications, explicit documentation of data sources, coverage limitations, and integration logic is essential because spatial completeness can otherwise be mistaken for evidential completeness. [22] shows that in multi-source contexts, applicability improves when uncertainty contributions from different sources are preserved and compared rather than merged without traceability.

Checklist items to report or verify before treating a 3D subsurface map as decision-grade for geotechnical ground characterization include: (i) a stated target variable and decision context; (ii) an explicit description of the data regime and coverage limitations; (iii) a documented representation and inference choice with key settings; (iv) validation evidence appropriate to the target variable; (v) uncertainty communication appropriate to the use-case; and (vi) integration details covering delivery formats, update procedure, and seam comparability expectations where models will be mosaicked. [33] supports this orientation by emphasizing that the operational value of a 3D ground model follows from its alignment with the intended engineering use, which in this review is formalized as applicability conditions.
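As a minimal illustration, the checklist can be operationalized as a screening routine over a model-documentation record. The item keys and the dict-based report format below are illustrative assumptions rather than a proposed standard.

```python
# Checklist items mirroring the adoption guide in the text (illustrative keys).
CHECKLIST = (
    "target_variable_and_decision_context",
    "data_regime_and_coverage_limitations",
    "representation_and_inference_choice",
    "validation_evidence",
    "uncertainty_communication",
    "integration_and_delivery_details",
)

def screen_report(report):
    """Return (decision_grade, missing_items) for a documentation dict that
    maps checklist items to True/False (reported and verified, or not)."""
    missing = [item for item in CHECKLIST if not report.get(item, False)]
    return (len(missing) == 0, missing)

# Example: a model documented everywhere except uncertainty communication.
report = {item: True for item in CHECKLIST}
report["uncertainty_communication"] = False
ok, missing = screen_report(report)
```

Returning the list of missing items, rather than a bare pass/fail, keeps the screening outcome actionable for both model producers and reviewers.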

 

7. Conclusions

 

This review synthesized the current state of 3D subsurface cartography for geotechnical purposes by organizing the selected literature into a taxonomy aligned with the Litmaps-based clustering and by comparing how different approach families transform subsurface observations into 3D stratigraphic and/or property products. The analysis indicates that there is no single dominant solution. Instead, complementary families prioritize different outcomes, including structural consistency of stratigraphy, volumetric querying of lithology and properties, explicit uncertainty communication, operational automation at urban scale, geophysics-led extension of spatial coverage, and interoperability for engineering delivery.

A key finding is that workflow design is as important as the final representation. Decision-grade mapping requires explicit data harmonization, traceable assumptions, and repeatable processing rather than only visually complete models. Credibility depends on validation logic appropriate to the mapped product and on uncertainty communication that clarifies where the model is constrained by evidence and where it is extrapolative.

Comparability becomes critical when models are reused, updated, or combined across adjacent domains. Boundary mismatches can arise from inconsistent conceptual rules, discretization differences, unequal data density, and asymmetric auxiliary constraints. For this reason, seam checks and remediation strategies should be treated as standard workflow elements when mosaicking or cross-area comparison is expected.

This review is subject to several limitations. Because the literature corpus was assembled through a Litmaps citation-network workflow, the selection may favor well-connected and more frequently cited studies, while very recent papers, less-cited contributions, regional publications, and non-indexed sources may be underrepresented. In addition, although the coding and tagging procedure was applied consistently, the assignment of papers to approach families involved author judgement. Accordingly, the proposed taxonomy should be interpreted as a transparent synthesis of the selected corpus rather than as an exhaustive or uniquely fixed classification of the entire field.

Overall, the reviewed evidence supports an applicability view where method selection should be guided by the mapping objective and data regime, then verified against minimum credibility and integration requirements. Future work should prioritize workflow-level uncertainty propagation, more standardized evaluation practices, improved seam-consistency methods, and stronger interoperability across GIS, web visualization, databases, and engineering delivery environments.

 

Acknowledgments

This research was funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP26198009).

 

References

[1]   K.-K. Phoon and F. H. Kulhawy, “Characterization of geotechnical variability,” Can. Geotech. J., vol. 36, no. 4, pp. 612–624, Nov. 1999, doi: 10.1139/t99-038.

[2]   H. Ali and J. Choi, “A Review of Underground Pipeline Leakage and Sinkhole Monitoring Methods Based on Wireless Sensor Networking,” Sustainability, vol. 11, no. 15, p. 4007, Jul. 2019, doi: 10.3390/su11154007.

[3]   K.-K. Phoon et al., “Geotechnical uncertainty, modeling, and decision making,” Soils and Foundations, vol. 62, no. 5, p. 101189, Oct. 2022, doi: 10.1016/j.sandf.2022.101189.

[4]   M. G. Culshaw, “From concept towards reality: developing the attributed 3D geological model of the shallow subsurface,” QJEGH, vol. 38, no. 3, pp. 231–284, Aug. 2005, doi: 10.1144/1470-9236/04-072.

[5]   R. A. Bowden, “Building confidence in geological models,” SP, vol. 239, no. 1, pp. 157–173, Jan. 2004, doi: 10.1144/GSL.SP.2004.239.01.11.

[6]   W. Hou et al., “Assessing quality of urban underground spaces by coupling 3D geological models: The case study of Foshan city, South China,” Computers & Geosciences, vol. 89, pp. 1–11, Apr. 2016, doi: 10.1016/j.cageo.2015.07.016.

[7]   M. J. Page et al., “The PRISMA 2020 statement: an updated guideline for reporting systematic reviews,” BMJ, p. n71, Mar. 2021, doi: 10.1136/bmj.n71.

[8]   “Litmaps – About.” Accessed: Oct. 16, 2025. [Online]. Available: https://www.litmaps.com/about/us

[9]   P. Calcagno, J. P. Chilès, G. Courrioux, and A. Guillen, “Geological modelling from field data and geological knowledge,” Physics of the Earth and Planetary Interiors, vol. 171, no. 1–4, pp. 147–157, Dec. 2008, doi: 10.1016/j.pepi.2008.06.013.

[10] J. M. Thornton, G. Mariethoz, and P. Brunner, “A 3D geological model of a structurally complex Alpine region as a basis for interdisciplinary research,” Sci Data, vol. 5, no. 1, p. 180238, Oct. 2018, doi: 10.1038/sdata.2018.238.

[11] J. Guo et al., “Three-dimensional geological modeling and spatial analysis from geotechnical borehole data using an implicit surface and marching tetrahedra algorithm,” Engineering Geology, vol. 284, p. 106047, Apr. 2021, doi: 10.1016/j.enggeo.2021.106047.

[12] P. Liu, Z. Li, G. Yu, and Z. Li, “Three-Dimensional Geological Modeling Method Based on Potential Vector Fields,” Applied Sciences, vol. 15, no. 7, p. 3594, Mar. 2025, doi: 10.3390/app15073594.

[13] J. Stafleu, D. Maljers, J. L. Gunnink, A. Menkovic, and F. S. Busschers, “3D modelling of the shallow subsurface of Zeeland, the Netherlands,” Netherlands Journal of Geosciences, vol. 90, no. 4, pp. 293–310, Dec. 2011, doi: 10.1017/S0016774600000597.

[14] V. Hademenos, V. Van Lancker, T. Missiaen, and J. Stafleu, “Preliminary results on the 3D voxel model of the subsurface of the Belgian Continental Shelf,” in Book of abstracts – VLIZ Young Scientists’ Day, Brugge, Belgium: VLIZ Special Publication, 71, 2015, p. 69.

[15] D. Maljers, J. Stafleu, M. J. Van Der Meulen, and R. M. Dambrink, “Advances in constructing regional geological voxel models, illustrated by their application in aggregate resource assessments,” Netherlands Journal of Geosciences, vol. 94, no. 3, pp. 257–270, Sep. 2015, doi: 10.1017/njg.2014.46.

[16] A. Graciano, A. J. Rueda, and F. R. Feito, “Real-time visualization of 3D terrains and subsurface geological structures,” Advances in Engineering Software, vol. 115, pp. 314–326, Jan. 2018, doi: 10.1016/j.advengsoft.2017.10.002.

[17] V. Hademenos, J. Stafleu, T. Missiaen, L. Kint, and V. R. M. Van Lancker, “3D subsurface characterisation of the Belgian Continental Shelf: a new voxel modelling approach,” Netherlands Journal of Geosciences, vol. 98, p. e1, 2019, doi: 10.1017/njg.2018.18.

[18] S. Nonogaki, S. Masumoto, T. Nemoto, and T. Nakazawa, “Voxel modeling of geotechnical characteristics in an urban area by natural neighbor interpolation using a large number of borehole logs,” Earth Sci Inform, vol. 14, no. 2, pp. 871–882, Jun. 2021, doi: 10.1007/s12145-021-00600-x.

[19] K. Micić, H.-G. Bui, and J. Ninić, “Computer-aided ground modelling incorporating soil variability for geotechnical applications,” in Proceedings of the Southeastern Europe Tunnelling Conference (SETC-2025), Belgrade, Serbia: Serbian Association for Tunnels and Underground Structures (ITA Serbia), 2025, pp. 283–293. doi: 10.5937/SETC25026M.

[20] K. Krivoruchko, “Empirical Bayesian Kriging Implemented in ArcGIS Geostatistical Analyst,” ArcUser, vol. 15, no. 4, pp. 6–10, 2012.

[21] H. Wang, J. F. Wellmann, Z. Li, X. Wang, and R. Y. Liang, “A Segmentation Approach for Stochastic Geological Modeling Using Hidden Markov Random Fields,” Math Geosci, vol. 49, no. 2, pp. 145–177, Feb. 2017, doi: 10.1007/s11004-016-9663-9.

[22] D. Liang, W. Hua, X. Liu, Y. Zhao, and Z. Liu, “Uncertainty assessment of a 3D geological model by integrating data errors, spatial variations and cognition bias,” Earth Sci Inform, vol. 14, no. 1, pp. 161–178, Mar. 2021, doi: 10.1007/s12145-020-00548-4.

[23] S.-J. Wang, Q. C. Nguyen, Y.-C. Lu, Y. G. Doyoro, and D.-H. Tran, “Evaluation of geological model uncertainty caused by data sufficiency using groundwater flow and land subsidence modeling as example,” Bull Eng Geol Environ, vol. 81, no. 8, p. 331, Aug. 2022, doi: 10.1007/s10064-022-02832-7.

[24] A. Abdelsattar and E. E.-D. Hemdan, “A Geostatistical Predictive Framework for 3D Lithological Modeling of Heterogeneous Subsurface Systems Using Empirical Bayesian Kriging 3D (EBK3D) and GIS,” Geomatics, vol. 5, no. 4, p. 60, Oct. 2025, doi: 10.3390/geomatics5040060.

[25] J. Gou, W. Zhou, and L. Wu, “Implicit Three-Dimensional Geo-Modelling Based on HRBF Surface,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., vol. XLII-2/W2, pp. 63–66, Oct. 2016, doi: 10.5194/isprs-archives-XLII-2-W2-63-2016.

[26] J. Guo, L. Wu, W. Zhou, J. Jiang, and C. Li, “Towards Automatic and Topologically Consistent 3D Regional Geological Modeling from Boundaries and Attitudes,” IJGI, vol. 5, no. 2, p. 17, Feb. 2016, doi: 10.3390/ijgi5020017.

[27] J. Li, P. Liu, X. Wang, H. Cui, and Y. Ma, “3D geological implicit modeling method of regular voxel splitting based on layered interpolation data,” Sci Rep, vol. 12, no. 1, p. 13840, Aug. 2022, doi: 10.1038/s41598-022-17231-x.

[28] X. Wang et al., “Towards automatic and rapid 3D geological modelling of urban sedimentary strata from a large amount of borehole data using a parallel solution of implicit equations,” Earth Sci Inform, vol. 17, no. 1, pp. 421–440, Feb. 2024, doi: 10.1007/s12145-023-01164-8.

[29] Y. Utepov, A. Aldungarova, A. Mukhamejanova, T. Awwad, S. Karaulov, and I. Makasheva, “Voxel Interpolation of Geotechnical Properties and Soil Classification Based on Empirical Bayesian Kriging and Best-Fit Convergence Function,” Buildings, vol. 15, no. 14, p. 2452, Jul. 2025, doi: 10.3390/buildings15142452.

[30] F. Jørgensen, R. Rønde Møller, P. B. E. Sandersen, and L. Nebel, “3-D geological modelling of the Egebjerg area, Denmark, based on hydrogeophysical data,” GEUS Bulletin, vol. 20, pp. 27–30, Jul. 2010, doi: 10.34194/geusb.v20.4892.

[31] F. Jørgensen, R. R. Møller, L. Nebel, N.-P. Jensen, A. V. Christiansen, and P. B. E. Sandersen, “A method for cognitive 3D geological voxel modelling of AEM data,” Bull Eng Geol Environ, vol. 72, no. 3–4, pp. 421–432, Dec. 2013, doi: 10.1007/s10064-013-0487-2.

[32] X. Peng et al., “Development of a 3D Ground Model for an Offshore Wind Farm with Complex Interlayering of Silty Soils,” in Proceedings of the 7th International Conference on Geotechnical and Geophysical Site Characterization, Barcelona, Spain: CIMNE, 2024. doi: 10.23967/isc.2024.301.

[33] R. Hack, B. Orlic, S. Ozmutlu, S. Zhu, and N. Rengers, “Three and more dimensional modelling in geo-engineering,” Bull Eng Geol Environ, vol. 65, no. 2, pp. 143–153, May 2006, doi: 10.1007/s10064-005-0021-2.

[34] B. Wang and S. Bauer, “Converting heterogeneous complex geological models to consistent finite element models: methods, development, and application to deep geothermal reservoir operation,” Environ Earth Sci, vol. 75, no. 20, p. 1349, Oct. 2016, doi: 10.1007/s12665-016-6138-8.

[35] M. S. Khan, I. S. Kim, and J. Seo, “A boundary and voxel-based 3D geological data management system leveraging BIM and GIS,” International Journal of Applied Earth Observation and Geoinformation, vol. 118, p. 103277, Apr. 2023, doi: 10.1016/j.jag.2023.103277.

 

 

Information about authors:

Nurgul Alibekova – PhD, Associate Professor; 1) Scientific Supervisor, Solid Research Group, LLP, Astana, Kazakhstan; 2) Associate Professor, Department of Civil Engineering, L.N. Gumilyov Eurasian National University, Astana, Kazakhstan; nt_alibekova@mail.ru

Aleksej Aniskin – Candidate of Technical Sciences, Associate Professor, Department of Civil Engineering, University North, Varaždin, Croatia, aaniskin@unin.hr

Kairat Mukhambetkaliev – Candidate of Technical Sciences, Leading Researcher, JSC “Kazakhstan Road Research Institute”, Astana, Kazakhstan, k.mukhambetkaliyev@qazjolgzi.kz

Indira Makasheva – MSc; 1) Junior Researcher, Solid Research Group, LLP, Astana, Kazakhstan; 2) PhD Student, Department of Civil Engineering, L.N. Gumilyov Eurasian National University, Astana, Kazakhstan; indira.yergaliyeva@gmail.com

 

Author Contributions:

Nurgul Alibekova – concept, methodology, resources, interpretation.

Aleksej Aniskin – methodology, editing.

Kairat Mukhambetkaliev – analysis, funding acquisition.

Indira Makasheva – data collection, analysis, visualization, drafting.

 

Conflict of Interest: The authors declare no conflict of interest.

 

Use of Artificial Intelligence (AI): Litmaps was used to assist with identifying and organizing relevant publications, and Grammarly was used to improve language and clarity. All methodological decisions, interpretations, and final wording were verified and approved by all the authors.

 

Received: 19.12.2025

Revised: 18.03.2026

Accepted: 19.03.2026

Published: 22.03.2026

 

Copyright: © 2026 by the authors. Licensee Technobius, LLP, Astana, Republic of Kazakhstan. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial (CC BY-NC 4.0) license (https://creativecommons.org/licenses/by-nc/4.0/).