TKC: Potential Fields


In this tutorial, we demonstrate how to invert magnetic field data in 3D using SimPEG.PF. It simulates a geophysical experiment over a synthetic kimberlite pipe model. The geological model is intended to replicate the Tli Kwi Cho kimberlite deposit, NWT. Using SimPEG Directives, we implement a sparse regularization function and recover both a smooth and a compact susceptibility model, which can be used to infer geological information at depth. The same example was presented as a poster at Scipy 2016.


Fig. 7 Scipy_2016_PF_Thumbnail


We begin this story with some physics background, establishing the connection between the magnetic properties of rocks and the associated geophysical experiment. Maxwell’s equations for a static magnetic field in the absence of free currents can be written as:

\[\nabla \cdot \mathbf{B} = 0 ,\;\ \nabla \times \mathbf{H} = 0\]

where \(\mathbf{B}\) and \(\mathbf{H}\) correspond to the magnetic flux density and magnetic field respectively. Both quantities are related by:

\[\begin{split}\mathbf{B} = \mu \mathbf{H} \\ \mu = \mu_0 ( 1 + \kappa )\end{split}\]

where \(\mu\) is the magnetic permeability. In free space, \(\mathbf{B}\) and \(\mathbf{H}\) are linearly related by the magnetic permeability of free space \(\mu_0\). In matter, however, the magnetic flux increases in proportion to how easily the material is magnetically polarized, quantified by the magnetic susceptibility \(\kappa\). From a macroscopic point of view, the magnetic properties of matter are generally described in terms of magnetization per unit volume such that:

\[\mathbf{M} = \kappa \left(\mathbf{H}_s + \mathbf{H}_0\right) + \mathbf{M}_r\]

where \(\mathbf{M}\) can be oriented in any specific direction due to secondary local fields (\(\mathbf{H_s}\)) and/or due to permanent dipole moments (\(\mathbf{M_r}\)). For simplicity we will here assume a purely induced response due to the Earth’s \(\mathbf{H_0}\). Using a few vector identities, we can re-write the magnetic field due to magnetized material in terms of a scalar potential:

\[\phi = \frac{1}{4\pi} \int_{V} \nabla \left(\frac{1}{r}\right) \cdot \mathbf{H}_0 \kappa \; dV\]

where \(r\) defines the relative position between an observer and the magnetic source. Taking the gradient of this potential yields:

\[\mathbf{b} = \frac{\mu_0}{4\pi} \int_{V} \nabla \nabla \left(\frac{1}{r}\right) \cdot \mathbf{H}_0 \kappa \; dV\]

Great, we now have a general expression relating the secondary magnetic flux to any distribution of magnetic material.

Forward Problem

Assuming a purely induced response, we can solve the integral analytically. As derived by Sharma (1966), the integral can be evaluated for rectangular prisms such that:

\[\mathbf{b} = \mathbf{T} \cdot \mathbf{H}_0 \; \kappa\]

where the tensor matrix \(\mathbf{T}\) relates the vector magnetization \(\mathbf{M}\) inside a single cell to the components of the field \(\mathbf{b}\) observed at a given location:

\[\begin{split}\mathbf{T} = \begin{pmatrix} T_{xx} & T_{xy} & T_{xz} \\ T_{yx} & T_{yy} & T_{yz} \\ T_{zx} & T_{zy} & T_{zz} \end{pmatrix}\end{split}\]

In general, we discretize the Earth into a collection of cells, each contributing to the magnetic data, giving rise to a large and dense linear system of the form:

\[\mathbf{b} = \sum_{j=1}^{nc} \mathbf{T}_j \cdot \mathbf{H}_0 \; \kappa_j\]
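To make the structure of this linear system concrete, here is a minimal numerical sketch. It is not SimPEG's implementation: instead of the analytic prism kernels of Sharma (1966), each cell is collapsed to a point dipole at its centre, and the function names `dipole_tensor` and `forward_b` are illustrative.

```python
import numpy as np

def dipole_tensor(r_obs, r_cell):
    # 3x3 tensor (3 r_hat r_hat^T - I) / (4 pi r^3), the point-dipole
    # stand-in for T_j evaluated at the observation location.
    dr = r_obs - r_cell
    r = np.linalg.norm(dr)
    return (3.0 * np.outer(dr, dr) / r**2 - np.eye(3)) / (4.0 * np.pi * r**3)

def forward_b(r_obs, cell_centres, cell_volumes, kappa, H0):
    # Secondary flux density at r_obs, summed over all cells:
    # b = sum_j mu0 * V_j * kappa_j * T_j . H0
    mu0 = 4e-7 * np.pi
    b = np.zeros(3)
    for rc, v, k in zip(cell_centres, cell_volumes, kappa):
        b += mu0 * v * k * (dipole_tensor(r_obs, rc) @ H0)
    return b
```

Even this crude approximation shows why the system is dense: every cell contributes to every datum, so storage and solve costs grow as \(nd \times nc\).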

In most geophysical surveys, we are not collecting all three components, but rather the magnitude of the field, or Total Magnetic Intensity (TMI) data. Because the inducing field is much larger than the anomalous fields, we assume the anomalous fields are parallel to \(\mathbf{H}_0\):

\[d^{TMI} = \mathbf{\hat H}_0 \cdot \mathbf{b}\]

We then end up with a much smaller system:

\[d^{TMI} = \mathbf{F}\; \boldsymbol{\kappa}\]

where \(\mathbf{F} \in \mathbb{R}^{nd \times nc}\) is our forward operator and \(\kappa\) is the physical property describing the Earth.
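As a quick illustration of the TMI projection, the sketch below builds a unit inducing-field vector from inclination and declination and dots it with three-component anomaly vectors. The x = northing, y = easting, z = down convention is an assumption for this sketch, not necessarily SimPEG's internal one.

```python
import numpy as np

def inducing_direction(inclination, declination):
    # Unit vector H0_hat of the inducing field, assuming
    # x = northing, y = easting, z = down (angles in degrees).
    I, D = np.deg2rad(inclination), np.deg2rad(declination)
    return np.array([np.cos(I) * np.cos(D),
                     np.cos(I) * np.sin(D),
                     np.sin(I)])

def tmi(b, inclination, declination):
    # Project (n, 3) anomalous-field vectors b onto H0_hat:
    # d_TMI = H0_hat . b for each row.
    return b @ inducing_direction(inclination, declination)
```

At the magnetic pole (inclination 90°) only the vertical component survives the projection, which is a handy sanity check.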

Getting started

In order to define a geophysical experiment we need to set several important parameters, such as a mesh, data locations, inversion parameters and so on. While we could set all of those parameters manually, SimPEG.PF gives the option to work with an input file capturing all the necessary information to run the inversion. In preparation for this synthetic example, we put together all the necessary files and added them to a working directory. The input file can then be loaded and easily accessed through the Driver class:

In [1]:
# Start by importing SimPEG, the potential fields package and supporting utilities
import os
import numpy as np
import matplotlib.pyplot as plt

from SimPEG import Mesh
from SimPEG import Utils
from SimPEG import Maps
from SimPEG import Regularization
from SimPEG import DataMisfit
from SimPEG import Optimization
from SimPEG import InvProblem
from SimPEG import Directives
from SimPEG import Inversion
import SimPEG.PF as PF
from SimPEG.Utils.io_utils import download

%matplotlib inline
In [2]:
# Download files from the remote repository

# This is where the files are stored on the cloud
url = ''
cloudfiles = ['MagData.obs', 'Mesh.msh', 'Initm.sus', 'SimPEG_PF_Input.inp']

downloads = download([url+f for f in cloudfiles], folder='./PF_over_TKC/', overwrite=True)
downloads = dict(zip(cloudfiles, downloads))

driver = PF.MagneticsDriver.MagneticsDriver_Inv(downloads['SimPEG_PF_Input.inp'])
overwriting MagData.obs
overwriting Mesh.msh
overwriting Initm.sus
overwriting SimPEG_PF_Input.inp
Download completed!
In [3]:
# Objects loaded from the input file are then accessible like this
mesh = driver.mesh
initm = driver.m0.copy()

The input file looks like this:

====    ==============   ===================================================================================
Line    Input            Description
====    ==============   ===================================================================================
1       Mesh.msh         Mesh file*
2       Data.obs         Data file*
3       VALUE -100       Topography file* | null (all included)
4       FILE Initm.mod   Starting model* | VALUE ##
5       VALUE 0          Reference model* | VALUE ##
6       DEFAULT          Magnetization file* | DEFAULT
7       DEFAULT          Cell weight file* | DEFAULT
8       DEFAULT          Target Chi factor VALUE | DEFAULT (1)
9       DEFAULT          Scaling parameters for regularization (alpha_s, alpha_x, alpha_y, alpha_z)
10      VALUE 0 1        Lower and upper bound values
11      VALUE 0 1 1 1    Lp-norms applied on model and model gradients (p,q_x,q_y,q_z)
12      DEFAULT          Threshold parameters for the norms (\epsilon_p,\epsilon_q) | DEFAULT
Note                     * UBC file format
====    ==============   ===================================================================================

We will use each of these elements later, but for now, this is how the inversion is initiated.

Model and Mapping

Since we have already loaded the model in a rectangular mesh, we can plot it with SimPEG’s built-in functions.

In [4]:
initm[initm==-100] = np.nan

# Create a figure and plot sections
fig, ax1 = plt.figure(), plt.subplot(1,2,1)
mesh.plotSlice(initm, ax = ax1, normal='Z', ind=18, clim = (0,0.02), pcolorOpts={'cmap':'viridis'})
plt.title('Z: '+str(mesh.vectorCCz[18]) + " m")

ax2 = plt.subplot(1,2,2)
mesh.plotSlice(initm, ax = ax2, normal='Y', ind=16, clim = (0,0.02), pcolorOpts={'cmap':'viridis'})
plt.title('Y: '+str(mesh.vectorCCy[16])+' m')

Notice that some of the cells in the model are air and show as white. The code detects the air cells from the VALUE specified on line 3 of the input file; these cells are ignored by the code. Alternatively, the user can input a topography file or an active-cell model specifying the status of each cell (0: inactive, 1: active-dynamic, -1: active-static).


Great, now that we have a mesh and a model, we only need to specify a survey (i.e. where is the data). Once again, an observation file is provided, as specified on Line 2 of the input file. We can now forward model some magnetic data above the synthetic kimberlite.

In [5]:
# Get the survey
survey = driver.survey

# Get the active cells (below topography)
actv = driver.activeCells

# Create mapping to come back from the reduced space later
idenMap = Maps.IdentityMap(nP=len(actv))

# Now that we have a model and a survey we can build the linear system ...
# (use the argument forwardOnly=True to avoid storing the dense forward matrix)
prob = PF.Magnetics.MagneticIntegral(mesh, chiMap=idenMap, actInd=actv, rtype = 'tmi')

# Pair the survey and problem (data and model space)
survey.pair(prob)
In [6]:
# Forward operators and data are calculated here (wait for it!)
d = prob.fields(driver.m0)

# Add noise to the data and assign uncertainties
survey.dobs = d + np.random.randn(len(d)) # We add some random Gaussian noise (1 nT)
survey.std = np.ones(len(d))*1. # Assign flat uncertainties (1 nT)
Begin calculation of forward operator: ind
Done 0.0 %
Done 10.0 %
Done 20.0 %
Done 30.0 %
Done 40.0 %
Done 50.0 %
Done 60.0 %
Done 70.0 %
Done 80.0 %
Done 90.0 %
Done 100% ...forward operator completed!!

In [7]:
# Then we can plot with the built-in function
PF.Magnetics.plot_obs_2D(survey.srcField.rxList[0].locs,d=survey.dobs ,title='Mag Obs')

Inverse Problem

We have generated synthetic data; we now want to see if we can solve the inverse problem. Using the usual formulation, we seek a model that can reproduce the data through a least-squares measure of the form:

\[\phi_d =\|\mathbf{W}_d \left( \mathbf{F}\;\mathbf{m} - \mathbf{d}^{obs} \right)\|_2^2\;,\]

where \(\mathbf{W}_d\) is a diagonal matrix of estimated data uncertainties. The inverse problem is hard: we don’t have great data coverage, the Earth is big, and there is usually noise in the data, so we need to add something to regularize it. The simplest way is to penalize solutions that won’t make sense geologically, for example by assuming that the model is small and smooth. Most inversion codes use an l2-norm measure such that:

\[\phi_m = {\| \mathbf{W}_s \;( \mathbf{m - m^{ref}})\|}^2_2 + \sum_{i = x,y,z} {\| \mathbf{W}_i \; \mathbf{G}_i \; \mathbf{m}\|}^2_2\]

where \(m^{ref}\) is any a priori knowledge that we might have about the solution and \(\mathbf{G}_x, \mathbf{G}_y, \mathbf{G}_z\) are finite difference operators measuring the model spatial gradients along orthogonal directions. In a purely unconstrained case, \(m^{ref}\) is usually equal to some background value (i.e. zero susceptibility). The full objective function to be minimized can be written as:

\[\phi (m) = \phi_d + \beta \phi_m\]

which yields our usual function minimizing both the data error and the model structure. The trade-off parameter \(\beta\) is adjusted in order to get a good balance between data misfit and model structure.
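For intuition, here is a minimal 1D sketch of the smallness-plus-smoothness measure above, using identity weight matrices and a forward-difference gradient operator; the function names and unit cell size are illustrative.

```python
import numpy as np
import scipy.sparse as sp

def gradient_operator(n, h=1.0):
    # (n-1) x n forward-difference matrix approximating d/dx
    # on a regular 1D grid with cell size h.
    return sp.diags([-1.0, 1.0], [0, 1], shape=(n - 1, n)) / h

def phi_m(m, m_ref, alpha_s=1.0, alpha_x=1.0):
    # Smallness term plus one smoothness term (1D, so only G_x);
    # the W matrices are taken as identity here.
    G = gradient_operator(len(m))
    return (alpha_s * np.sum((m - m_ref) ** 2)
            + alpha_x * np.sum((G @ m) ** 2))
```

A model equal to the reference has zero penalty; any departure in the values or their gradients increases it, which is exactly what the regularization trades off against the data misfit.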

We propose a fancier regularization function that allows us to recover sparse and blocky solutions. Starting with the well known Ekblom norm:

\[\phi_m = \sum_{i=1}^{nc} {(x_i^2 + \epsilon^2)}^{p/2}\]

where \(x_i\) denotes some function of the model parameters, and \(\epsilon\) is a small value that avoids the singularity as \(x_i \rightarrow 0\).
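To see why small values of \(p\) favour compact models, compare the Ekblom measure for two models with the same \(l_2\) energy: one spread over many cells and one concentrated in a single cell (the values and \(\epsilon\) below are illustrative).

```python
import numpy as np

def ekblom(x, p, eps=1e-8):
    # Ekblom measure: sum_i (x_i^2 + eps^2)^(p/2), a smooth
    # approximation of the lp-norm that avoids the singularity at 0.
    return np.sum((x**2 + eps**2) ** (p / 2.0))

spread = np.full(100, 0.1)                  # ||m||_2^2 = 1
compact = np.zeros(100); compact[0] = 1.0   # ||m||_2^2 = 1

# For p = 2 the two models cost the same; for p = 1 the compact
# model is roughly an order of magnitude cheaper.
print(ekblom(spread, 2), ekblom(compact, 2))
print(ekblom(spread, 1), ekblom(compact, 1))
```

Under the least-squares measure the two models are indistinguishable, so only a sparse norm can drive the inversion toward the compact one.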

For \(p=2\), we get the usual least-squares measure and we recover the regularization presented above. For \(p \leq 1\), the function becomes non-linear, which requires some tweaking. We can linearize the function by updating the penalty iteratively, commonly known as the Iteratively Re-weighted Least-Squares (IRLS) method. The regularization function becomes:

\[\phi_m^{(k)} = \frac{1}{2}\sum_{i=1}^{nc} r_i \; x_i^2\]

where we added the superscript \(\square^{(k)}\) to denote the IRLS iterations. The weights \(r(x)\) are computed from model values obtained at a previous iteration such that:

\[{r}_i ={\Big( {({x_i}^{(k-1)})}^{2} + \epsilon^2 \Big)}^{p/2 - 1}\]

where \({r}(x) \in \mathbb{R}^{nc}\).

In matrix form, our objective function simply becomes:

\[\phi(m) = \|\mathbf{W}_d \left( \mathbf{F}\;\mathbf{m} - \mathbf{d}^{obs} \right)\|_2^2 + \beta \Big [ {\| \mathbf{W}_s \;\mathbf{R}_s\;( \mathbf{m - m^{ref}})\|}^2_2 + \sum_{i = x,y,z} {\| \mathbf{W}_i\; \mathbf{R}_i \; \mathbf{G}_i \; \mathbf{m}\|}^2_2 \Big ]\]

where the IRLS weights \(\mathbf{R}_s\) and \(\mathbf{R}_i\) are diagonal matrices defined as:

\[{R}_{s_{jj}} = \sqrt{\eta_p}{\Big[ {({m_j}^{(k-1)})}^{2} + \epsilon_p^2 \Big]}^{(p/2 - 1)/2}\]
\[{R}_{i_{jj}} = \sqrt{\eta_q}{\Big[ {\left ({{(G_i\;m^{(k-1)})}_j }\right)}^{2} + \epsilon_q^2 \Big]}^{(q/2 - 1)/2}\]
\[\eta_p = {\epsilon_p}^{(1-p/2)}\;,\]
\[\eta_q = {\epsilon_q}^{(1-q/2)}\]

We added two scaling parameters \(\eta_p\) and \(\eta_q\) for reasons that we won’t discuss here, but which turn out to be important for stable solves.
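The sketch below evaluates the squared diagonal entries of these matrices, \(r_j = R_{jj}^2\), for the smallness term; the function name `irls_weights` is illustrative.

```python
import numpy as np

def irls_weights(x, p, eps):
    # Squared diagonal of R: r_j = eta * (x_j^2 + eps^2)^(p/2 - 1),
    # with eta = eps^(1 - p/2) keeping the reweighted measure on a
    # comparable scale across IRLS iterations.
    eta = eps ** (1.0 - p / 2.0)
    return eta * (x ** 2 + eps ** 2) ** (p / 2.0 - 1.0)
```

For \(p < 2\) the weight grows as the model value shrinks (capped by \(\epsilon\)), so small values are penalized heavily in the next quadratic solve and driven toward zero; this is the mechanism that produces compact models.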

In order to initialize the IRLS and get an estimate for the stabilizing parameters \(\epsilon_p\) and \(\epsilon_q\), we first invert with the smooth \(l_2\)-norm. Once the target data misfit has been achieved, the inversion switches to the sparse regularization. This way we get a good starting point, hopefully close enough to the true solution. The whole IRLS process is implemented with a directive added to the inversion workflow.

In [8]:
# Potential-field kernels decay rapidly with depth, so without weighting
# the inversion concentrates structure at the surface. We create distance
# weights from our linear forward operator to push the inversion down.
wr = np.sum(prob.G**2., axis=0)**0.5
wr = wr/np.max(wr)


Here is where the norms are applied.

We pick a threshold parameter empirically, based on the distribution of model parameters (run the last cell to see the histogram before and after IRLS).

In [9]:
reg = Regularization.Sparse(mesh, indActive=actv, mapping=idenMap)
reg.cell_weights = wr
reg.eps_p = 3e-4
reg.eps_q = 3e-4

Misfit Function

In [10]:
dmis = DataMisfit.l2_DataMisfit(survey)
dmis.W = 1/survey.std
SimPEG.DataMisfit.l2_DataMisfit assigning default eps of 1e-5 * ||dobs||


We solve the inverse problem with a projected Gauss-Newton solver

In [11]:
opt = Optimization.ProjectedGNCG(maxIter=100, lower=0., upper=1., maxIterLS=20, maxIterCG=10, tolCG=1e-3)

Inverse Problem

Put all the components together

In [12]:
invProb = InvProblem.BaseInvProblem(dmis, reg, opt)


In [13]:
# We add a few directives
# First to guess the initial beta
betaest = Directives.BetaEstimate_ByEig()

# Second, we add a pre-conditioner to speedup the CG solves
update_Jacobi = Directives.Update_lin_PreCond()

# This is the directive that will orchestrate the IRLS steps
IRLS = Directives.Update_IRLS(f_min_change = 1e-4, minGNiter=3)

# We add the directives to the inverse problem
inv = Inversion.BaseInversion(invProb, directiveList=[betaest,IRLS,update_Jacobi])


Finally, we run inversion...

In [14]:
m0 = np.ones(len(actv))*1e-4
mrec = inv.run(m0)

# Create active map to return to full space after the inversion
actvMap = Maps.InjectActiveCells(mesh, actv, -100)

# Map to full space the final model and l2 model
m_lp = actvMap*mrec
m_l2 = actvMap*IRLS.l2model

# Once it is done, we can save the models (l2 and lp) to a file
# Mesh.TensorMesh.writeModelUBC(mesh, 'SimPEG_MAG_l2l2.sus',m_l2)
# Mesh.TensorMesh.writeModelUBC(mesh, 'SimPEG_MAG_lplq.sus',m_lp)
SimPEG.InvProblem will set Regularization.mref to m0.

    SimPEG.InvProblem is setting bfgsH0 to the inverse of the eval2Deriv.
    ***Done using same Solver and solverOpts as the problem***
model has any nan: 0
=============================== Projected GNCG ===============================
  #     beta     phi_d     phi_m       f      |proj(x-g)-x|  LS    Comment
x0 has any nan: 0
   0  1.41e+09  2.86e+05  0.00e+00  2.86e+05    1.45e+02      0
   1  7.05e+08  2.42e+05  1.43e-05  2.53e+05    1.44e+02      0
   2  3.53e+08  2.11e+05  4.74e-05  2.28e+05    1.43e+02      0   Skip BFGS
   3  1.76e+08  1.66e+05  1.40e-04  1.91e+05    1.42e+02      0   Skip BFGS
   4  8.81e+07  1.16e+05  3.47e-04  1.46e+05    1.40e+02      0   Skip BFGS
   5  4.41e+07  7.11e+04  7.08e-04  1.02e+05    1.37e+02      0   Skip BFGS
   6  2.20e+07  3.98e+04  1.21e-03  6.64e+04    1.33e+02      0   Skip BFGS
   7  1.10e+07  2.08e+04  1.81e-03  4.08e+04    1.28e+02      0   Skip BFGS
   8  5.51e+06  1.00e+04  2.48e-03  2.37e+04    1.24e+02      0   Skip BFGS
   9  2.75e+06  4.37e+03  3.18e-03  1.31e+04    1.22e+02      0   Skip BFGS
  10  1.38e+06  1.75e+03  3.82e-03  7.01e+03    1.20e+02      0   Skip BFGS
  11  6.89e+05  6.64e+02  4.34e-03  3.65e+03    1.19e+02      0   Skip BFGS
  12  3.44e+05  2.52e+02  4.74e-03  1.88e+03    1.17e+02      0   Skip BFGS
  13  1.72e+05  9.45e+01  5.04e-03  9.62e+02    1.13e+02      0   Skip BFGS
Convergence with smooth l2-norm regularization: Start IRLS steps...
L[p qx qy qz]-norm : [ 0.  1.  1.  1.  1.]
eps_p: 0.0003 eps_q: 0.0003
Regularization decrease: 2.508e+01
  14  1.72e+05  3.73e+01  5.25e-03  9.41e+02    1.28e+02      0   Skip BFGS
  15  1.72e+05  6.90e+01  3.40e-03  6.54e+02    8.92e+01      0
  16  1.72e+05  4.30e+01  3.20e-03  5.94e+02    8.52e+01      0
Regularization decrease: 2.685e-01
  17  3.50e+05  4.15e+01  3.20e-03  1.16e+03    1.33e+02      0
  18  3.50e+05  8.86e+01  2.02e-03  7.97e+02    1.00e+02      0
  19  3.50e+05  7.49e+01  1.98e-03  7.69e+02    1.20e+02      0
Regularization decrease: 2.794e-01
  20  3.71e+05  7.97e+01  1.98e-03  8.16e+02    9.29e+01      0
  21  3.71e+05  9.69e+01  1.51e-03  6.59e+02    9.05e+01      0
  22  3.71e+05  9.27e+01  1.50e-03  6.48e+02    1.17e+02      0
Regularization decrease: 1.797e-01
  23  3.31e+05  9.49e+01  1.50e-03  5.90e+02    7.22e+01      0
  24  3.31e+05  8.56e+01  1.29e-03  5.12e+02    1.21e+02      0
  25  3.31e+05  8.81e+01  1.27e-03  5.08e+02    1.02e+02      0
Regularization decrease: 1.204e-01
  26  3.31e+05  8.72e+01  1.27e-03  5.07e+02    7.77e+01      0
  27  3.31e+05  8.47e+01  1.13e-03  4.58e+02    9.30e+01      0
  28  3.31e+05  8.41e+01  1.12e-03  4.56e+02    9.84e+01      0
Regularization decrease: 9.629e-02
  29  3.31e+05  8.39e+01  1.12e-03  4.56e+02    6.28e+01      0
  30  3.31e+05  7.95e+01  1.03e-03  4.22e+02    8.13e+01      0
  31  3.31e+05  7.96e+01  1.03e-03  4.20e+02    7.91e+01      0
Regularization decrease: 6.913e-02
  32  3.52e+05  7.93e+01  1.03e-03  4.42e+02    9.58e+01      0
  33  3.52e+05  7.86e+01  9.55e-04  4.15e+02    1.02e+02      0
  34  3.52e+05  7.81e+01  9.54e-04  4.14e+02    8.20e+01      0
Regularization decrease: 5.969e-02
  35  3.81e+05  7.82e+01  9.54e-04  4.41e+02    9.95e+01      0
  36  3.81e+05  7.90e+01  8.92e-04  4.18e+02    9.94e+01      0
  37  3.81e+05  7.81e+01  8.91e-04  4.17e+02    8.71e+01      0
Regularization decrease: 4.909e-02
  38  4.12e+05  7.81e+01  8.91e-04  4.45e+02    1.01e+02      0
  39  4.12e+05  7.97e+01  8.41e-04  4.26e+02    1.03e+02      0
  40  4.12e+05  7.85e+01  8.39e-04  4.24e+02    6.73e+01      0
Regularization decrease: 3.719e-02
  41  4.43e+05  7.85e+01  8.39e-04  4.51e+02    8.81e+01      0
  42  4.43e+05  8.01e+01  7.95e-04  4.32e+02    1.04e+02      0
  43  4.43e+05  7.97e+01  7.94e-04  4.32e+02    6.55e+01      0
Reach maximum number of IRLS cycles: 10
------------------------- STOP! -------------------------
1 : |fc-fOld| = 0.0000e+00 <= tolF*(1+|f0|) = 2.8627e+04
1 : |xc-x_last| = 2.9704e-03 <= tolX*(1+|x0|) = 1.0168e-01
0 : |proj(x-g)-x|    = 6.5505e+01 <= tolG          = 1.0000e-01
0 : |proj(x-g)-x|    = 6.5505e+01 <= 1e3*eps       = 1.0000e-02
0 : maxIter   =     100    <= iter          =     44
------------------------- DONE! -------------------------

Now we can plot sections and compare the smooth and compact models with the true solution.

In [15]:
m_true = Mesh.TensorMesh.readModelUBC(mesh, downloads["Initm.sus"])
m_lp[m_lp==-100] = np.nan
m_l2[m_l2==-100] = np.nan
m_true[m_true==-100] = np.nan

fig = plt.figure()
vmin, vmax = 0.0, 0.015
xmin, xmax = -500 + 557300, 500 + 557300
ymin, ymax = -500 + 7133600, 500 + 7133600
zmin, zmax = -500 + 450, 0 + 450
indz = 17
indx = 17

# Axis label
x = np.linspace(xmin+200, xmax-200,3)
y = np.linspace(zmin+50, zmax-50,3)

ax1 = plt.subplot(1,1,1)
pos =  ax1.get_position()
ax1.set_position([pos.x0-0.1, pos.y0+0.3,  pos.width*0.5, pos.height*0.5])
dat = mesh.plotSlice(m_l2, ax = ax1, normal='Z', ind=indz, clim=np.r_[vmin, vmax],pcolorOpts={'cmap':'viridis'})
#     plt.colorbar(dat[0])
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.ylabel('Northing (m)')
labels = ax1.get_yticklabels()
plt.setp(labels, rotation=90)

# ax2 = plt.subplot(2,2,3)
pos =  ax1.get_position()
ax2 = fig.add_axes([pos.x0+0.0525, pos.y0 - 0.315,  pos.width*0.725, pos.height])
# ax2.yaxis.set_visible(False)
# ax2.set_position([pos.x0 -0.04 , pos.y0,  pos.width, pos.height])

dat = mesh.plotSlice(m_l2, ax = ax2, normal='Y', ind=indx, clim=np.r_[vmin, vmax],pcolorOpts={'cmap':'viridis'})
#     plt.colorbar(dat[0])
plt.xlim(xmin, xmax)
plt.ylim(zmin, zmax)
ax2.set_xticklabels(map(str, map(int, x)),size=12)
plt.xlabel('Easting (m)')
plt.ylabel('Elev. (m)')
ax2.set_yticklabels(map(str, map(int, y)),size=12)
labels = ax2.get_yticklabels()
plt.setp(labels, rotation=90)

## Add compact model
ax3 = fig.add_axes([pos.x0+0.3, pos.y0,  pos.width, pos.height])
dat = mesh.plotSlice(m_lp, ax = ax3, normal='Z', ind=indz, clim=np.r_[vmin, vmax],pcolorOpts={'cmap':'viridis'})
#     plt.colorbar(dat[0])
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)

ax4 = fig.add_axes([pos.x0+0.3525, pos.y0 - 0.315,  pos.width*0.725, pos.height])
# ax2.yaxis.set_visible(False)
# ax2.set_position([pos.x0 -0.04 , pos.y0,  pos.width, pos.height])

dat = mesh.plotSlice(m_lp, ax = ax4, normal='Y', ind=indx, clim=np.r_[vmin, vmax],pcolorOpts={'cmap':'viridis'})
#     plt.colorbar(dat[0])
plt.xlim(xmin, xmax)
plt.ylim(zmin, zmax)
ax4.set_xticklabels(map(str, map(int, x)),size=12)
# ylabel('Elev. (m)')

## Add True model
ax5 = fig.add_axes([pos.x0+0.6, pos.y0,  pos.width, pos.height])
dat = mesh.plotSlice(m_true, ax = ax5, normal='Z', ind=indz, clim=np.r_[vmin, vmax],pcolorOpts={'cmap':'viridis'})
#     plt.colorbar(dat[0])
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)

ax6 = fig.add_axes([pos.x0+0.6525, pos.y0 - 0.315,  pos.width*0.725, pos.height])
# ax2.yaxis.set_visible(False)
# ax2.set_position([pos.x0 -0.04 , pos.y0,  pos.width, pos.height])

dat = mesh.plotSlice(m_true, ax = ax6, normal='Y', ind=indx, clim=np.r_[vmin, vmax],pcolorOpts={'cmap':'viridis'})
#     plt.colorbar(dat[0])
plt.xlim(xmin, xmax)
plt.ylim(zmin, zmax)
ax6.set_xticklabels(map(str, map(int, x)),size=12)
# ylabel('Elev. (m)')

pos =  ax4.get_position()
cbarax = fig.add_axes([pos.x0 , pos.y0+0.05 ,  pos.width, pos.height*0.1])  ## the parameters are the specified position you set
cb = fig.colorbar(dat[0],cax=cbarax, orientation="horizontal", ax = ax4, ticks=np.linspace(vmin,vmax, 4),format='%.3f')
cb.set_label("Susceptibility (SI)",size=14)


We have inverted magnetic field data over a synthetic kimberlite pipe, using both a smooth and a compact penalty. The smooth model gives a conservative and robust estimate of the kimberlite pipe location, and provides an excellent starting point for the sparse regularization. The compact model, on the other hand, gives a much closer estimate of the susceptibility values and the shape of the magnetic anomaly. More details about the scaled IRLS method can be found in this thesis.