Commit 9dd2d5a3 authored by GODARD Vincent's avatar GODARD Vincent

dev app

---
title: 'Introducing the Eulerian solver for concentration calculations'
output:
  html_notebook:
    df_print: paged
---
# R Notebook usage
This is an [R Markdown]( Notebook. When you execute code within the notebook, the results appear beneath the code.
Execute a chunk of code by clicking the *Run* button within the chunk or by placing your cursor inside it and pressing *Ctrl+Shift+Enter*.
You can edit the code inside a chunk, for example to change parameter values.
Click on *Preview* to visualize the notebook (*Viewer* tab in the right panel).
# Objectives of this Notebook
This notebook presents a simple comparison of the Eulerian and Lagrangian approaches for the computation of cosmogenic nuclide concentration evolution in complex denudation and exposure scenarios.
# Setting up the simulation
We load the `TCNtools` package.
## Definition of parameters
We first define the basic parameters we are going to use for the computation, which are two vectors:
- a vector with the attenuation lengths for different particles (in g/cm$^2$)
- neutrons for spallation reactions $\Lambda_{spal}$
- stopping muons $\Lambda_{stop}$
- fast muons $\Lambda_{fast}$
- a vector (or matrix) with the SLHL production rates (in at/g/a), in this case for the *st* scaling scheme (@stone2000air), and decay constant $\lambda$ (in 1/a) for the nuclide(s) of interest.
```{r ck_2}
# Attenuation lengths
Lambda = c(160,1500,4320) # g/cm2
names(Lambda) <- c("Lspal","Lstop","Lfast") # we just give names to the element of the vector
# Production and decay parameters
prm = matrix(c( 4.01 , 0.012 , 0.039 , log(2)/1.36e6,
27.93 , 0.84 , 0.081 , log(2)/0.717e6),
nrow = 4,ncol=2 )
colnames(prm) <- c("Be10","Al26") # we just give names to the columns of the matrix
rownames(prm) <- c("Pspal","Pstop","Pfast","lambda") # we just give names to the rows of the matrix
# material density
rho = 2.7
```
We also need to define the properties of our site of interest and compute the relevant scaling parameters.
```{r }
altitude = 1000 # elevation in m
latitude = 45 # latitude in degrees
P = atm_pressure(alt=altitude,model="stone2000") # compute atmospheric pressure at site
S = scaling_st(P,latitude) # compute the scaling parameters according to Stone (2000)
```
## Definition of the denudation scenario
We will start from a steady state situation under constant denudation, and then apply a change in the denudation rate for a given period of time.
```{r}
tmax = 2e6 # duration in a
ero = 1/1.e6*100*rho # denudation rate conversion m/Ma -> g/cm2/a
fact = 5 # change factor
```
Here the duration of the simulation will be `r tmax/1e6` Ma, and the initial denudation rate will be `r ero*1e6/100/rho` m/Ma, which will change by a factor of `r fact` at the beginning of the simulation.
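As a quick standalone check of the unit conversion used above (values repeated from the scenario): a denudation rate of 1 m/Ma with a density of 2.7 g/cm$^3$ corresponds to 2.7 × 10$^{-4}$ g/cm$^2$/a.

```r
rho <- 2.7                   # material density (g/cm3)
ero_mMa <- 1                 # denudation rate (m/Ma)
# 1 m/Ma = 100 cm per 1e6 a; multiplying by rho converts length flux to mass flux
ero <- ero_mMa/1e6*100*rho   # -> g/cm2/a
```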
# Computation of concentrations
## Eulerian reference frame
We compute the evolution of concentrations at the surface ($z=0$) using an Eulerian point of view.
The calculation is very straightforward thanks to the `solv_conc_eul` function.
```{r}
df_e = data.frame(t=seq(0,tmax,length.out = 10000))
df_e$C10 = solv_conc_eul(0,ero*fact,df_e$t,0,prm[,'Be10'],S,Lambda,in_ero=ero) # 10Be concentration
df_e$C26 = solv_conc_eul(0,ero*fact,df_e$t,0,prm[,'Al26'],S,Lambda,in_ero=ero) # 26Al concentration
```
## Lagrangian reference frame
Now we do the same calculation using a Lagrangian point of view, i.e. with a reference frame attached to a rock particle during its journey toward the surface in response to denudation. It is slightly more complicated than in the Eulerian case. This is done using the `solv_conc_lag` function.
We first create a dataframe (`df_l`) and compute the evolution in depth `z` of the particle through time as it is exhumed from its initial depth (here `r ero*fact*tmax/rho/100` m) to the surface at a rate of `r ero*fact/rho/100*1e6` m/Ma.
```{r}
df_l = data.frame(t=seq(0,tmax,length.out = 10000))
df_l$z = ero*fact*df_l$t # g/cm2
df_l$z = max(df_l$z) - df_l$z
```
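A minimal standalone check of this trajectory (parameter values repeated from the scenario above): the particle should start at the 10 m initial depth quoted earlier and reach the surface exactly at `tmax`.

```r
rho  <- 2.7                 # g/cm3
ero  <- 1/1e6*100*rho       # 1 m/Ma expressed in g/cm2/a
fact <- 5                   # denudation increased by a factor of 5
tmax <- 2e6                 # duration (a)
t <- seq(0, tmax, length.out = 10000)
z <- ero*fact*t             # mass removed above the particle (g/cm2)
z <- max(z) - z             # depth below surface, decreasing to 0
z0_m <- max(z)/rho/100      # initial depth converted back to metres
```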
We compute the steady-state concentration at the starting depth, using the Eulerian solver.
```{r}
C10_0 = solv_conc_eul(max(df_l$z),ero,Inf,0,prm[,'Be10'],S,Lambda)
C26_0 = solv_conc_eul(max(df_l$z),ero,Inf,0,prm[,'Al26'],S,Lambda)
```
We then define another column with the evolution of scaling parameters through time. It is trivial here in the case of the time-independent **st**
scaling, but it will be highly valuable when dealing with time-dependent scalings, where the Eulerian approach is not applicable.
```{r}
df_l$Ssp = rep(as.numeric(S[1]),nrow(df_l))
df_l$Smu = rep(as.numeric(S[2]),nrow(df_l))
```
Then we calculate the un-scaled production rates at the depths of interest. Here again, as we use simple models of exponential decrease of production with depth, the interest of the Lagrangian approach is not obvious. But this way of computing concentrations allows us to deal with non-exponential production profiles, which might be necessary in some situations for muons or low-energy neutrons.
```{r}
# 10Be (not scaled)
df_l$Psp10 = prm[1,'Be10']*exp(-1*df_l$z/Lambda[1]) # spallation
df_l$Pmu10 = prm[2,'Be10']*exp(-1*df_l$z/Lambda[2]) + prm[3,'Be10']*exp(-1*df_l$z/Lambda[3]) # muons
# 26Al (not scaled)
df_l$Psp26 = prm[1,'Al26']*exp(-1*df_l$z/Lambda[1]) # spallation
df_l$Pmu26 = prm[2,'Al26']*exp(-1*df_l$z/Lambda[2]) + prm[3,'Al26']*exp(-1*df_l$z/Lambda[3]) # muons
```
We can now use the `solv_conc_lag` function to compute the evolution of concentration through time and depth.
```{r}
df_l$C10 = solv_conc_lag(df_l$t,df_l$z,C10_0,df_l$Psp10,df_l$Pmu10,prm[4,'Be10'],cbind(df_l$Ssp,df_l$Smu),final=FALSE)
df_l$C26 = solv_conc_lag(df_l$t,df_l$z,C26_0,df_l$Psp26,df_l$Pmu26,prm[4,'Al26'],cbind(df_l$Ssp,df_l$Smu),final=FALSE)
```
# Comparison of results
We can now plot the results of the calculations and compare the two approaches. First for $^{10}$Be.
```{r}
col_e = "chartreuse" # color for the Eulerian curve
col_l = "chocolate1" # color for the Lagrangian curve
plot(df_e$t/1e6,df_e$C10/1e6,type="l",col=col_e,lwd=2, xlim=range(df_e$t/1e6),ylim=range(df_e$C10/1e6,df_l$C10/1e6),
     xlab="Time (Ma)",ylab="10Be concentration (x10e6 at/g)")
lines(df_l$t/1e6,df_l$C10/1e6,col=col_l,lwd=2) # Lagrangian evolution
```
The Lagrangian evolution starts with a low concentration corresponding to the burial at depth of the particle at the onset of the evolution, while at the same time ($t=0$) we are looking at the surface with the Eulerian point of view, and therefore observe much higher concentrations.
The two descriptions converge toward the same concentration at the end of the evolution (i.e. when the Lagrangian particle reaches the surface).
We can also draw the corresponding two-nuclide plot for $^{10}$Be and $^{26}$Al.
We first need to compute the steady-state erosion and constant exposure lines.
```{r}
# steady state erosion line
erosion = 10^seq(log10(0.0001),log10(2000),length.out = 100)/1.e6*100*rho # m/Ma -> g/cm2/a
C10Be = solv_conc_eul(0,erosion,Inf,0,prm[,'Be10'],S,Lambda)
C26Al = solv_conc_eul(0,erosion,Inf,0,prm[,'Al26'],S,Lambda)
ss_erosion =,C10Be,C26Al))
# constant exposure line
time = 10^seq(log10(100),log10(200e6),length.out = 100) # a
C10Be = solv_conc_eul(0,0,time,0,prm[,'Be10'],S,Lambda)
C26Al = solv_conc_eul(0,0,time,0,prm[,'Al26'],S,Lambda)
ss_exposure =,C10Be,C26Al))
```
And then plot.
```{r}
plot(NA, xlim=range(df_e$C10/1e6,df_l$C10/1e6),log="x", ylim=range(df_e$C26/df_e$C10,df_l$C26/df_l$C10),
     xlab="10Be concentration (x10e6 at/g)",ylab="26Al/10Be")
lines(ss_erosion$C10Be/1e6,ss_erosion$C26Al/ss_erosion$C10Be,lty=1) # steady-state erosion line
lines(ss_exposure$C10Be/1e6,ss_exposure$C26Al/ss_exposure$C10Be,lty=2) # constant exposure line
lines(df_e$C10/1e6,df_e$C26/df_e$C10,col=col_e,lwd=2) # Eulerian trajectory
lines(df_l$C10/1e6,df_l$C26/df_l$C10,col=col_l,lwd=2) # Lagrangian trajectory
```
Note that the steady-state erosion and constant exposure lines are only relevant for the Eulerian trajectory.
---
title: 'Scaling 1 : time-independent'
author: 'Vincent Godard'
date: "`r Sys.Date()`"
bibliography: '`r system.file("tcntools.bib",package = "TCNtools") `'
output:
  html_document:
    toc: true
    toc_float: true
    number_sections: yes
    df_print: paged
    code_folding: show
    code_download: true
---
# Introduction
## How to use this document?
This html page is derived from an [R Markdown]( Notebook.
You can copy/paste the various lines of code into your own R script and run it in any R session.
Alternatively, you can download the Notebook (upper-right menu on this page) and open it in the RStudio IDE.
You can execute a chunk of code by clicking the *Run* button within the chunk or by placing your cursor inside it and pressing *Ctrl+Shift+Enter*.
When you execute code within the notebook, the results appear beneath the code.
You can modify the code (value of parameters, etc.) and see how the outputs are affected.
## Objectives
We are going to present the most widely used and simplest scaling scheme known as Lal-Stone and often abbreviated as **st**.
The main equations are presented in the reference article by @stone2000air .
Our goal is to describe the variations of these scaling factors with latitude and elevation, which are the main parameters controlling the production rates of cosmogenic nuclides at the Earth surface.
The first thing we have to do is to load the `TCNtools` library (once it has been installed).
# Simple calculation
## Site characteristics
We first need to define some parameters concerning the site of interest :
- latitude `lat` in degrees
- altitude `z` in meters (can be a vector or a scalar)
- longitude `lon` in degrees, this is not used for *st* scaling (@stone2000air), just in case we want to compute atmospheric pressure according to ERA40 (@uppala2005era40).
```{r}
lat = 30 # latitude
lon = 30 # longitude
z = seq(0,3000,by=100) # vector from 0 to 3000 m by 100 m increments
```
Now we can compute the atmospheric pressure, with the function `atm_pressure` according to the two models available, and then plot for comparison.
Here `z` is a vector to see the variations over a range of elevations.
To get information about the usage of the function used here (for example what are the different models) type `?atm_pressure` in the R console.
```{r}
P1 = atm_pressure(alt=z,model="stone2000") # Stone (2000) standard atmosphere
P2 = atm_pressure(alt=z,lon=lon,lat=lat,model="era40") # ERA40 climatology
plot(P1,z,type="l",xlab="Pressure (hPa)",ylab="Altitude (m)")
lines(P2,z,lty=2)
legend("topright",c("Stone 2000","ERA40"),lty=c(1,2))
```
## Computation of scaling factors
We can now compute the scaling factors according to @stone2000air.
Same as above, to get some information about the function (parameter definitions) type `?scaling_st` in the R console.
```{r}
st = scaling_st(P1,lat) # here we use the pressure according to Stone 2000 model
```
The result is stored in `st` as a dataframe with as many rows as there are elements in the input pressure vector (`P1`) and two columns named `Nneutrons` and `Nmuons`, for the spallogenic and muogenic contributions, respectively.
We can plot the evolution with elevation, which illustrates the major influence of the sampling site's altitude on the local production rate.
```{r}
plot(st$Nneutrons,z,type="l",
     xlab="Spallogenic st scaling factor (Stone 2000)",ylab="Altitude (m)",
     main=paste("Latitude ",lat,"°",sep=""))
```
# Global variations
In order to get a better idea of the variations with both latitude (from 0 to 90°) and elevation (from sea level to 3000 m) we can try the following plot.
```{r}
P = atm_pressure(alt=0,model="stone2000") # compute pressure at sea level
lat = seq(0,90,by=1) # latitude vector
n = length(lat) # size of vector
st = scaling_st(P,lat) # compute scaling
plot(lat,st$Nneutrons,type="l",
     xlab="Latitude (°)",ylab="Spallogenic st scaling factor (Stone 2000)")
text(lat[n],st$Nneutrons[n],"0 km",cex=0.5,adj=0) # put label at the end of curve
for (z in seq(500,3000,by=500)){ # loop on elevations : same as above for a range of elevations
  P = atm_pressure(alt=z,model="stone2000")
  st = scaling_st(P,lat)
  lines(lat,st$Nneutrons)
  text(lat[n],st$Nneutrons[n],paste(z/1000,"km"),cex=0.5,adj=0)
}
```
This dependence of the scaling factor on latitude is a direct consequence of the dipolar structure of the Earth's magnetic field, with a higher cosmic-ray flux at high latitudes.
# References
solv_conc_eul <- function(z,ero,t,C0,p,S,L,in_ero=NULL){
  p = as.numeric(p)
  L = as.numeric(L)
  S = as.numeric(S)
  # concentration acquired over the time increment
  Cspal = (S[1]*p[1])/((ero/L[1])+p[4])*exp(-1*z/L[1])*(1-exp(-1*(p[4]+(ero/L[1]))*t))
  Cstop = (S[2]*p[2])/((ero/L[2])+p[4])*exp(-1*z/L[2])*(1-exp(-1*(p[4]+(ero/L[2]))*t))
  Cfast = (S[2]*p[3])/((ero/L[3])+p[4])*exp(-1*z/L[3])*(1-exp(-1*(p[4]+(ero/L[3]))*t))
  C_produced = Cspal + Cstop + Cfast
  if (is.null(in_ero)) { # inheritance constant with depth, defined by the C0 value
    C_inherited = C0*exp(-1*p[4]*t)
  } else { # inheritance from steady-state denudation at rate in_ero
    Cspal_ss = (S[1]*p[1])/((in_ero/L[1])+p[4])*exp(-1*ero*t/L[1])
    Cstop_ss = (S[2]*p[2])/((in_ero/L[2])+p[4])*exp(-1*ero*t/L[2])
    Cfast_ss = (S[2]*p[3])/((in_ero/L[3])+p[4])*exp(-1*ero*t/L[3])
    C_inherited = (Cspal_ss + Cstop_ss + Cfast_ss)*exp(-1*p[4]*t)
  }
  return(C_produced + C_inherited)
}
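As a sanity check on the spallation term above: at the surface ($z=0$) and for $t \to \infty$ it reduces to the classical steady state $C = S\,P_{spal}/(\lambda + \varepsilon/\Lambda_{spal})$. A standalone computation with the $^{10}$Be parameters used in the notebooks (scaling factor set to 1 here for illustration):

```r
P_spal <- 4.01             # SLHL 10Be spallation production rate (at/g/a)
lambda <- log(2)/1.36e6    # 10Be decay constant (1/a)
L_spal <- 160              # spallation attenuation length (g/cm2)
ero    <- 2.7e-4           # denudation rate: 1 m/Ma at rho = 2.7 (g/cm2/a)
S      <- 1                # scaling factor (assumed = 1 for this check)
C_ss <- S*P_spal/(lambda + ero/L_spal)  # steady-state surface concentration (at/g)
```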
Nneutrons=a + (b * exp(-1*P/150.)) + (c*P) + (d*P^2) + (e*P^3)
Nmuons=M*exp((1013.25 - P)/242)
res = data.frame(Nneutrons,Nmuons)
Ps=1013.25 # sea level pressure (hPa)
Ts=288.15 # sea level temperature (K)
ksi=0.0065 # adiabatic lapse rate (K/m)
gMR=0.03417 # g*M/R (K/m)
#hydrostatic pressure (equation 1 of Stone 2000)
P=Ps*exp(-1*gMR/ksi*(log(Ts) - log(Ts - ksi* alt)))
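The hydrostatic relation above (equation 1 of Stone 2000) can be checked directly: at sea level it must return the reference pressure `Ps`, and pressure must decrease with altitude. A minimal sketch wrapping the same expression in a function:

```r
stone_pressure <- function(alt, Ps = 1013.25, Ts = 288.15, ksi = 0.0065, gMR = 0.03417) {
  # hydrostatic pressure (hPa) as a function of altitude (m), equation 1 of Stone (2000)
  Ps*exp(-gMR/ksi*(log(Ts) - log(Ts - ksi*alt)))
}
p0 <- stone_pressure(0)     # reference pressure at sea level
p1 <- stone_pressure(1000)  # lower pressure at 1000 m
```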
solv_ss_eros_eul <- function(Cobs,p,S,L,Cobs_e=0,method="single",n=10000){
  Cmax = solv_conc_eul(0,0,Inf,0,p,S,L)
  if (Cobs>Cmax){stop("Observed concentration higher than theoretical maximum")}
  if (!(method %in% c("single","MC","MCMC"))){stop("Parameter method should be one of single, MC or MCMC")}
  if (method!="single" & Cobs_e==0){stop("Need to define the error on the observed concentration")}
  if (method == "single") {
    res = uniroot(fun_solv_ss_eros_eul,c(0,1),Cobs,p,S,L,tol=1e-5)
  } else if (method == "MC") {
    C0 = rtruncnorm(n,a=0,mean=Cobs,sd=Cobs_e)
    ero = rep(NA,n)
    val = rep(NA,n)
    iter = rep(NA,n)
    prec = rep(NA,n)
    for (i in 1:n){
      # note: the upper bound of the interval must correspond to a maximum erosion
      # rate fast enough to accommodate low concentrations
      a = uniroot(fun_solv_ss_eros_eul,c(0,1),C0[i],p,S,L,tol=1e-6)
      ero[i] = a$root
      val[i] = a$f.root
      iter[i] = a$iter
      prec[i] = a$estim.prec
    }
    res = data.frame(C0,ero,val,iter,prec)
  } else if (method == "MCMC") {
    ero0 = uniroot(fun_solv_ss_eros_eul,c(0,0.1),Cobs,p,S,L)$root
    binf = max(0,ero0*(1-10*Cobs_e/Cobs))
    bsup = ero0*(1+10*Cobs_e/Cobs)
    res = run_mcmc1(p,S,L,Cobs,Cobs_e,binf,bsup,n)
  }
  return(res)
}
fun_solv_ss_eros_eul <- function(ero,Cobs,p,S,L){
  # difference between the predicted steady-state concentration and the observation
  C = solv_conc_eul(0,ero,4.55e9,0,p,S,L)
  return(C - Cobs)
}
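`uniroot` finds the denudation rate at which this difference crosses zero. A self-contained toy example of the same root-finding pattern, using a simplified single-pathway steady-state model (all names and values here are assumptions for illustration, not the package's forward model):

```r
# simplified steady-state concentration: C = P/(lambda + ero/L)
conc_toy <- function(ero, P = 4, lambda = 5e-7, L = 160) P/(lambda + ero/L)
f <- function(ero, Cobs) conc_toy(ero) - Cobs   # zero at the matching denudation rate
Cobs <- 1e6                                     # observed concentration (at/g)
sol <- uniroot(f, c(0, 1), Cobs = Cobs, tol = 1e-12)
ero_hat <- sol$root                             # inferred denudation rate (g/cm2/a)
```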
# Define UI for app that draws a histogram ----
ui <- fluidPage(
# img(src='amu.png', align = "right", width=100),
# App title ----
titlePanel("Transient erosion response"),
# Sidebar layout with input and output definitions ----
# Sidebar panel for inputs ----
# input radio button
radioButtons(inputId = "N1",
label = "Nuclide 1:",
inline = TRUE,
choices = c("10Be" = "Be10",
"26Al" = "Al26",
"14C" = "C14"),
selected = "Be10"),
# input radio button
radioButtons(inputId = "N2",
label = "Nuclide 2:",
inline = TRUE,
choices = c("10Be" = "Be10",
"26Al" = "Al26",
"14C" = "C14"),
selected = "C14"),
sliderInput(inputId = "lat",
label = "Latitude (°):",
min = 0,
max = 90,
step = 1,
value = 40),
sliderInput(inputId = "alt",
label = "Altitude (m):",
min = 0,
max = 5000,
step = 100,
value = 2000),
label = "Duration (ka):",
selected=50, grid = T),
label = "Initial denudation rate (m/Ma):",
selected=50, grid = T),
sliderInput(inputId = "fact",
label = "Factor of variation :",
min = 1,
max = 10,
step = 1,
value = 2),
radioButtons(inputId = "variation",
label = "Type of variation :",
inline = TRUE,
choices = c("Increase" = 1,
"Decrease" = -1))
# Main panel for displaying outputs ----
# Output: ----
plotOutput(outputId = "distPlot"),
# plotOutput(outputId = "distPlot2")
# Define server logic required to draw a histogram ----
server <- function(input, output) {
output$distPlot <- renderPlot({
# length unit: cm
# weight unit: g
# time unit: year
# Attenuation lengths
Lambda = c(160,1500,4320) # g/cm2
names(Lambda) <- c("Lspal","Lstop","Lfast") # we just give names to the element of the vector
# Production and decay parameters
prm = matrix(c( 4.01 , 0.012 , 0.039 , log(2)/1.36e6,
27.93 , 0.84 , 0.081 , log(2)/0.717e6,
12.3 , 3.31 , 0 ,log(2)/5730),
nrow = 4,ncol=3 )
colnames(prm) <- c("Be10","Al26","C14") # we just give names to the columns of the matrix
rownames(prm) <- c("Pspal","Pstop","Pfast","lambda") # we just give names to the rows of the matrix
# material density
rho = 2.7
col1 = "blue"
col2 = "red"
# retrieve the input variables
N1 = input$N1
N2 = input$N2
altitude = input$alt
latitude = input$lat
tmax = input$tmax*1e3
ero = input$ero*100/1e6*rho # m/Ma -> g/cm2/a
fact = input$fact^as.numeric(input$variation)
time = seq(0,tmax,length.out = 100)
P = atm_pressure(alt=altitude,model="stone2000") # compute atmospheric pressure at site
st = scaling_st(P,latitude) # compute the scaling parameters according to Stone (2000)
layout(matrix(c(1,2), nrow = 1, ncol = 2, byrow = TRUE), widths =c(1,1))
C1 = solv_conc_eul(0,ero*fact,time,0,prm[,N1],st,Lambda,in_ero=ero) # compute concentration N1
C2 = solv_conc_eul(0,ero*fact,time,0,prm[,N2],st,Lambda,in_ero=ero) # compute concentration N2
plot(time/1e3,C1/1e3,ylim=range(C1,C2)/1000,type="l",col=col1,lwd=2,xlab="Time (ka)",ylab="Concentration (x1000 at/g)")
lines(time/1e3,C2/1e3,col=col2,lwd=2)
ero1 = rep(0,length(C1))
ero2 = rep(0,length(C2))
for (i in 1:length(C1)){
  ero1[i] = solv_ss_eros_eul(C1[i],prm[,N1],st,Lambda)$root/100*1e6/rho
  ero2[i] = solv_ss_eros_eul(C2[i],prm[,N2],st,Lambda)$root/100*1e6/rho
}
plot(time/1e3,ero1,ylim=range(ero1,ero2,c(ero,ero*fact)/100*1e6/rho),type="l",col=col1,lwd=2,xlab="Time (ka)",ylab="Denudation rate (m/Ma)")
lines(time/1e3,ero2,col=col2,lwd=2)
# Create Shiny app ----
shinyApp(ui = ui, server = server)
abstract = {ERA-40 is a re-analysis of meteorological observations from September 1957 to August 2002 produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) in collaboration with many institutions. The observing system changed considerably over this re-analysis period, with assimilable data provided by a succession of satellite-borne instruments from the 1970s onwards, supplemented by increasing numbers of observations from aircraft, ocean-buoys and other surface platforms, but with a declining number of radiosonde ascents since the late 1980s. The observations used in ERA-40 were accumulated from many sources. The first part of this paper describes the data acquisition and the principal changes in data type and coverage over the period. It also describes the data assimilation system used for ERA-40. This benefited from many of the changes introduced into operational forecasting since the mid-1990s, when the systems used for the 15-year ECMWF re-analysis (ERA-15) and the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis were implemented. Several of the improvements are discussed. General aspects of the production of the analyses are also summarized. A number of results indicative of the overall performance of the data assimilation system, and implicitly of the observing system, are presented and discussed. The comparison of background (short-range) forecasts and analyses with observations, the consistency of the global mass budget, the magnitude of differences between analysis and background fields and the accuracy of medium-range forecasts run from the ERA-40 analyses are illustrated. Several results demonstrate the marked improvement that was made to the observing system for the southern hemisphere in the 1970s, particularly towards the end of the decade. 
In contrast, the synoptic quality of the analysis for the northern hemisphere is sufficient to provide forecasts that remain skilful well into the medium range for all years. Two particular problems are also examined: excessive precipitation over tropical oceans and a too strong Brewer-Dobson circulation, both of which are pronounced in later years. Several other aspects of the quality of the re-analyses revealed by monitoring and validation studies are summarized. Expectations that the 'second-generation' ERA-40 re-analysis would provide products that are better than those from the first-generation ERA-15 and NCEP/NCAR re-analyses are found to have been met in most cases. {\textcopyright} Royal Meteorological Society, 2005.},
author = {Uppala, S. M. and KÅllberg, P. W. and Simmons, Adrian J. and Andrae, U. and Bechtold, V. Da Costa and Fiorino, M. and Gibson, J. K. and Haseler, J. and Hernandez, A. and Kelly, G. A. and Li, X. and Onogi, K. and Saarinen, S. and Sokka, N. and Allan, R. P. and Andersson, E. and Arpe, K. and Balmaseda, M. A. and Beljaars, A. C. M. and Berg, L. Van De and Bidlot, J. and Bormann, N. and Caires, S. and Chevallier, F. and Dethof, A. and Dragosavac, M. and Fisher, M. and Fuentes, M. and Hagemann, S. and H{\'{o}}lm, E. and Hoskins, B. J. and Isaksen, L. and Janssen, P. A. E. M. and Jenne, R. and Mcnally, A. P. and Mahfouf, J.-F. and Morcrette, J.-J. and Rayner, N. A. and Saunders, R. W. and Simon, P. and Sterl, A. and Trenberth, K. E. and Untch, A. and Vasiljevic, D. and Viterbo, P. and Woollen, J.},
doi = {10.1256/qj.04.176},
issn = {00359009},
journal = {Quarterly Journal of the Royal Meteorological Society},
keywords = {Data assimilation,Numerical weather prediction,Observing system},
month = {oct},
number = {612},
pages = {2961--3012},
publisher = {John Wiley {\&} Sons, Ltd},
title = {{The ERA-40 re-analysis}},
url = {},
volume = {131},
year = {2005}
abstract = {The cosmic ray flux increases at higher altitude as air pressure and the shielding effect of the atmosphere decrease. Altitude-dependent scaling factors are required to compensate for this effect in calculating cosmic ray exposure ages. Scaling factors in current use assume a uniform relationship between altitude and atmospheric pressure over the Earth's surface. This masks regional differences in mean annual pressure and spatial variation in cosmogenic isotope production rates. Outside Antarctica, air pressures over land depart from the standard atmosphere by ±4.4 hPa (1$\sigma$) near sea level, corresponding to offsets of ±3–4{\%} in isotope production rates. Greater offsets occur in regions of persistent high and low pressure such as Siberia and Iceland, where conventional scaling factors predict production rates in error by ±10{\%}. The largest deviations occur over Antarctica where ground level pressures are 20–40 hPa lower than the standard atmosphere at all altitudes. Isotope production rates in Antarctica are therefore 25–30{\%} higher than values calculated by scaling Northern Hemisphere production rates with conventional scaling factors. Exposure ages of old Antarctic surfaces, especially those based on cosmogenic radionuclides at levels close to saturation, may be millions of years younger than published estimates.},
author = {Stone, John O.},