A new dust cycle model with dynamic vegetation: LPJ-dust version 1.0

S. Shannon and D. J. Lunt
Bristol Research Initiative for the Dynamic Global Environment (BRIDGE), School of Geographical Sciences, University of Bristol, University Road, Bristol, BS8 1SS, UK
Received: 7 April 2010 – Accepted: 12 April 2010 – Published: 23 April 2010
Correspondence to: S. Shannon (sarah.shannon@bristol.ac.uk)
Published by Copernicus Publications on behalf of the European Geosciences Union.


Introduction
Mineral dust plays an interactive role in the Earth system by modifying the radiation balance (Forster et al., 2007) and transporting nutrients to terrestrial (Kaufman et al., 2005; Menendez et al., 2007) and marine ecosystems (Coale et al., 2004; Jickells et al., 2005). Observations show that vegetation cover may play a role in constraining dust emissions on seasonal and inter-annual time scales (Zhao, 2004; Lee and Sohn, 2009). In the Sahel, a three-way connection between rainfall, vegetation and dust emissions has been suggested, whereby a decrease in precipitation in the Sahel causes a reduction in vegetation cover, which increases dust emissions (Evan et al., 2006). This theory is supported by measurements of the Normalised Difference Vegetation Index (NDVI) in the Sahel, which show that vegetation cover responds to changes in precipitation (Tucker et al., 1991). Studies have shown that this response occurs relatively quickly: NDVI has been correlated with rainfall for the concurrent month plus the two previous months (Nicholson et al., 1990; Herrmann et al., 2005). Current dust cycle models, however, are unable to simulate this fast response. Two categories of dust cycle models have been developed to date: models which use remote sensing data to describe vegetation cover on the land surface (e.g. Zender et al., 2003; Ginoux et al., 2004; Grini et al., 2005; Cakmur et al., 2006) and models which use vegetation models, typically equilibrium biogeography-biogeochemistry models (BIOME3 or BIOME4), to simulate the distribution of vegetation cover (e.g. Werner et al., 2002; Mahowald et al., 2002; Lunt and Valdes, 2002; Mahowald et al., 1999). The latter category can be used as predictive tools to estimate how the dust loading will change in the future, or how it changed in the past, under different climatic conditions.
Dust cycle models which use BIOME3 or BIOME4 are unable to simulate the inter-annual variability and seasonality in dust source areas caused by the dynamic response of vegetation cover to the climate. As a consequence, it is not possible to test whether changes in the dust loading are caused by variability in vegetation cover or by other processes. For this reason a new dust cycle model is developed which uses the Lund-Potsdam-Jena dynamic global vegetation model (LPJ) (Sitch et al., 2003) to simulate the dynamic vegetation on the land surface.
As with any numerical model of a physical system, uncertainty in the model results will arise from parametric and structural uncertainty and from uncertainty in the input data used to drive the model. Parametric uncertainty in a dust model may be associated with the values of the threshold limits for vegetation cover, soil moisture, snow cover and threshold friction velocity used to calculate surface emissions. Lunt and Valdes (2002) showed that the dust loading is very sensitive to the choice of values for these thresholds. For example, they found that increasing the threshold friction velocity from 0.4 to 0.6 m s−1 decreased the dust loading by a factor of 19.
A way to constrain the threshold limits is to perform a model tuning. One strategy for tuning is to produce an ensemble of models by selecting certain values for model parameters, and then selecting from these a subset of models which perform well compared to observations. A way to select values for parameters is to use Latin Hypercube Sampling (McKay et al., 1979). This approach has been taken by Edwards and Marsh (2005) to tune parameters in a 3-D ocean climate model and by Schneider von Deimling et al. (2006) to tune parameters in the CLIMBER-2 intermediate complexity climate model. The technique divides the range of each tunable parameter into N intervals of equal probability (1/N). One sample is selected at random from each interval and matched up randomly with a sample selected for another parameter. The advantage of this technique over randomly choosing values is that it ensures that all regions of parameter space are evenly sampled. In this paper, Latin Hypercube Sampling is used to select values for tunable parameters in the model.

A source of structural uncertainty in the model arises from the choice of parameterisation for sub-cloud scavenging. Jung and Shao (2006) examined the characteristics of four different sub-cloud scavenging schemes within the framework of a dust cycle model. They found that the choice of sub-cloud scavenging scheme affected the ability of the model to accurately predict surface concentrations of dust at selected locations in Asia. Furthermore, the scavenging coefficient deviated by a factor of 1000 depending on the precipitation rate and particle size. To reduce the structural uncertainty associated with wet deposition, three sub-cloud scavenging schemes are tested in this paper as part of the model tuning.

This paper presents a description of the new dust cycle model and its tuning. The layout of the paper is as follows: in Sect. 2, the dust model is described. This includes details of how dust source areas are calculated from LPJ, a description of the dust emission scheme, the chemical transport model and the parameterisation of wet and dry deposition. A baseline dust simulation is described in Sect. 2.4. The method used for selecting values for threshold parameters is described in Sect. 2.5. The three types of sub-cloud scavenging schemes are described in Sect. 2.6. The measurement datasets used to evaluate the model performance are described in Sect. 2.7. Finally, the results of the model tuning and potential applications of the model are discussed in Sect. 3.
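As an illustration of the stratified sampling described above, the following is a minimal Python sketch (the function and variable names are ours, not from the model code; the parameter ranges shown are those chosen later in Sect. 2.5):

```python
import random

def latin_hypercube(param_ranges, n_samples, seed=0):
    """Draw n_samples Latin Hypercube samples for each named parameter.

    Each parameter range is split into n_samples intervals of equal
    probability; one value is drawn at random from each interval, and the
    interval order is shuffled independently per parameter, so intervals
    from different parameters are matched up randomly."""
    rng = random.Random(seed)
    columns = {}
    for name, (lo, hi) in param_ranges.items():
        width = (hi - lo) / n_samples
        # one random value per interval, then shuffle the interval order
        values = [lo + (i + rng.random()) * width for i in range(n_samples)]
        rng.shuffle(values)
        columns[name] = values
    # transpose into a list of parameter sets, one per ensemble member
    return [{name: columns[name][i] for name in param_ranges}
            for i in range(n_samples)]

# ranges for the four tunable parameters (Sect. 2.5)
ranges = {"mfpar_lim": (0.2, 0.5), "sm_lim": (10.0, 25.0),
          "sd_lim": (0.01, 0.1), "eta": (0.4, 1.0)}
ensemble = latin_hypercube(ranges, n_samples=20)
```

By construction, the 20 sampled values of each parameter fall one per interval, so the whole range is covered evenly, unlike plain random sampling.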


Dust model description
The dust model comprises three existing models. LPJ (Sitch et al., 2003) is used to calculate the distribution of un-vegetated areas which may act as potential dust sources. This is linked to an existing model which calculates dust emissions by simulating the processes of saltation and sandblasting (Tegen et al., 2002). Dust particles are transported as independent tracers within the TOMCAT chemical transport model (Chipperfield, 2006). Dust is removed from the atmosphere by dry deposition and sub-cloud scavenging following Lunt and Valdes (2002). The following section describes the three components of the dust model.

Calculation of dust source areas using LPJ
LPJ simulates vegetation dynamics by modeling the atmosphere-vegetation carbon and water fluxes, plant physiology, phenology, establishment and mortality. LPJ calculates daily gross primary production (GPP) by modeling the processes of photosynthesis and transpiration using a coupled photosynthesis and water balance scheme developed in the BIOME3 model (Haxeltine and Prentice, 1996). A fraction of the GPP produced is used for plant respiration. The remaining fraction, known as the net primary production (NPP), is allocated to the leaf, sapwood and fine root carbon pools, satisfying a series of structural constraints.
Vegetation is grouped into ten plant functional types (PFTs) which are categorised according to their physiological (C3 or C4 photosynthesis), phenological (deciduous or evergreen) and physiognomic (tree or grass) attributes. Plant mortality by fire, heat stress, competition for light and insufficient carbon for growth is modeled on an annual basis. Every year a proportion of the total vegetation cover decomposes and falls to the surface as litter, and new vegetation is established. A set of bioclimatic limits is used to determine whether a PFT can survive within a particular temperature range.
The establishment of new PFTs is prohibited when the annual precipitation is less than 100 mm yr−1.

LPJ is forced using annual mean atmospheric CO2 and monthly mean precipitation, fractional cloud cover and temperature. In this paper, these are obtained from the Climatic Research Unit, University of East Anglia, UK (CRU 2.1), but they could equally be obtained from a model, as in Lunt and Valdes (2002) or Mahowald (2006).
Historical CO2 data from 1901 to 1995 are obtained from the Carbon Cycle Model Linkage project (McGuire et al., 2001; Cramer et al., 1999). Information on soil texture is taken from the Food and Agriculture Organization/United Nations Educational, Scientific and Cultural Organization (FAO/UNESCO) soil map of the world (Zobler, 1986). This is used to calculate the daily percolation of water from the upper soil layer to the lower soil layer.
LPJ is run on a 0.5×0.5 degree spatial resolution. The simulation begins with no vegetation cover and is allowed to spin up for 1000 years so that the vegetation cover and carbon pools reach equilibrium. This is achieved by forcing the model repeatedly with the first 30 years of the CRU climate for 1000 years. After this, the model is forced by 102 years of the CRU climate.

LPJ outputs used to calculate dust source areas
The following variables are output annually and used to calculate monthly dust source areas:

1. Annual foliage projective cover (FPC). The FPC has a value of 1 if the grid cell is completely covered in vegetation and 0 if no vegetation cover is present. The FPC is calculated from FPC_PFT, the fractional coverage of each PFT in a grid cell, using the following relationship:

FPC = Σ_PFT CA(PFT) P(PFT) FPC_PFT

where CA(PFT) is the crown area and P(PFT) is the population density of the PFT. The crown area is calculated using an empirical relationship between crown area and stem diameter (Zeide, 1993). The FPC_PFT is calculated using the following relationship (Monsi and Saeki, 1953):

FPC_PFT = 1 − exp(−0.5 LAI_PFT)

where LAI_PFT is the leaf area index of the PFT, which is related to the amount of carbon stored in the leaf.
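A minimal sketch of the FPC calculation described above (function names and the illustrative numbers are ours; the cap at 1 is an assumption for a fully vegetated cell):

```python
import math

def fpc_pft(crown_area, population_density, lai):
    """Fractional cover of one PFT in a grid cell: crown area times
    population density times the light-interception fraction of a canopy
    with leaf area index LAI (Monsi & Saeki, 1953)."""
    return crown_area * population_density * (1.0 - math.exp(-0.5 * lai))

def fpc_gridcell(pfts):
    """Total foliage projective cover: sum over (CA, P, LAI) triples,
    capped at 1 for a completely vegetated grid cell."""
    return min(1.0, sum(fpc_pft(ca, p, lai) for ca, p, lai in pfts))
```

For example, `fpc_gridcell([(3.0, 0.05, 4.0), (1.0, 0.2, 1.5)])` combines a sparse tree PFT with a grass PFT; a cell with no vegetation gives 0.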
2. Annual growing degree days above a 5 °C base (GDD5). GDD5 is calculated by summing the daily temperature excess above 5 °C on days when the daily temperature T_d is greater than 5 °C. T_d is calculated by interpolating monthly temperatures onto a daily time step.
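A one-line sketch of the GDD5 accumulation, assuming the standard degree-day definition (sum of the excess over the 5 °C base):

```python
def gdd5(daily_temps):
    """Growing degree days above a 5 degC base: sum of (T_d - 5) over
    the days of the year with T_d > 5 degC."""
    return sum(t - 5.0 for t in daily_temps if t > 5.0)
```

For instance, daily temperatures of 10, 4 and 6 °C contribute 5 + 0 + 1 = 6 degree days.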

3. Annual tree height (H). The annual tree height is calculated using an empirical relationship between vegetation height and stem diameter (Huang et al., 1992):

H = allom2 D^allom3

where allom2 = 40 and allom3 = 0.5 are constants and D is the stem diameter.
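The allometric relationship above is a single power law; as a sketch:

```python
def tree_height(stem_diameter, allom2=40.0, allom3=0.5):
    """Annual tree height H = allom2 * D**allom3, with the stem
    diameter D in metres and the constants from the text."""
    return allom2 * stem_diameter ** allom3

# a tree with a 0.25 m stem diameter: 40 * 0.25**0.5 = 20 m
print(tree_height(0.25))  # -> 20.0
```

With allom3 = 0.5, height grows as the square root of stem diameter, so doubling the diameter increases the height by only about 41 percent.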
4. Monthly soil moisture in the upper 0.5 m of the soil layer (sm). The soil moisture in LPJ is calculated using a semi-empirical approach which was developed in the BIOME3 model (Haxeltine et al., 1996). The soil is divided into two layers of 0.5 m each. The water held in each layer is calculated daily by taking into account the precipitation, snow melt, percolation, evapotranspiration and runoff. The percolation rate is dependent on the soil texture. When the soil layer is at field capacity, the excess water is considered to be runoff. The soil water content of the upper layer on any given day is given by the previous day's content plus the water entering the layer minus the water leaving it:
sm(t) = sm(t−1) + (melt + precip − β1 AET − perc − runoff) / AWC1

where "melt" is the snowmelt (mm), "precip" is the precipitation (mm), "perc" is the percolation (mm), "runoff" is the runoff (mm) and β1 is the fraction of transpired water extracted from the upper layer. AET is the calculated evapotranspiration rate for each plant functional type (mm). AWC1 is the available water holding capacity of the upper layer (mm).
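The daily bucket update above can be sketched as follows (a simplified single-layer update under our own naming; clamping at field capacity and at zero is an assumption consistent with the runoff rule in the text):

```python
def update_upper_soil_water(w1, precip, melt, aet, beta1, perc, runoff, awc1):
    """One daily update of the relative water content w1 (0..1) of the
    upper 0.5 m soil layer: water in (precip + melt) minus water out
    (the upper-layer share of transpiration beta1*AET, percolation and
    runoff), scaled by the available water holding capacity AWC1 (mm).
    All fluxes are in mm per day. The layer is capped at field capacity
    (w1 = 1); any surplus is treated as runoff."""
    w1 += (precip + melt - beta1 * aet - perc - runoff) / awc1
    return max(0.0, min(1.0, w1))
```

For example, 10 mm of rain into a half-full layer with AWC1 = 100 mm and no losses raises w1 from 0.5 to 0.6.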

5. Monthly snow depth (sd)
LPJ calculates monthly snow depth using daily precipitation data, which is derived from monthly precipitation that has been interpolated onto a daily time step. When the daily temperature is less than −2 °C, new snow is formed. The magnitude of the snow formed is proportional to the daily precipitation. An adjustment is made to the snow depth to account for the melting of snow. Snow melt occurs when the daily temperature is greater than −2 °C. The amount of melting is related to the temperature by a snowmelt coefficient taken from the BIOME3 model (Haxeltine et al., 1996).

6. Monthly fraction of photosynthetically active radiation (mfpar)
The mfpar predicted by LPJ gives an indication of the state and productivity of the vegetation cover. This quantity is defined as the fraction of incoming solar radiation absorbed by the vegetation, calculated from the cover of each PFT scaled by D_phen, where D_phen is the daily leaf-on fraction which is calculated from the accumulated GDD5 for each PFT. D_phen is 1 if the vegetation has leaf cover and 0 if it has no leaf cover.

Creating a biome map of vegetation cover
GDD5 and H are used to convert the FPC into a biome map every year using a scheme adapted from Joos et al. (2004). This conversion is carried out because at high latitudes LPJ predicts barren land (i.e. FPC = 0) combined with low soil moisture and low snow cover, which meets the criteria for a dust source. This results in a large dust source area in the Canadian Arctic. Creating a biome map allows polar desert, which has low GDD5 and is not a dust source, to be distinguished from hot desert, which has high GDD5 and is a dust source. Using this scheme also allows trees with a stand height of less than 4 m to be considered as shrubs. Although this is a simplification, it means that regions with woody PFTs will act as dust sources if productivity is sufficiently low. This is a useful assumption as LPJ does not simulate shrub PFTs. A schematic of the scheme used to create a biome map is shown in Fig. 2. Dust emissions are permitted for regions containing hot desert, dry grass, dry shrubs, tundra grass and tundra shrubs.

Calculating monthly dust source areas
For grass-dominated biomes (tundra grass and dry grass) the area exposed for dust emission is allowed to vary seasonally. The un-vegetated area A_grass is linearly proportional to the mfpar below a threshold value mfpar_lim.
In shrub-dominated biomes the area exposed for dust emission remains fixed throughout the year. This is because shrubs are assumed to protect the surface all year round, even when no leaves are present. The annual maximum mfpar (mfpar_max) is used as an index for the density of shrubs. For shrub-dominated biomes, the area is calculated as

A_shrub = 1 − mfpar_max

This means the dust source area remains constant throughout the year but decreases to zero when mfpar_max = 1. At high latitudes, dust emissions are suppressed by snow cover. The area exposed for dust emission, A_snow, is linearly related to the snow depth (sd) below a threshold value (sd_lim).
The total area available for dust emission is related to the area of dry ground that is not covered by vegetation or snow. The erodible area A_bare is expressed in the following form:

A_bare = A_{grass|shrub} A_snow I_θ (9)

where A_grass and A_shrub are the contributions of exposed ground from grass or shrub vegetation cover, A_snow is the contribution from snow cover and I_θ represents the effect of the soil moisture (sm). When sm is above a threshold limit sm_lim, I_θ is assigned a value of 0 and no dust emissions occur. Conversely, if the soil moisture is below sm_lim, I_θ has a value of 1 and dust emission will occur. This is the same approach taken by Tegen et al. (2002) to ensure that dust emissions only occur in arid regions.
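As a sketch, the erodible fraction can be combined as follows (the multiplicative combination and the linear snow ramp below sd_lim are our reading of the text; names are ours):

```python
def erodible_area(a_veg, snow_depth, soil_moisture, sd_lim, sm_lim):
    """Monthly erodible fraction A_bare: the vegetation-free fraction
    a_veg (A_grass or A_shrub, depending on the biome) reduced by snow
    cover and switched off entirely when the soil is wet."""
    # snow term: full emission with no snow, none at or above sd_lim
    a_snow = max(0.0, 1.0 - snow_depth / sd_lim)
    # soil-moisture switch I_theta: 1 if dry enough, else 0
    i_theta = 1.0 if soil_moisture < sm_lim else 0.0
    return a_veg * a_snow * i_theta
```

For example, with 80 percent bare ground, no snow and dry soil the full 0.8 is erodible; half-threshold snow depth halves it, and soil moisture above sm_lim shuts emissions off completely.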

Calculation of the dust flux
The calculation of the dust flux is taken from the model by Tegen et al. (2002). The model parameterises saltation and sandblasting using the scheme by Marticorena and Bergametti (1995). The horizontal flux G_j generated by saltating particles is calculated as

G_j = s_j (ρ_a / g) u*³ (1 + u*_t / u*) (1 − u*_t² / u*²), for u* > u*_t (10)

where ρ_a is the density of air (kg m−3), g is the gravitational acceleration (m s−2), u* is the surface friction velocity (m s−1) and u*_t is the threshold friction velocity (m s−1).
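A sketch of this saltation flux for one size fraction, written in the White (1979) form used by Marticorena and Bergametti (1995) (the exact algebraic form is our reconstruction from the symbols defined in the text):

```python
def horizontal_flux(u_star, u_star_t, s_j, rho_a=1.22, g=9.81):
    """Horizontal (saltation) flux for size fraction j:
    G_j = s_j * (rho_a / g) * u*^3 * (1 + u*t/u*) * (1 - u*t^2/u*^2),
    and zero when u* is at or below the threshold u*t. Air density
    rho_a (kg m-3) and g (m s-2) are typical surface values."""
    if u_star <= u_star_t:
        return 0.0
    return (s_j * rho_a / g * u_star ** 3
            * (1.0 + u_star_t / u_star)
            * (1.0 - u_star_t ** 2 / u_star ** 2))
```

The cubic dependence on u* means the flux grows very rapidly once the wind exceeds the threshold, which is why the tuning of u*_t (through η) matters so much in Sect. 2.5.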
s_j is used to scale the relative contribution of each size fraction j to the total flux: s_j is the surface area covered by particle size fraction j relative to the total surface area of all fractions. The surface covered by each grain is calculated from its basal surface. This is related to the mass (M) of the particle such that

S_j = 3 M_j / (2 ρ_d D_p)

where ρ_d is the density of the particle and D_p is the particle diameter. The total basal surface is

S_TOTAL = Σ_j S_j

The relative area covered by each particle fraction is then

s_j = S_j / S_TOTAL (13)

u*_t in Eq. (10) is calculated as a function of particle diameter using a semi-empirical relationship described by Iversen and White (1982), scaled by η, a tunable parameter which has a value of 0.66. u*_t in Eq. (10) has been modified to account for the presence of non-erodible elements such as vegetation cover or rocks, which reduce the wind speed momentum. The modification is applied by dividing u*_t by the drag partition ratio f_eff (Marticorena and Bergametti, 1995), where the roughness length of a surface with no obstacles is z_0s = 0.001 cm. The roughness length of the surface z_0 is assigned a value of 0.01 cm, which is a typical value for level desert (Seinfeld, 1998).
The friction velocity u* in Eq. (10) is calculated as a function of surface roughness, such that

u* = k u / ln(z / z_0)

where k is the von Karman constant = 0.4 (dimensionless), z is the height (m), z_0 is the roughness length (m) and u (m s−1) is the wind speed.
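The logarithmic profile above is straightforward to evaluate; as a sketch (the 10 m reference height is our illustrative choice):

```python
import math

def friction_velocity(wind_speed, z=10.0, z0=1e-4, k=0.4):
    """Friction velocity from the logarithmic wind profile,
    u* = k * u(z) / ln(z / z0), with the wind speed u at height z (m)
    and roughness length z0 (m); z0 = 1e-4 m is the 0.01 cm level-desert
    value used in the text."""
    return k * wind_speed / math.log(z / z0)
```

A 10 m s−1 wind at 10 m over level desert gives u* of roughly 0.35 m s−1, comfortably above typical threshold friction velocities of 0.2 to 0.4 m s−1 for loose fine sand.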

The wind speed data is provided on a coarser grid than the LPJ resolution. 1×1 degree wind speed data is interpolated onto the 0.5×0.5 degree resolution by assuming that four adjoining half degree pixels have the same wind speed as the parent 1 degree pixel.
The vertical flux F is estimated from the horizontal flux as

F = α A_bare G

where G is the horizontal flux determined from Eq. (10), A_bare is the monthly bare ground fraction which has been calculated from LPJ in Eq. (9) and α is the sandblasting mass efficiency. The α values used in the model are taken from Marticorena et al. (1997) for each soil texture type of the Zobler (1986) map. The particle size distribution for each soil texture type is calculated using the following relationship from Tegen et al. (2002):

dM(D_p) = Σ_j [ M_j / ((2π)^(1/2) ln σ) ] exp( −(ln D_p − ln MMD_j)² / (2 ln²σ) ) dln D_p

where D_p is the particle size, M_j is the percentage mass of coarse sand, medium/fine sand, silt or clay, MMD_j is the mass median diameter and σ has a value of 2. The values of M_j for each soil type are listed in Table 1.
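The multi-modal log-normal size distribution can be sketched as follows (the normalisation of each mode is our reconstruction; names are ours):

```python
import math

def mass_fraction(d_p, soil_modes, sigma=2.0):
    """Relative mass dM at particle diameter d_p for a soil made of
    texture classes j, each a log-normal mode with percentage mass M_j,
    mass median diameter MMD_j and common geometric standard deviation
    sigma (= 2 in the text). soil_modes is a list of (M_j, MMD_j)."""
    ls = math.log(sigma)
    return sum(m_j / (math.sqrt(2.0 * math.pi) * ls)
               * math.exp(-(math.log(d_p) - math.log(mmd_j)) ** 2
                          / (2.0 * ls ** 2))
               for m_j, mmd_j in soil_modes)
```

Each mode peaks at its mass median diameter MMD_j and is symmetric in ln D_p, so a soil dominated by silt contributes most of its mass near the silt MMD.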

Transport and removal
Dust particles are transported as independent tracers using the chemical transport model TOMCAT (Chipperfield, 2006). TOMCAT is driven by 3-D wind speeds, specific humidity and temperature, which can be derived from either meteorological re-analysis data or GCM output. TOMCAT simulates the transport of gaseous or aerosol species via advection, convection and vertical diffusion.
The advection scheme used in TOMCAT is the conservation of second order moments developed by Prather (1986). The Prather advection scheme represents tracer concentration as second-order polynomials within each grid box. This makes the scheme more computationally expensive than simpler schemes, such as the slopes scheme by Russell and Lerner (1981). Although the Prather advection scheme is expensive, it has been shown to have low numerical diffusion, thus providing more accurate results (Ge and Lei, 1998).
Convection is parameterised in TOMCAT using a scheme developed by Tiedtke (1989). The scheme includes cumulus updrafts in the vertical direction and the exchange of air from inside the cloud to outside the cloud and vice versa. The convective scheme calculates the mass of tracer that is uplifted within a cloud column. Vertical diffusion is parameterised in TOMCAT using a scheme developed by Louis (1979).
TOMCAT is forced using ERA-40 6 hourly 3-D temperature, U and V wind speed and specific humidity fields on a T42 spatial resolution.The model has 31 vertical pressure levels extending from the surface to the stratosphere.Advection, convection, diffusion and dust removal take place on an hourly time step.

Dry deposition
The dry deposition parameterisation consists of gravitational settling and turbulent mixing across the quasi-laminar sub-layer. The dry deposition parameterisation is taken from Lunt (2001), which is based on equations for dry deposition described in Seinfeld (1998). The rate of dust removal by dry deposition per unit area per unit time, F_z, is

proportional to the concentration of dust at a particular height, C_z, and to the deposition velocity v_d by the following relationship:

F_z = −v_d C_z

The dry deposition process is conceptualised in terms of an electric circuit containing resistances in series: r_a is the aerodynamic resistance and r_b is the quasi-laminar sub-layer resistance. The total v_d is then

v_d = v_s + 1 / (r_a + r_b + r_a r_b v_s)

The first term on the right hand side of the equation corresponds to the gravitational settling velocity (v_s). The second term corresponds to the deposition velocity across the quasi-laminar sub-layer.
The gravitational settling velocity v_s is

v_s = ρ_p D_p² g C_c / (18 µ)

where ρ_p is the density of the particle (kg m−3), D_p is the particle diameter (m), g is the gravitational acceleration (m s−2), µ is the viscosity of air (kg m−1 s−1) and C_c is the slip correction factor. This relationship is known as Stokes' law. C_c becomes important when the particle diameter approaches the same magnitude as the mean free path of air and the medium can no longer be considered a continuum. The slip correction factor is given by

C_c = 1 + (2λ / D_p) [1.257 + 0.4 exp(−1.1 D_p / (2λ))]

where λ is the mean free path of the air (m).
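A sketch of Stokes settling with the slip correction (the numerical values for the mean free path, particle density and air viscosity are illustrative standard-atmosphere choices, not from the paper):

```python
import math

def slip_correction(d_p, mean_free_path=6.51e-8):
    """Cunningham slip correction factor C_c for a particle of
    diameter d_p (m); mean free path of air ~65 nm at the surface."""
    kn = 2.0 * mean_free_path / d_p  # the ratio 2*lambda/Dp
    return 1.0 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))

def settling_velocity(d_p, rho_p=2650.0, g=9.81, mu=1.78e-5):
    """Stokes-law gravitational settling velocity (m s-1):
    v_s = rho_p * D_p^2 * g * C_c / (18 * mu)."""
    return rho_p * d_p ** 2 * g * slip_correction(d_p) / (18.0 * mu)
```

A 2 µm quartz particle settles at a few times 10−4 m s−1, so the assumption that particles do not cross more than one vertical level per hourly time step is easy to check against the model layer thicknesses.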

Dust is transported downwards by gravitational settling through each model vertical level, with the exception of the lowest level. For simplification it is assumed that particles do not fall through more than one vertical level within one time step. This assumption is valid for the size of particles simulated in the model.
At the lowest model level the resistance of the quasi-laminar sub-layer in Eq. (19) is defined as

r_b = 1 / [ u* (Sc^(−2/3) + 10^(−3/St)) ]

where Sc is the Schmidt number, which accounts for Brownian motion of very small particles. The Schmidt number is calculated as Sc = ν/D, where ν is the kinematic viscosity of air and D is the molecular diffusivity. St is the Stokes number, which accounts for inertial impaction of larger particles. u* is the ERA-40 wind speed in the lowest model level.

Wet deposition
Dust is removed from the atmosphere by sub-cloud scavenging. The amount of mass removed is proportional to the precipitation rate and the scavenging coefficient, such that

C = C_0 exp(−Λt)

where C_0 is the initial tracer mass (kg) and t is the model time step, which is one hour. Λ is the scavenging coefficient, which has units of h−1 (Seinfeld, 1998). The scavenging coefficient is calculated using the following empirical relationship (Brandt et al., 2002):

Λ = A p_z^B
where A = 8.4 × 10−5 and B = 0.79 for both convective and large scale precipitation. p_z is the large scale or convective precipitation rate (mm h−1) at a particular height. p_z is calculated from the surface precipitation rate (p_0) by assuming a vertical precipitation profile. For large scale precipitation, the cloud is divided into an upper and a lower part.
The cloud base is assumed to be located at 90 percent of the surface pressure, the cloud middle at about 80 percent and the cloud top at about 50 percent. The precipitation varies linearly in the upper part of the cloud, from zero at cloud top to a value x at cloud middle. x is calculated from the medium and low cloud amounts, where A_med and A_low are the ERA-40 6-hourly medium and low cloud amounts.
For convective precipitation, the cloud base is assumed to be at a pressure which is 90 percent of the surface pressure, and the cloud top is assumed to be at the tropopause. The amount of precipitation varies linearly from zero at cloud top to the surface value at the base of the cloud. Figure 4 shows a schematic of the scheme.
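The scavenging relationships above can be sketched as follows (function names are ours; Λ is taken in h−1 as stated in the text):

```python
import math

def scavenging_coefficient(p_z, a=8.4e-5, b=0.79):
    """Sub-cloud scavenging coefficient (h-1) from the precipitation
    rate p_z (mm h-1): Lambda = A * p_z**B (Brandt et al., 2002)."""
    return a * p_z ** b

def washout(c0, p_z, dt_hours=1.0):
    """Tracer mass remaining after dt hours of precipitation at rate
    p_z: C = C0 * exp(-Lambda * dt)."""
    return c0 * math.exp(-scavenging_coefficient(p_z) * dt_hours)
```

Because B < 1, doubling the rain rate less than doubles Λ, so sustained light rain removes more dust per millimetre of precipitation than a short heavy burst.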

A baseline dust simulation
Figure 1 shows a schematic of the dust model. A baseline simulation is run using an arbitrary choice of values for mfpar_lim, sd_lim, η and sm_lim. These parameters will be tuned in the following section. For the baseline simulation the values selected are mfpar_lim = 0.5, sd_lim = 0.1 m, η = 0.66 and sm_lim = 20 mm. Figure 3 shows a plot of the surface emissions and deposition fields. It can be seen that dry deposition is the dominant mechanism for dust removal close to the source regions, owing to the abundance of heavy particles close to the source. In addition, there is generally a lack of precipitation in these regions, which means dry deposition is the prevailing removal mechanism there. In contrast, wet deposition dominates the removal in regions far from the source. The annual mean surface emissions predicted by the un-tuned model, averaged over the years 1987–1990, is 1944 Mt yr−1.

Choosing minimum and maximum threshold limits
Threshold limits for mfpar_lim, sd_lim, η and sm_lim are selected using Latin Hypercube Sampling (McKay et al., 1979). To use this technique, a sensible minimum and maximum range for each parameter and the total number of experiments must be known. We decided to generate 21 sets of surface emissions, comprising 20 sets calculated using Latin Hypercube Sampling and the un-tuned emissions from the baseline simulation. Each set of surface emissions is combined with three sub-cloud scavenging schemes (see Sect. 2.6). Each set of surface emissions contains 8 tracers, resulting in 504 sets of experiments. Running this number of experiments provides a balance between computational expense and coverage of parameter space.
To estimate the minimum and maximum range for the tunable parameters, extreme values for the threshold limits are tested. The model is run multiple times using different values for the threshold limits, and the emissions are compared with those from the model of Tegen et al. (2002). Data from the year 1987 is used for comparison.
The mfpar_lim range chosen is 0.2–0.5. Choosing values lower than 0.2 leads to very low dust emissions in South America, North America, South Africa and Australia. Choosing an mfpar_lim threshold greater than 0.5 leads to dust emissions from highly productive grasslands where C4 grass is present.
The sm_lim range chosen was 10 mm to 25 mm. Choosing values lower than 10 mm leads to an under-prediction of dust emissions from central Asia, Australia and North America. The upper bound was selected so as to include emissions from the boundaries of the deserts, for example in the Sahel in North Africa. The sd_lim threshold range chosen is 0.01 m to 0.1 m. Choosing a threshold greater than 0.1 m gives rise to an abundance of dust emissions at high latitudes in winter, while choosing a threshold smaller than 0.01 m eliminates dust emissions from the Gobi desert.
The η range selected is 0.4–1. This is determined on the basis of the total annual mean dust generated. Choosing a value of 0.4 for η gives annual mean dust emissions of 3000 Mt yr−1, which is the upper estimate predicted by other dust modeling studies (Tegen and Fung, 1994; Mahowald et al., 1999). Choosing a value of 1 for η means the threshold friction velocities are un-tuned. This results in very low annual mean dust emissions of 60 Mt yr−1.

Sub-cloud scavenging schemes
The un-tuned model uses a sub-cloud scavenging scheme which is independent of the size of the precipitating cloud droplets (Brandt et al., 2002). We test another possible sub-cloud scavenging scheme, in which the scavenging coefficient is calculated as a function of the cloud droplet size. The parameterisation is based on the semi-empirical expression for the aerosol-droplet collision efficiency E described by Slinn (1983), calculated as a function of particle size (Seinfeld, 1998), where Re is the Reynolds number, Sc is the Schmidt number, St is the Stokes number, ϕ is the ratio of the particle diameter to the drop diameter and ω is the ratio of the water viscosity to the air viscosity. The scavenging coefficient is calculated from the collision efficiency by assuming a monodisperse rain droplet diameter:

Λ = 3 E p_z / (2 D_droplet)

where D_droplet is the rain droplet size (mm) and p_z is the precipitation rate (mm h−1). Λ is calculated for a small rain droplet with diameter 0.5 mm and a larger rain droplet with diameter 2 mm. Figure 5 shows the scavenging coefficient calculated for the three schemes using a precipitation rate of 1 mm h−1. The straight line corresponds to the particle size independent sub-cloud scavenging scheme used in the un-tuned model. The particle size dependent removal schemes have a hook-shaped curve, which indicates that scavenging is efficient for very small and very large particles. For very large particles the process of inertial impaction dominates the removal, while Brownian diffusion is important for very small particles. However, for particles around 0.1 µm diameter scavenging is not as efficient. This minimum is known as the Greenfield gap (Seinfeld, 1998).
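The droplet-size dependence can be seen from the monodisperse washout relation alone, without evaluating the full Slinn expression (the formula below is the standard monodisperse form; the collision efficiency E is passed in rather than computed):

```python
def scavenging_from_collision(e_collision, p_z, d_droplet):
    """Scavenging coefficient for a monodisperse rain spectrum:
    Lambda = 3 * E * p_z / (2 * D_droplet), with collision efficiency E,
    precipitation rate p_z (mm h-1) and droplet diameter D_droplet (mm),
    so Lambda comes out in h-1."""
    return 1.5 * e_collision * p_z / d_droplet
```

At equal rain rate and equal collision efficiency, 0.5 mm droplets scavenge four times more efficiently than 2 mm droplets, because the smaller drops present far more total surface area per millimetre of rainfall.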
The simulations are run for the years 1987–1989 to provide maximum overlap with the measurement data (see Sect. 2.7). Data from the first year (1987) are discarded in the analysis, as the model is allowed 1 year to spin up. The amount of dust removed by wet and dry deposition and the surface concentrations are output daily.

Target datasets
Three measurement datasets are used to evaluate the performance of the experiments. The first dataset is the Dust Indicators and Records of Terrestrial and MArine Palaeoenvironments database (DIRTMAP version 2) (Kohfeld and Harrison, 2001). This dataset contains dust deposition data from ice cores, marine sediment cores, sediment traps and loess deposits at various locations around the globe. Dust deposition rates obtained from loess deposits are excluded in the analysis because they could potentially act as sources and sinks of dust, which would lead to unreliable estimates of deposition rates (Kohfeld and Harrison, 2001). Deposition rates from the DIRTMAP database represent the long-term dust deposition over a period of hundreds of years.
The second deposition dataset used for the model validation has been compiled by Ginoux et al. (2001). This dataset contains deposition rates from measurements made in the Pacific Ocean and from high resolution ice core records. The measurements are sampled over the years 1955 to 1990. However, 13 out of the 16 measurements have been taken over the period 1980–1990, which coincides with the tuning period.
The third target dataset comprises surface concentration measurements from the University of Miami aerosol network (N. Mahowald, Cornell University, personal communication, 2008). The network measures monthly surface concentrations of dust at a number of different sites. The measurement data from this network are not available at all sites over all time periods. However, complete data are available for the year 1989 at Barbados, Bermuda, Miami, Mace Head, Midway Island and Izana. The annual mean surface concentrations are calculated from the monthly mean data and used as a target dataset. Figure 6 shows the spatial distribution of the DIRTMAP, Ginoux and University of Miami data.

Results
To evaluate the best experiment in the ensemble, a skill score is used. There are some issues to consider when creating a skill score based on multiple measurement datasets. One issue is whether certain datasets should be weighted more than others. For example, if the sampling period of the measurements coincides with the simulation period, then these observations could be considered more accurate than measurements averaged over long time periods. The DIRTMAP and Ginoux data represent dust deposition averaged over a period of many years. In contrast, the University of Miami measurements coincide with the simulation period and could therefore be considered more accurate than the deposition data. When calculating the skill score, one approach would be to weight the University of Miami data more than the deposition data.
An alternative strategy is to weight the measurement data according to its measurement error, such that more accurate observations are given more weight. This is not possible because error estimates are not provided with all of the measurement data.

Another approach is to amalgamate the three datasets into one large dataset and use this as a tuning target. The difference in units between the surface concentration data (µg m −3 ) and the deposition data (g m −2 yr −1 ) makes this approach problematic.
To overcome the issues associated with using multiple observational datasets, the tuning could be carried out using only one target dataset. There are disadvantages to this approach. If the University of Miami data is used, there will be a bias towards experiments that perform well in the North Atlantic, because the measurement sites are located in the North Atlantic (see Fig. 6). Including additional measurement datasets provides more locations where the model can be evaluated. In particular, deposition rates obtained from ice cores in DIRTMAP allow the model to be evaluated at high latitudes, which is useful for tuning the wet deposition schemes.
We decide to construct a simple skill score based on the normalised root mean square error (NRMSE) with no arbitrary weighting system applied to the observational data. The NRMSE is calculated as

$$\mathrm{NRMSE} = \sqrt{\frac{\mathrm{MSE}}{\sigma^2}}$$

where $\sigma^2$ is the variance of the observations and MSE is the mean square error. $\sigma^2$ is calculated from

$$\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\left(o_i - \mu\right)^2$$

where $o_i$ is the observed data, $\mu$ is the mean of the observations and $n$ is the number of observations. The MSE is

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(m_i - o_i\right)^2$$

where $m_i$ is the modelled data.
To calculate the skill score, a global tuning factor T is calculated first. This is the value by which the modelled data are scaled to minimise the NRMSE; T acts to move the modelled data up or down so that it fits on the ideal 1:1 line with the least amount of scatter. After applying the tuning factor T, the NRMSE for each experiment is summed to give a total error Q. Table 2 lists the experiments ranked according to the total error score Q. The threshold values for each experiment, the removal scheme and the T values for each dataset are also listed. The best performing experiments use the size dependent removal scheme with drop diameter 0.5 mm. The best experiment is number 23, as this has the lowest total error. This experiment has threshold limits mfpar lim =0.37, sd lim =0.01 m, η=0.55, sm lim =7.79 mm and uses the size dependent removal scheme with rain droplet diameter 0.5 mm. The un-tuned experiment ranks among the worst performing experiments, in 47th place. Although not presented here, the same analysis was carried out using a skill score based on the correlation coefficient instead of NRMSE; experiment number 23 ranked in the top 13% of experiments when the correlation coefficient is used as a metric for skill.
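The skill score described above can be sketched in a few lines of Python. This is a minimal illustration; in particular, the closed-form least-squares expression for T is an assumption, since the text does not state how the tuning factor is found in practice.

```python
import numpy as np

def nrmse(obs, mod):
    """Normalised RMSE: sqrt(MSE / sigma^2), where sigma^2 is the
    variance of the observations."""
    mse = np.mean((mod - obs) ** 2)
    return np.sqrt(mse / np.var(obs))

def tuning_factor(obs, mod):
    """Global factor T scaling the modelled data onto the 1:1 line.
    Here taken as the least-squares solution of T*mod ~ obs (an
    assumed method; the paper only states that T minimises the NRMSE)."""
    return np.sum(mod * obs) / np.sum(mod ** 2)

def total_error(datasets):
    """Total error Q: sum of the tuned NRMSE over all measurement
    datasets, with no weighting applied."""
    q = 0.0
    for obs, mod in datasets:
        T = tuning_factor(obs, mod)
        q += nrmse(obs, T * mod)
    return q
```

With this least-squares choice, T = Σ m_i o_i / Σ m_i², which moves the modelled data up or down to fit the observations with the least scatter, matching the behaviour described above.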
Figure 7 shows a comparison between simulated dust deposition rates and the DIRTMAP data. The un-tuned experiment underestimates dust deposition to the North Pacific, Arabian Sea and the North Atlantic, which can be seen in the abundance of points below the 1:1 line. This is improved after the model has been tuned. A comparison with the Ginoux et al. (2001) data is shown in Fig. 8. The model underestimates dust deposition to the North Pacific, South Pacific and North Atlantic, which is also improved after tuning the model. Similarly, the tuned model produces better estimates of surface concentrations in the North Atlantic than the un-tuned model, as seen in Fig. 9.

Conclusions
A new dust cycle model has been described which uses the LPJ dynamic global vegetation model to identify the distribution of dust source areas. The development of the model has been motivated by the fact that current offline dust models do not simulate dynamic vegetation. The new model has been tuned by producing an ensemble of simulations and using a skill score to select the best performing experiment. We used a simple skill score based on NRMSE, while acknowledging that alternative approaches could equally have been taken. The skill score was created using three observational datasets; however, other datasets such as the TOMS aerosol index or aerosol optical depth observations from the AERONET network could also be used.
The tuning carried out explored only a small subset of the possible parametric and structural uncertainty in the model, but resulted in improved estimates of dust deposition to the North Atlantic, North Pacific, South Pacific and the Arabian Sea.
Estimates of the annual mean surface emissions vary depending on which dataset the model was optimised against: 1136 Mt yr −1 (T =1, University of Miami), 3065 Mt yr −1 (T =2.7, DIRTMAP) and 4654 Mt yr −1 (T =4.1, Ginoux). There are two possible explanations why T is large for the deposition data. The first is that the deposition data represent dust deposition averaged over long time periods. In the case of the Ginoux data, the sampling period extends from 1950 to 1990. Measurements in the DIRTMAP dataset span hundreds of years. Observations show there is significant inter-annual variability in the dust loading; dust concentrations measured at Barbados increased four-fold between 1960 and 1980 (Prospero and Nees, 1986). This inter-annual variability is not captured in the deposition data, and the temporal mismatch between the deposition data and the tuning period leads to a larger T. The model data do not need to be adjusted (T =1) when optimising against the University of Miami data because there is no uncertainty caused by inter-annual variability in the observations. The second possible reason for large T values is measurement uncertainty in the deposition data. Obtaining dust deposition rates from an ice core requires knowledge of the ice accumulation rate. Similarly, for marine sediment cores there is uncertainty caused by the techniques used to determine the age of the core. This contributes to uncertainty in estimates of dust deposition rates.
The range of values for annual mean surface emissions is greater than that predicted by Cakmur et al. (2006), who used multiple observational datasets to constrain dust emissions and predicted a range of 1000-3000 Mt yr −1 . They optimised emissions against DIRTMAP, Ginoux and University of Miami data, in addition to aerosol optical thickness from AERONET, TOMS and AVHRR sensors. The deposition and surface concentration data they used were different to the data used in this analysis. For the DIRTMAP data, they used sediment trap measurements of ocean deposition compiled by Tegen et al. (2002), while we include ice core and marine core data in the analysis. They also used a subset of the Ginoux data, excluding observations from the Taklimakan, Tel Aviv, Miami and Samoa. Additionally, they used University of Miami data averaged from the early 1980s to the late 1990s, while we used data for the year 1989. These results show that estimates of the annual mean surface emissions are sensitive to the choice of observational data used to constrain the model.
The LPJ-dust model has several potential applications. The model can be used to test whether vegetation changes can explain the observed variability in the dust loading on decadal time scales. This may help us distinguish natural variability in the dust cycle from anthropogenic effects such as land degradation. The model can also be used to study the dust cycle in the past. Ice core records show there has been a 2-25 fold increase in dust deposition rates during glacial periods compared to inter-glacial periods (Lambert et al., 2008). Previous studies have focused on simulating the dust cycle at the LGM using the BIOME4 model in order to understand the reasons for the high dust loadings (Mahowald et al., 1999; Mahowald, 2006; Werner et al., 2002). The LPJ-dust model could be used to study the impact of dynamic vegetation on the dust loading through a deglaciation period. Likewise, the model can be used to investigate how dust sources will respond in the future with elevated atmospheric CO 2 levels. Modelling studies using BIOME4 have shown that if vegetation cover is allowed to respond to elevated CO 2 then dust emissions will decrease in the future (Mahowald and Luo, 2003; Mahowald, 2006, 2007). Using the LPJ-dust model would make it possible to predict the year-to-year variability in dust emissions in the future, which is not possible using equilibrium vegetation models.


$$\mathrm{FPC} = \sum_{\mathrm{PFT}=1}^{10}\mathrm{CA}(\mathrm{PFT})\,P(\mathrm{PFT})\,\mathrm{FPC}(\mathrm{PFT})$$

Fig. 5 .
Fig. 5. Comparison between the scavenging coefficients for three different wet deposition schemes. The dashed lines correspond to the size dependent removal schemes (Slinn, 1983) while the solid line corresponds to the size independent removal scheme (Brandt et al., 2002). A precipitation rate of 1 mm h −1 is used to calculate the scavenging coefficient for this figure.
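The contrast drawn in the caption can be made concrete with a toy calculation. The prefactor, precipitation exponent and quadratic size dependence below are illustrative placeholders only; they are not the actual parameterisations of Slinn (1983) or Brandt et al. (2002).

```python
# Placeholder constants for illustration only; the real schemes take
# their parameters from Slinn (1983) and Brandt et al. (2002).
A_FIXED = 1e-4   # hypothetical prefactor, s^-1 per (mm h^-1)^B_FIXED
B_FIXED = 0.79   # hypothetical precipitation exponent

def scavenging_size_independent(precip_mm_per_h):
    """Size-independent scheme: a single scavenging coefficient applied
    to all dust particle sizes, depending only on precipitation rate."""
    return A_FIXED * precip_mm_per_h ** B_FIXED

def scavenging_size_dependent(precip_mm_per_h, particle_diam_um, drop_diam_mm):
    """Toy size-dependent scheme: the collection efficiency grows with
    the ratio of particle size to raindrop size, so larger particles are
    scavenged faster by a given rain droplet population."""
    efficiency = (particle_diam_um / (1e3 * drop_diam_mm)) ** 2
    return efficiency * A_FIXED * precip_mm_per_h ** B_FIXED
```

The key qualitative point this reproduces is the one shown in Fig. 5: the size-independent coefficient is a flat line across particle sizes, while the size-dependent coefficient varies with both particle diameter and the assumed rain droplet diameter (0.5 mm in the best experiment).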
who summarise the experimental values for different soil types. Values of α for different soil types are listed in Table 1. Information on the particle size distribution comes from the Food and Agriculture Organization/United Nations Educational, Scientific and Cultural Organization (FAO/UNESCO) soil map of the world

Table 1 .
Column 2 contains the sandblasting mass efficiency values for different soil textures. Columns 3 to 6 contain the relative mass of the main soil types for each soil texture. These values are used to calculate the particle size distribution in Eq. (17).
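A sketch of how the quantities in Table 1 might combine. The numbers below are hypothetical placeholders, not the values of Table 1, and the scaling of the vertical flux by the sandblasting mass efficiency α is the standard sandblasting relation rather than the exact form of Eq. (17).

```python
# Hypothetical stand-in for Table 1: each soil texture maps to a
# sandblasting mass efficiency alpha and the relative masses of the
# four main soil types. Values are placeholders for illustration.
TABLE1 = {
    # texture: (alpha, clay, silt, fine sand, coarse sand)
    "sand":      (1e-6, 0.05, 0.05, 0.30, 0.60),
    "silt_loam": (1e-5, 0.10, 0.60, 0.20, 0.10),
}

def vertical_flux_by_size(texture, horizontal_flux):
    """Scale the horizontal saltation flux G into a vertical dust flux
    F = alpha * G, then partition F between the main soil types
    according to their relative masses (cf. Eq. 17)."""
    alpha, *masses = TABLE1[texture]
    total_vertical = alpha * horizontal_flux
    return [m * total_vertical for m in masses]
```

Because the relative masses for each texture sum to one, the partitioned fluxes always sum back to the total vertical flux α·G.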

Table 2 .
Tuning experiments ranked according to the total error. Threshold limits used to determine surface emissions, the tuning factor T and the removal schemes are also listed. The best performing experiment is number 23, which has the lowest total error. Experiment 1 corresponds to the un-tuned model configuration.