Numerical Modelling of Soot Formation
Robert Iain Arthur Patterson
Trinity College
A dissertation submitted for the degree of Doctor of Philosophy at the
University of Cambridge
April 2007
Preface
This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration, except where specifically indicated in the text. The work presented was undertaken at the Department of Chemical Engineering, University of Cambridge, between October 2003 and March 2007. Chapters 1, 2 & 3 of this dissertation include work from the dissertation I submitted in June 2004 for a Certificate of Postgraduate Study. No other part of this thesis has been submitted for a degree to this or any other university. This dissertation contains approximately 35,000 words and 27 figures.
Some of the work in this dissertation has been published:
1. R. I. A. Patterson, J. Singh, M. Balthasar, M. Kraft, and J. R. Norris, “The Linear Process Deferment Algorithm: A new technique for solving population balance equations”, SIAM Journal on Scientific Computing, 28, 303–320, (2006).
2. R. I. A. Patterson, J. Singh, M. Balthasar, M. Kraft, and W. Wagner, “Extending stochastic soot simulation to higher pressures”, Combustion and Flame, 145, 638–642, (2006).
3. R. I. A. Patterson and M. Kraft, “A simple model for the aggregate structure of soot particles”, Combustion and Flame, in press (2007).
R I A Patterson
11 April 2007 (approved corrections 2 June 2007)
Summary
This thesis presents developments and applications of Monte Carlo algorithms for the simulation of soot formation in premixed laminar flames. The thesis begins with a brief review of experimental data on soot formation, models for the growth of soot and possible simulation techniques. The introduction concludes with a mathematical formulation of the soot growth problems that the thesis addresses.
The development of a new algorithm, to incorporate a model of soot particle coagulation at intermediate pressures into Monte Carlo simulations, marks the beginning of the numerical work. Simulations using this algorithm are validated in detail and the method thus established forms the base for the calculations in the majority of the thesis.
Investigations of the performance of the Monte Carlo simulations are then described. To address the problems highlighted by these investigations, a new probabilistic method for the simulation of chemical reactions on the surface of soot particles is introduced and tested. It is shown to reduce computational times by a factor of up to a thousand, so that high quality calculations can be performed in very few minutes. A range of deterministic alternatives to this method are also investigated.
The power and flexibility of the accelerated algorithm is then used for model development. Simple new models for the aggregate structure of soot particles are analysed, a range of different flames are considered and it is found that numerical results are only mildly sensitive to the details of particle shape models.
Finally, a new weighted particle Monte Carlo simulation algorithm is derived. This generalises ideas from work by other authors, which have been found to offer computational savings and increased precision. The algorithm offers extra flexibility for use in problems with particle inflow and outflow and allows for computational effort to be concentrated on important classes of particles. It is applied to a range of problems considered earlier in the thesis and is found to be of similar efficiency to the fully developed version of the original Monte Carlo algorithm.
Contents
1 Introduction 7
1.1 Experimental Investigations of Soot . . . . . . . . . . . . . . . . 8
1.1.1 Collection and Examination of Soot Particles . . . . . . . 9
1.1.2 In-Situ Light Scattering and Absorption . . . . . . . . . . 10
1.1.3 Other scattering . . . . . . . . . . . . . . . . . . . . . . . 12
1.1.4 On-line Particle Sampling . . . . . . . . . . . . . . . . . 12
1.2 Combustion Chemistry . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.1 Polyaromatic Hydrocarbon Growth . . . . . . . . . . . . 15
1.3 Population Balance Modelling . . . . . . . . . . . . . . . . . . . 17
1.3.1 Explicit Discretisation of Type Space . . . . . . . . . . . 18
1.3.2 Integral Methods . . . . . . . . . . . . . . . . . . . . . . 25
1.3.3 Stochastic Particle Methods . . . . . . . . . . . . . . . . 28
1.4 Mathematical Framework . . . . . . . . . . . . . . . . . . . . . . 30
1.4.1 Models for individual soot particles . . . . . . . . . . . . 32
2 Simulation at Higher Pressures 35
2.1 Coagulation Kernel . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.1.1 Majorant Kernels . . . . . . . . . . . . . . . . . . . . . . 37
2.1.2 Validation . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.2 Application to Flames . . . . . . . . . . . . . . . . . . . . . . . . 42
2.3 Implementation of DSA . . . . . . . . . . . . . . . . . . . . . . . 45
2.3.1 Variation in Number of Particles . . . . . . . . . . . . . . 45
2.3.2 Binary Tree . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.3.3 Complexity of the implementation . . . . . . . . . . . . . 49
2.3.4 DSA Applied to Flames . . . . . . . . . . . . . . . . . . 50
2.3.5 Profiling of DSA Simulation . . . . . . . . . . . . . . . . 51
3 Accelerating the Surface Processes 55
3.1 Operator Splitting . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.1.1 Mathematical Outline of Splitting . . . . . . . . . . . . . 56
3.1.2 Implementation of Operator Splitting . . . . . . . . . . . 57
3.2 Numerical Results with Splitting . . . . . . . . . . . . . . . . . . 59
3.2.1 Comparison of JW1.69 Moments . . . . . . . . . . . . . 59
3.2.2 Particle Distribution Accuracy . . . . . . . . . . . . . . . 62
3.2.3 Profiling of Operator Splitting . . . . . . . . . . . . . . . 63
3.2.4 Conclusions on Operator Splitting . . . . . . . . . . . . . 65
3.3 Deferment of Surface Reactions . . . . . . . . . . . . . . . . . . 65
3.3.1 Measure Theoretic Formulation . . . . . . . . . . . . . . 66
3.3.2 Deferred Surface Process Operator . . . . . . . . . . . . . 67
3.3.3 Rate Kernel . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.3.4 Implementation of Deferment . . . . . . . . . . . . . . . 72
3.3.5 Numerical Results . . . . . . . . . . . . . . . . . . . . . 73
3.3.6 Conclusions on LPDA . . . . . . . . . . . . . . . . . . . 75
3.4 Comparison of Simulation Methods . . . . . . . . . . . . . . . . 75
3.4.1 Equal Numbers of Computational Particles . . . . . . . . 75
3.4.2 Equal Precision for JW1.69 . . . . . . . . . . . . . . . . 77
3.4.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4 Deterministic Simulation of the Surface Processes 79
4.1 Operator Splitting . . . . . . . . . . . . . . . . . . . . . . . . . . 80
4.1.1 Modified Choice of Top Level Time Step . . . . . . . . . 80
4.1.2 Elimination of second level of splitting . . . . . . . . . . 81
4.1.3 Initial Trial of Deterministic Method . . . . . . . . . . . . 81
4.1.4 Adaptive splitting . . . . . . . . . . . . . . . . . . . . . . 84
4.1.5 Testing with a second flame . . . . . . . . . . . . . . . . 85
4.2 Deterministic Simulation of Deferred Events . . . . . . . . . . . . 86
4.2.1 Results with LPDA . . . . . . . . . . . . . . . . . . . . . 88
4.3 Recommended Algorithm . . . . . . . . . . . . . . . . . . . . . . 89
5 Models for Particle Shape 90
5.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.2 Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.2.1 Flame chemistry model . . . . . . . . . . . . . . . . . . . 93
5.2.2 Numerical method . . . . . . . . . . . . . . . . . . . . . 93
5.2.3 Particle shape variable . . . . . . . . . . . . . . . . . . . 94
5.2.4 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.3 Models for Lengths . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.3.1 Collision Diameter . . . . . . . . . . . . . . . . . . . . . 95
5.3.2 Radius of Curvature . . . . . . . . . . . . . . . . . . . . 99
5.4 Model Comparison . . . . . . . . . . . . . . . . . . . . . . . . . 102
5.4.1 Bulk Properties . . . . . . . . . . . . . . . . . . . . . . . 102
5.4.2 Particle Size Distributions . . . . . . . . . . . . . . . . . 108
5.4.3 Individual particle behaviour . . . . . . . . . . . . . . . . 111
5.4.4 Future Experimental Validation . . . . . . . . . . . . . . 116
5.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
6 Explicit Statistical Weights 118
6.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.2 General Approach . . . . . . . . . . . . . . . . . . . . . . . . . . 121
6.3 Weighting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.3.1 Dynamics of the New Measure . . . . . . . . . . . . . . . 123
6.3.2 Simulation Algorithms . . . . . . . . . . . . . . . . . . . 126
6.4 Numerical Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.4.1 Initial Validation . . . . . . . . . . . . . . . . . . . . . . 127
6.4.2 LPDA and real flames . . . . . . . . . . . . . . . . . . . 129
6.4.3 Performance . . . . . . . . . . . . . . . . . . . . . . . . 135
6.4.4 Further Comparison . . . . . . . . . . . . . . . . . . . . 136
6.4.5 Potential Applications . . . . . . . . . . . . . . . . . . . 138
6.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7 Conclusion 140
Chapter 1
Introduction
Soot is generally regarded as a pollutant. There are many reasons for this; one is
the way that it collects as a black or grey deposit on buildings, an issue discussed
in [25]. This discolouring is especially noticeable on stonework that dates back to
times when coal fires were in widespread use and which has not been shotblasted
since. However, the adverse effects of soot particles on those who inhale them,
which are a cause of concern on a continental scale [3], are more serious. There
is evidence of a causal link between soot inhalation and respiratory disorders [46]
suggesting that epidemiological observations are more than correlations. Arden
Pope III and Dockery [10] undertook an extensive consideration of statistical data
reported by many authors, which broadly supports the existence of causal links.
They also give some indication of the rather limited and imprecise nature of the
current understanding of the health effects of soot on populations, a subject previ-
ously addressed by Mauderly [124], and which must temper political discussion
of the topic. Interestingly, soot is also reported to have the beneficial effect of
absorbing unwanted chemicals from the environment [96], rather like activated
carbon.
Carbon black, which is basically soot produced deliberately in controlled con-
ditions, is widely used as a filler to give rubber better mechanical properties
[56, 115], for example, in automotive tyres. In addition to adding it to rubber,
carbon black is used in other composite materials [210] and to make inks.
These roles of soot, both as an industrial feedstock and as a pollutant,
generate a desire for models of soot formation that have significant predictive properties.
Manufacturers of carbon black want insight into its formation, to facilitate optimi-
sation of manufacturing processes for quality of product (measured, for example,
by particle size) and production cost. Others, typically designers of internal com-
bustion engines, want to understand the causes of soot formation to guide the
design of products that minimise emissions.
There is a third motivation for modelling soot formation: that of intellectual
curiosity. Fire is ubiquitous, yet detailed understanding of the chemistry of
hydrocarbon combustion has only developed in the last ten to twenty years: compare,
for example, the work of Maas and Warnatz [112] from 1988, in which only hydrogen
was considered, to that of Hoyermann, Mauß, and Zeuch [81] from 2004 where a
C4 hydrocarbon was successfully treated. While much remains to be done to un-
derstand hydrocarbon combustion, soot has the attraction of being even less well
understood.
1.1 Experimental Investigations of Soot
The main part of this thesis will deal with computational techniques for modelling
soot formation and growth. However, science is about models that describe the
real world and engineering is about changing that reality, so brief mention will
be made of some types of experimental work, which motivate the development
of the models considered later in this thesis and which might be used to validate
computational results.
As background to the experimental work discussed below, it may be helpful
to bear in mind that there is debate over the exact nature of soot and no definitive
model of how it forms. All one can say with confidence is that
• soot is mainly carbon with a modest amount of hydrogen and traces of any
metals present during its formation.
• soot particles form during fuel rich combustion via a process that includes
the formation of polyaromatic hydrocarbon molecules.
• soot particles contain many large two dimensional structures, which resem-
ble graphite.
1.1.1 Collection and Examination of Soot Particles
Careful study of samples of soot particles provides significant information about
them. A very early example dates from 1901 when Hartley and Ramage [78]
analysed the metal content of soot particles and were able to distinguish between
soot formed in two different combustion environments.
Analysis of the chemicals on the surface of soot particles, by washing samples
in an organic solvent followed by chromatography [27], has shown the presence of
multi-ring aromatic compounds and interestingly hinted, nearly forty years in ad-
vance [205], at a current topic in mechanism development [219]. The same kind of
collection and washing technique is still used in work to understand the structure
of the large molecules in soot [7, 233]. Chromatography and mass spectrometry
were used for the analysis reported in [7], while Yan et al. [233] also used nuclear
magnetic resonance and analysed the insoluble soot residue with Raman spec-
troscopy to measure its structural resemblance to graphite. Recently, laser des-
orption has been coupled to mass spectrometry [23, 149] to measure the relative
abundance of organic compounds on soot particles and thus, to probe the chemi-
cal mechanisms leading to soot formation from different fuels. In this way Oktem
et al. [149] identified a significant quantity of non-aromatic species on particles.
These straight chain compounds are not part of the standard soot growth pathways
in the modelling literature so extended models, perhaps involving alkynes [144],
will be required to explain this data.
On a slightly larger scale, transmission electron microscopy (TEM1) has been
used to investigate the aggregate structure of soot particles. An outline of the
fractal analysis usually performed can be found in [101] and the procedure has
reached the point at which Tian et al. [195] could automate it. TEM analysis is
important for testing light scattering methods [168], which are mentioned below.
1 In this thesis TEM will be used to refer to both the microscopy technique and, mostly in the plural form TEMs, to the micrographs thus produced.
Another feature of TEMs is that they show a quite irregular arrangement of the
chemical structures at the surface of soot particles; in this regard the high
resolution images in El-Leathy et al. [52, figure 1] seem typical of those collected in
many different situations.
TEM is not the only electron microscopy method used to image soot particles;
Scanning Electron Microscopy (SEM) was used by di Stasio [37] to help validate
a light scattering technique. Atomic force microscopy has also been used to pro-
duce very finely resolved measurements of the size distribution of young particles
collected by thermophoretic deposition, before the aggregate structures usually
analysed by TEM had formed [18].
These two classes of data indicate opportunities for model development and
validation in the areas of soot precursor and surface chemistry, and particle struc-
ture, collision and transport. Models that explain either class of data will require
multivariate descriptions of soot particles. This thesis will present the develop-
ment of an efficient numerical technique for such models and look at an applica-
tion to the modelling of the aggregate structure of soot. Surface chemistry mea-
surements are generally more recent than those of particle configuration; modelling
of surface chemistry is work that is just getting under way and will not be
addressed in any detail in this thesis.
1.1.2 In-Situ Light Scattering and Absorption
An early example of the use of light scattering to estimate particle sizes and to
infer a multi-modal particle size distribution can be found in Erickson et al. [53].
The quantitative analysis of the observed data depends on the uncertain value used
for the complex refractive index of soot. This problem is discussed in [53] and
later in work on the fractal structure of soot particles [187]; it is still the subject of
research [13]. To avoid this problem, a light scattering method to measure the size
of the primary particles or sub-units in soot aggregates, which does not depend on
the complex refractive index of the soot, has been suggested by di Stasio [37]. A
detailed review of fractal structure investigation by light scattering is presented in
[185].
Fractal structure light scattering measurements are usually accompanied by
soot volume fraction measurements based on the amount of light absorbed [70,
234] since this is a rather important quantity. These measurements again have
the disadvantage of depending on the somewhat uncertain refractive index of the
soot being measured. This is a significant issue because comparisons of predicted
and measured soot volume fractions are a standard model validation technique.
Different optical techniques can give very similar results using a common value
for the refractive index [33], but other values lead to inconsistencies. Values from
supposedly precise laser measurements must therefore be treated with care.
Laser Induced Incandescence (LII) [129, 161] detects black-body radiation
emitted by soot particles heated by an incident laser beam; soot volume frac-
tion can be inferred from the initial intensity of the incandescence and particle
size from its decay. Because it uses re-emission LII can be used as a point sens-
ing technique for spatially resolved data [161]. However LII systems have to be
calibrated, for example, by comparison to an absorption method. A technique
has been presented by de Iuliis et al. [32], which takes measurements from two
incident wavelengths to avoid the need for calibration. This introduces some
dependence on the refractive index, but only on the ratio of the index at the two
wavelengths used; note that many papers assume a constant value
for all wavelengths. An uncertainty of 13% is quoted for the soot volume fraction
measurements reported in [32]. The laser excitation stage of LII can also excite
fluorescence in large soot precursor molecules [31] offering some extra informa-
tion on the problematic soot inception process.
These optical techniques allow for quite large amounts of data on the bulk
properties of soot to be collected at a range of locations in a combustion system.
This means that models of soot particle population dynamics may be tested by
comparing to a set of data points from the same system. Extremely high numerical
precision is superfluous and this thesis will show ways to simulate the predictions
of a soot model at many points with modest computational resources, provided
one is willing to accept a small amount of numerical imprecision.
1.1.3 Other scattering
Naturally, wave scattering experiments have not been confined to visible or near-
visible light. Small Angle X-ray Scattering (SAXS) has been used to detect very
small soot or precursor particles (< 3 nm) in diffusion flames and to draw in-
ferences about the shape of some of these particles [40, 79]. Similar results are
reported for laminar premixed flames by Gardner et al. [63]. SAXS is a line of
sight technique, but it is also possible to look at x-rays which are scattered much
further from their initial direction. Detection of radiation scattered by more than
15–20° is known as Wide Angle X-ray Scattering (WAXS) [151]. A comparison
of WAXS and SAXS is given by Ossler and Larsson [152] in their introduction;
WAXS provides more information about the chemical substructure of the soot but
less about the sizes of the particles.
Moving away from electromagnetic radiation, scattering experiments have
also been performed using neutrons [132, 208]. Interpreting neutron scattering
data for soot presents a difficulty analogous to the refractive index problem for
light scattering—the carbon-hydrogen ratio [241]. However, Small Angle Neu-
tron Scattering (SANS) provides an alternative way to measure mean particle size
and soot volume fraction so that two independent methods may be compared.
1.1.4 On-line Particle Sampling
The final class of methods that will be mentioned in this brief review of
experimental techniques involves continuous sampling from a flame via a small tube.
The procedures for doing this require multiple dilution stages to try to quench
all reactions and stop evolution of the soot particle size distribution, see Maricq
et al. [122] and the more detailed account of Zhao et al. [239]. The gas stream
thus collected with entrained soot particles is then fed into various types of anal-
ysis equipment to determine what particles are present, generally to produce an
estimate of the Particle Size Distribution (PSD).
One way to proceed is to use Differential Mobility Analysis (DMA) [28],
which selects particles with a particular charge to mobility ratio. Diffusion of
small particles limits the precision of this technique for very small soot
particles, but it has been used in combination with Condensation Particle Counting (CPC) to
measure particle size distributions down to sizes of 3 nm [122, 238]. The combi-
nation of DMA, with the selection voltage scanned to successively select particles
of different mobilities, and CPC is known as Scanning Mobility Particle Sizing
(SMPS). The multiple stages of dilution involved in this process make it quite
hard to obtain absolute values of the PSD; instead, Zhao et al. [238, 240] report
values scaled to make the PSD a probability distribution, although absolute values
are reported by Stipe et al. [189]. More detailed information on the method can
be found in the introduction to [192].
DMA selects one size/mobility of particle at a time, but it is also possible
to sort a particle stream with similar electrical mobility techniques and detect
the output in all the size/mobility classes at once [192]. This recent method is
known as Differential Mobility Spectrometry (DMS); it offers a practical way
of collecting large numbers of particle size distributions from flames [54]. The
ability to collect successive spectra at intervals of less than 1 s is also interesting,
particularly for automotive applications, which have been the main use of the
technique so far [192].
A sampled and diluted gas stream may also be fed into a mass spectrometer
[72]. This method cannot detect large soot particles but gives very detailed in-
formation on the sizes of small soot particles and smaller structures containing
only tens of C atoms [180]. This is very similar to the use of mass spectrometry
referred to in §1.1.1.
The techniques mentioned in this section are most interesting for their abil-
ity to measure particle size distributions. Their main limitations are problems in
relating different mobility properties to particle structure. Data collected in this
way offer great scope for testing models of particle coagulation and growth and, in
particular, models which can predict particle mobility as well as mass.
1.2 Combustion Chemistry
Having given some indication of the data that is available for testing models, it
is worth considering the main input to a soot model—the chemical environment.
The work in this thesis will concentrate on numerical methods for treating soot
particle inception and growth, that is, stages 4 and 5 of figure 1 in McEnally
et al. [125]. Consequently the results of combustion chemistry calculations will
be taken largely for granted. The mechanism for the calculations underlying parts
of this work is that of Wang and Frenklach [207]. Chemical kinetic data in [207]
was taken from experimental studies and quantum mechanical calculations. As is
inevitable for any large, new reaction scheme, values were not available for some
constants and so had to be estimated from data for similar reactions. This kinetic
model was then incorporated in soot calculations and compared to a range of
experimental data for soot volume fractions and, in some cases species concentra-
tions [8]. On the basis of comparisons with the multiple sets of experimental data
available to them, the authors adjusted some constants, within plausible ranges,
to produce a mechanism that became a widely used standard. This mechanism is
known as the ABF mechanism after the authors of the paper Appel, Bockhorn,
and Frenklach [8].
McEnally et al. [125] provide an extensive review of work that has been done
on mechanism development and some of the problems involved, in particular, that
much of this work involves under-determined optimisation problems. It is there-
fore not surprising that they give an example of two mutually exclusive but inter-
nally consistent sets of results. The general development of combustion modelling
is much too broad a topic to address here. Readers may be interested to consult
[130], which also challenges the route of Wang and Frenklach for the formation of
the first aromatic rings. Different routes for this step are discussed by Frenklach
[58]. The growth of aromatic structures seems to be the key connection between
combustion chemistry and soot formation and so will be considered a little more.
1.2.1 Polyaromatic Hydrocarbon Growth
Because polyaromatic hydrocarbon (PAH) molecules are the initial building
blocks for soot particles [8, 40, 207] and soot particles have a large graphitic
component [38, 149, 151], models for the growth of multiple ring structures have
attracted a lot of attention. Being able to quantitatively predict the formation of
the PAH species that are responsible for soot particle inception is very impor-
tant if soot models are to reproduce all the features of particle size distributions
[179]. Singh, Patterson, Kraft, and Wang [179] also highlight the significant con-
sequences of the model for soot particle growth by addition of new aromatic units
to the existing PAH / graphite structures in particles.
Some insight into these processes is offered by detailed experimental work
to characterise the chemical structure [152, 176] and physical shape [40] of the
components of soot particles. The first conclusion to draw from these studies is
that soot is complicated—a wide range of graphitic structures are observed, to say
nothing of the aliphatic species detected in [149]. Along with these observations
goes a large amount of analysis seeking to elucidate a detailed growth
mechanism, since the Hydrogen Abstraction Carbon Addition (HACA) scheme [60] used in the
ABF mechanism cannot be a complete model, because ring closure by acetylene
addition fills in ‘bays’ on the edges of PAH molecules without generating more
such structures [176]. More detailed mechanisms involving five member rings
have been developed by Frenklach and co-workers to provide a more complete
description than simple HACA [61, 219]. They performed quantum mechanical
calculations using density functional theory to identify elementary reaction steps
and their activation energies as well as product structure. The note in [61], that
calculations were carried out on a “Xeon cluster”, confirms that a number of years
of Moore’s law2 growth in computing power will be required before these calcu-
lations can be made for individual particles in flame simulations.
A similar approach has been followed by Violi and co-workers [30, 204, 206].
2 Moore’s law refers to an observation, in 1965, by Gordon Moore, the founder of Intel, that implies computing power on chips roughly doubles every 18 months [137]. With 40 years more data he could still say the same thing!
Unlike the ABF model, they conclude [30] that acetylene surface growth, which
they also include, is not an important means for mass to enter the nanoparticle
and soot phase, because most of the soot mass is already in PAH molecules (al-
though not in soot particles) quite early in lightly sooting flames. By implement-
ing their models into a hybrid kinetic Monte Carlo-molecular dynamics simulation
[201, 203] they were able to simulate the growth of PAH molecules up to a few
thousand C atoms including details of all the atoms and bonds. Monte Carlo step-
ping was used to grow the particle according to the rates determined by the con-
figuration and chemical environment. The configuration of the molecule was then
relaxed with a few picoseconds worth of molecular dynamics simulation. These
calculations support observations of high aspect ratio soot precursor/sub-primary
particles [40]. The authors of these papers do not comment on the computational
requirements of their methods, however, the title of a related paper by some of
the same people [103] begins with the words “massively parallel”! Nevertheless,
by means of ‘coarse grained’ molecular dynamics [87], in which potentials are
matched to groups of atoms, this work has been extended to include coagulation
from an initial population of 10,000 PAH molecules each of 200 C atoms [86].
The coarse grain potentials were chosen to match atomic scale molecular dynam-
ics calculations before the simulations were started. This means that, while the
method gives some new information about the way soot particles might form, it
cannot yet be incorporated in flame simulations. The problem is that atomistic
molecular dynamics calculations would be required for every new particle that
formed, in order to calculate the potentials for use in the coarse grained part of
the simulation. When the computational demands of these calculations have been
reduced and the available computing resources increased, the present author looks
forward to seeing the natural merging of the work just discussed and the much less
detailed Monte Carlo simulations that are the main topic of this thesis.
Examples of the scale of model that is currently feasible for PAH growth in
simulations of soot formation are given in [167, 218]. Further progress should
be possible as adaptive methods [107, 165, 175], which simplify the chemical
mechanism according to the local conditions, release resources for more detailed
PAH modelling.
1.3 Population Balance Modelling
The above sections attempt to provide a little background to the work on soot
modelling in this thesis. A general, spatially homogeneous population balance
equation [164] is
∂f(x, t)/∂t + ∇ · [A(x, t) f(x, t)] = h(x, t)                    (1.1)
where f is the density of the particle distribution, x is the particle type and the
model is specified by A and h. Continuous changes to particles, effectively con-
vection in particle type space, are encoded in A. In certain situations there is
some cross-over between A and h. Processes which lead to small discrete change
in particle type are naturally incorporated in h by two terms; one for the death
of the original particles and one for the birth of the changed particles. However,
such processes may be approximated by a continuous change of particle type and
moved into A. This approximation is made in all the deterministic calculation
methods considered in this chapter and in chapter 4. In situations where there
is only particle growth and no coagulation, Matsoukas and Lin [123] have shown
that a simple continuous approximation to discrete growth loses the diffusive / dis-
persive character of the solution to the population balance problem. The Monte
Carlo methods used in this thesis remove any need for continuous approximations
to discrete processes. The errors introduced in the alternative methods considered
in this chapter are likely to be small since the overall structure of the solution to a
population balance problem will normally be dominated by the effects of coagu-
lation when it is present.
In this thesis, unlike in Ramkrishna [164], populations are treated as homogeneous
in physical space, so terms in spatial position do not appear in (1.1). A
book-length review of approaches to (1.1) is given in [164], so
the next paragraphs concentrate mainly on techniques that have been
applied to solve the population balance equations for soot and other combustion
generated particles.
1.3.1 Explicit Discretisation of Type Space
The simplest approach to the population balance problem is to reduce the number
of particle types to a manageable level, that is, to impose a finite grid on the
particle type space. The dynamics of the population size within each grid cell are
then approximated in a computationally tractable way. In some situations there
may be a natural discretisation. For the soot formation problems considered in
this thesis a natural discretisation is given by counting the number of carbon atoms
in a particle. When such a discretisation exists, one may express the population
balance problem as a countably infinite sequence of coupled ordinary differential
equations [198] (assuming spatial homogeneity), truncate this series and solve
the differential equations by standard methods while monitoring the truncation
error. This is occasionally useful as a validation technique (see chapter 2) for
other algorithms on specially constructed test problems. However, the number of
equations that have to be considered to keep the truncation error within acceptable
limits means the method is rarely useful for application to physically realistic
systems.
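To make the truncated-ODE approach concrete, the following sketch (the kernel, truncation level and step size are illustrative choices of the editor, not taken from [198]) integrates the truncated Smoluchowski system for the size-independent kernel K ≡ 1 with a fixed-step RK4 scheme. For a monodisperse initial condition the zeroth moment has the analytic solution M0(t) = M0(0)/(1 + M0(0) t/2), against which the integration can be checked.

```python
def rhs(n):
    """Truncated Smoluchowski right-hand side dn_k/dt, k = 1..N, kernel K = 1."""
    N = len(n)
    M0 = sum(n)
    dn = [0.0] * N
    for k in range(N):                 # index k corresponds to particle size k + 1
        birth = 0.5 * sum(n[i] * n[k - 1 - i] for i in range(k))
        dn[k] = birth - n[k] * M0      # births from smaller pairs minus all deaths
    return dn

def rk4_step(n, dt):
    k1 = rhs(n)
    k2 = rhs([x + 0.5 * dt * v for x, v in zip(n, k1)])
    k3 = rhs([x + 0.5 * dt * v for x, v in zip(n, k2)])
    k4 = rhs([x + dt * v for x, v in zip(n, k3)])
    return [x + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(n, k1, k2, k3, k4)]

N, dt, steps = 100, 0.02, 50           # truncate at 100-mers, integrate to t = 1
n = [0.0] * N
n[0] = 1.0                             # monodisperse initial condition
for _ in range(steps):
    n = rk4_step(n, dt)

mass = sum((k + 1) * c for k, c in enumerate(n))
m0_exact = 1.0 / (1.0 + 0.5 * 1.0)     # analytic M0 at t = 1
```

The deficit 1 − mass is exactly the truncation error to be monitored: births beyond size N are discarded by the truncated system.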
Fixed Sectional Methods
Methods in which a grid is placed on the particle type or state space, and an a pri-
ori assumption made about the shape of the population density within each section
or grid cell, are known as fixed sectional methods. The idea appears to have been
introduced by Gelbard and Seinfeld [65]. Hounslow et al. [80] first adapted the
equations to conserve both particle number and mass, a subject covered in more
depth in [104], where it is noted that an arbitrary number of moments could be
conserved. Conservation of a larger number of (optionally centered) moments has
been addressed recently by Alopaeus et al. [6] who demonstrate increased accu-
racy and suggest that the extra CPU time required per section can be recouped
by reducing the number of sections. The method was used to simulate soot under
the assumption of spherical particles in an axisymmetric laminar diffusion flame
[182] without any apparent difficulties due to the stability problems at low particle
concentrations mentioned in [80, 105]. The diffusion flame soot and detailed
chemistry model were solved in a fully coupled calculation, using time marching
and grid refinement to get the full spatial solution, and so fixed sectional methods
have proved fruitful tools for understanding the evolution of the gas and particu-
late phase in different flame regions [183, 184].
Gridding may also be applied to multi-dimensional particle type spaces [227]
(that is, where particles are described by more than one independent co-ordinate
such as particle mass and surface area) and was of particular relevance in the in-
vestigation of sintering in inorganic nano-particles. Simplifications to the bivari-
ate sectional technique have been developed [142] but it remains computationally
demanding. The method seems unattractive for models with a particle type of
dimension greater than two because the number of sections grows exponentially
with the number of dimensions. However, using a fixed sectional method and
some analytic expressions for the coagulation source terms, Immanuel and Doyle
III [85] were able to perform computations for a trivariate model with computa-
tion times on unspecified hardware of around 1 hour. A paper by the same authors
published [84] two years earlier (for a 1-dimensional model) reported computa-
tion times on machines with twin CPUs running at 800 MHz so, even if the same
hardware was used to produce the 1 hour figure, the demands of the trivariate case
are seen to be substantial.
A partial solution to this problem is actually presented in the original paper
of Gelbard and Seinfeld [65] by allowing additional particle properties to vary as
functions of the main property, which was directly discretised into sections. This
has the effect of restricting the support of the number density to a 1-dimensional
manifold but allowing that manifold some freedom to move over the higher di-
mensional type space. These models can be described as ‘1.5-dimensional’ al-
though the phrase ‘embedded 1-dimensional’ might be more mathematically pre-
cise. They have been successfully validated against more detailed 2-dimensional
sectional calculations for inorganic nanoparticles [89] and found to reduce com-
putation times by over 3 orders of magnitude. The accuracy of these ‘embedded
1-dimensional’ methods is not surprising since the distributions reported in [227–
229] are approximately ridge shaped. Embedded 1-dimensional methods have
been used to provide a computationally efficient representation of particle shape
during the development of improved soot chemistry models [216, 218]. They are
well suited to this application: their low computational cost leaves resources free
for the chemistry solver, and they retain the information on the amount of particle
surface available for chemical reactions that is important for physical accuracy.
Modelling the data from the experimental methods mentioned in §1.1 will re-
quire multivariate descriptions of soot particles. As mentioned previously, the
applicability of sectional methods to multi-dimensional situations is rather lim-
ited, because, in the absence of special problem structure, the number of sections
grows exponentially in the number of variables. In addition to this, fixed sectional
methods also face problems with numerical diffusion because small increments
of mass growth on particle surfaces are treated via the divergence term on the left
hand side of (1.1). Some progress on this issue has been made [155] and used to
model soot structure in plug flow reactors [156]. The linear approximation to the
distribution within each section suggested in [226] might also be expected to help
in this regard. A review with extensive numerical comparisons between fixed sec-
tional methods is given by Vanni [198] who recommends the form of the method
described by Kumar and Ramkrishna [104]. Fixed sectional methods, at least in
their more basic forms, are a special case of the finite element methods considered
below [84].
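The two-point assignment at the heart of the fixed pivot technique recommended by Vanni [198] can be stated in a few lines; the pivot and mass values below are illustrative, not taken from [104].

```python
# Fixed-pivot sketch: a coagulation product of mass v falling between two
# grid pivots x_lo < v < x_hi is split between those pivots so that both
# particle number and mass are conserved.

def split(v, x_lo, x_hi):
    """Fractions (a, b) assigned to pivots x_lo, x_hi for a particle of mass v.

    Chosen so that  a + b = 1               (number conservation)
    and             a*x_lo + b*x_hi = v     (mass conservation).
    """
    a = (x_hi - v) / (x_hi - x_lo)
    return a, 1.0 - a

a, b = split(5.0, 4.0, 8.0)   # a particle of mass 5 between pivots 4 and 8
```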
Moving Sectional Methods
In the simple case of a plug flow reactor (nothing more complex will be studied in
this thesis) the sectional model can be developed further to address the problem of
numerical diffusion between sections. This arises, as mentioned above, because
surface growth causes small changes in size to all particles within a section. If the
section boundaries are fixed, some particles will therefore cross into neighbouring
sections. However, without knowing the distribution of the particles within the
sections it is not possible to account for this accurately.
The solution proposed in [105] is to move the section boundaries to match the
deterministic size changes due to surface reactions. The method, with some addi-
tional development, has been applied to TiO2 synthesis [197] where it was used
to explore particle growth pathways and the transition from coalescent to aggre-
gate particle development, both current issues in soot research. Wen et al. [215]
demonstrate the use of a moving sectional technique with an approximation for
particle shape, coupled to a chemical kinetics solver for plug flow reactors. This
raises the possibility of the first fully coupled premixed laminar flame simulations,
neglecting streamwise diffusion, that include arbitrary soot particle size distribu-
tions. However, as shown in a follow-up paper, in which the authors compare
moving and fixed sectional techniques for a plug flow reactor, the computational
demands are considerable—over 12 hours for their recommended method on a
machine with a 1.8 GHz Intel Xeon CPU [217].
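The pivot-moving idea itself is simple to sketch. In the fragment below (the growth law, grid and populations are illustrative assumptions of the editor, not taken from [105] or [215]) the pivot masses follow the deterministic surface growth law, so growth never pushes particles across fixed section boundaries and no numerical diffusion arises; coagulation, which would change the section populations, is omitted.

```python
def g(x):
    # illustrative surface growth rate, proportional to surface area ~ x^(2/3)
    return x ** (2.0 / 3.0)

pivots = [1.0, 2.0, 4.0, 8.0]       # section representatives (particle masses)
numbers = [10.0, 5.0, 2.0, 1.0]     # particles per section: unchanged by growth

dt = 0.01
for _ in range(100):                # forward Euler for each pivot, up to t = 1
    pivots = [x + dt * g(x) for x in pivots]

total_mass = sum(n * x for n, x in zip(numbers, pivots))
```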
The above work assumes that, within each section, the particle distribution is
concentrated at a ‘pivot’ point [105]. Analogously to the work of Wynn [226]
for the fixed sectional case, finite elements may be used to model the distribution
within sections instead of delta-functions. This allows for more accurate calcu-
lation of the source terms for coagulation and any other effects that cannot be
represented as continuous particle growth. Hu et al. [82] used cubic spline ele-
ments, and this seems to address the dispersion issues that can arise due to the
artificial distribution of coagulation products between sections [105]. The cost of
this extension is that the inter-sectional coagulation rates have to be calculated by
a more complex quadrature method rather than with the simple formulæ of the
pivot case.
One alternative to the traditional quadrature formula is to use Monte Carlo
methods, which exploit the stochastic nature of coagulation, for the coagulation
integrals. Sun et al. [191] implement this approach with a time-driven Monte Carlo
method but do not consider general distributions within sections.
Sectional Methods and CFD
Fixed sectional methods are well suited to incorporation in Computational Fluid
Dynamics (CFD) calculations [156, 160] because transport simply moves par-
ticles between identical sections at different spatial locations. Moving sections
are more troublesome because the correspondence between particle sections in
adjacent fluid cells is not known. The work of Pyykonen and Jokiniemi [160] ad-
dresses this problem by mapping between the two sectional approaches: they used
a moving sectional technique to simulate growth processes in Lagrangian stream
tubes, but transformed their population onto fixed sections to include cross stream
tube transport (and coagulation) at intervals of a few steps. The same approach
is used in aerosol modelling for air quality purposes [98] where it is attributed to
other papers published at the same time as [160].
Examples of the use of fixed sectional techniques to simulate flame aerosols in
a laminar flow field include [237] and, more recently, the study of unwanted par-
ticle formation during a continuous vapour deposition process similar to that used
in computer chip manufacture [190]. Turbulent flow fields require more com-
putational power. For example, Miller and Garrick [131] coupled only 10 fixed
sections to DNS of a planar jet (Reynolds number 4000), but their computation
took 10⁵ CPU hours, which they spread over 250 CPUs in a parallel computa-
tion. In their conclusion Miller and Garrick suggest that using turbulence models
should bring calculation times below 1 CPU day. This is of limited practical use in
the short term, but advances in algorithms and computer power should eventually
enable coupling to turbulent systems with sectional methods that produce detailed
particle size distributions. In the view of the present author, other particle based
techniques are likely to enable this goal to be reached more quickly.
Finite Differences
The sectional methods discussed above arise from the structure of the problem as
a population balance. Methods derived for general differential equation problems
are also applied, for example finite differences. The author is not aware of any
papers in which finite differences have been used for soot population modelling,
but good agreement between computations and measurements is reported for
the growth of silver bromide crystals in [143]. Some authors report problems
with the stability of the method [113, 221], but Braatz and co-workers seem to
have found the method tractable for crystallisation problems [75] with splitting to
allow for a bivariate population balance problem. Coupling to CFD is reported by
the same group [222] suggesting there are no major numerical problems, although
quantitative comparisons to experimental data have not yet been published.
Finite Elements
Moving further into the standard tool kit of differential equation techniques, one
comes to finite elements. The principle of these methods is to project the solu-
tion within a grid cell onto a pre-determined finite dimensional vector space of
functions with finite support [174]. The projection will not, in general, exactly
solve the population balance problem and the projection is chosen to minimise
the weighted integral of the difference or residual. There is thus a large overlap
between finite element techniques and the method of weighted residuals [164],
although neither class of methods contains all examples of the other. In particular
weighted residuals can be used with ‘infinite elements’, such as Hermite [77] and
Laguerre [83] functions, to give spectral methods for the distribution; these will be
mentioned briefly in the section on integral methods below (§1.3.2).
Collocation methods use Dirac delta functions as the weighting for the resid-
ual integral and require that the residual be 0 at a particular set of points within
each interval. Gelbard and Seinfeld [64] used collocation with cubic basis func-
tions to solve problems for which analytic solutions were available for compari-
son. Because collocation is only concerned with the residual at a discrete set of
points, much less quadrature is required for the coagulation source terms than in other
weighted residual methods [64, 145]. However, there are reports that collocation
methods are less stable than Galerkin methods, both when solving steady state
[146] and dynamic [113] problems.
Galerkin methods use the basis functions themselves as the weights with
which to integrate the residual. As mentioned above, this makes the calculation
of the coagulation source terms rather expensive, although the level of precision
in the calculation of these integrals is not too significant for the final results [173].
Roussos et al. [173] present results using a range of different order polynomials
as the basis functions and show quadratic basis functions perform better than cu-
bics and quartics, contradicting Mantzaris et al. [117] who recommend the use of
quartics. Non-polynomial basis functions have also been successfully used, for
example, sawtooth functions [158]. Unlike finite difference and collocation tech-
niques, Galerkin methods have been coupled to chemical kinetics and used to test
extended soot formation models [2, 9, 144].
Least square methods [42, 164] minimise the integral of the square of the
residual over the (truncated) particle domain. Dorao and Jakobsen [44] extend the
method to handle particle type, physical space and time in a unified way. They
report numerical tests but, understandably, do not have an equally sophisticated
alternative method available for comparison. The direct quadrature method of
moments discussed below should provide some opportunities for such compar-
isons in the future. Arbitrary flow fields can be incorporated in the method of
Dorao and Jakobsen [44], but have to be precalculated.
For obvious reasons many authors repeat their calculations using a range of
numerical parameters, for example polynomial order or grid spacing. At the cost
of increased computational complexity, adaptive methods [45, 225] may also be
used, as for differential equations arising in other areas of science.
Mono-Disperse Assumption
The most extreme case of discretisation is to use a single point grid, that is, to
assume a mono-disperse population. Such a method will yield very little infor-
mation about a particle population and risks major systematic error. On the other
hand it is very simple to implement, fast to compute and is sufficient for providing
estimates of particle number and total mass as part of more complicated calcula-
tions [111]. With this method on a two-dimensional particle type space, that is,
by tracking total particle number, volume and surface area, Tsantilis and Pratsinis
[196] are even able to model the onset of aggregation in the formation of silicon
nanoparticles.
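A mono-disperse model of the kind described above can be written in a few lines; the coagulation and growth rate constants below are illustrative, not taken from [111] or [196]. Only the total number density N and total mass M are tracked, with the mean particle mass M/N the sole remaining shape information.

```python
K0 = 1.0      # size-independent coagulation rate constant (illustrative)
G0 = 0.1      # surface growth mass addition rate per particle (illustrative)

def step(Np, M, dt):
    dN = -0.5 * K0 * Np * Np      # coagulation removes particles ...
    dM = G0 * Np                  # ... while surface growth adds mass
    return Np + dt * dN, M + dt * dM

Np, M = 1.0, 1.0                  # initial number and mass densities
for _ in range(1000):             # forward Euler to t = 1
    Np, M = step(Np, M, 1e-3)

mean_mass = M / Np                # grows as particles merge and gain mass
```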
The continued use of mono-disperse models is an important reminder of what
can be achieved with simplicity. However, since this thesis is concerned not with
what can be achieved with the minimum computational effort, but with the maximum
that can be achieved with desktop computing power, a detailed review of the
literature on this topic is beyond its scope.
1.3.2 Integral Methods
This class of methods includes the mono-disperse assumption as a degenerate
case and is also motivated by the observation that, for many purposes, the full
particle size distribution is not necessary. Instead it may be sufficient to consider
the moments, or other integrals, of the distribution. For a k variable population
balance with population density f the moments are defined by
\[
M_{i_1, i_2, \ldots, i_k} := \int \Bigl( \prod_{j} x_j^{i_j} \Bigr) f(x_1, x_2, \ldots, x_k) \, dx_1 \, dx_2 \cdots dx_k , \tag{1.2}
\]
where the i_j will be called the exponents of the moment. Only in very limited
situations [83] is it possible to construct a closed set of moment equations, for an
example see Terry et al. [193].
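For the size-independent kernel K(x, y) = K0 the first three moment equations do close exactly, giving a small worked case in the spirit of the example cited from Terry et al. [193] (initial values and rate constant here are illustrative): dM0/dt = −K0 M0²/2 (coagulation reduces particle number), dM1/dt = 0 (mass is conserved), dM2/dt = K0 M1² (the distribution broadens).

```python
K0, dt = 1.0, 1e-3
M0, M1, M2 = 1.0, 1.0, 1.0
for _ in range(1000):                          # forward Euler to t = 1
    M0, M1, M2 = (M0 - dt * 0.5 * K0 * M0 * M0,
                  M1,
                  M2 + dt * K0 * M1 * M1)

# analytic check: M0(t) = M0(0)/(1 + K0*M0(0)*t/2),  M2(t) = M2(0) + K0*M1^2*t
```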
Functional Closures
This category comprises the methods in which closure is achieved by making as-
sumptions about the shape of the population distribution or the form of the depen-
dence of the moments on the exponents. One approach is to use series expansions,
for example using Hermite [77], Laguerre [20, 83] or other [116] functions. The
underlying principle of such approaches is to estimate the size distribution from a
finite number of its moments and then use the estimated distribution to calculate
additional quantities. One of the simpler applications of the technique is the
assumption of a log-normal distribution [159, 220]. The ‘moment problem’ (how
to estimate a distribution from a set of moments) has been extensively considered
in many settings. Some recent information may be found in [12].
Many flame simulations have simply fitted a polynomial to the moments or,
more usually, their logarithms, at the exponents for which they are known and then
interpolated and extrapolated to find moments with other exponents. Different
piece-wise polynomial interpolants have been investigated [41] and small amounts
of extrapolation were also found to work. It is also observed in the same paper
that assuming a log-normal shaped particle size distribution is equivalent to using
a quadratic interpolant on the first three moments [41].
Frenklach and Harris [59] introduced a method in which polynomials were
fitted to (the logarithms of) the moments for simulating soot aerosols in the free
molecular regime. The collision rates in the free molecular regime could not be
expressed as power laws so the authors introduced ‘grid functions’ to enable an
approximate closure of their equation system. To reduce the error associated with
extrapolation to moments with negative exponents they also introduced a moment,
µ_{−∞}, defined for the single-variable case they were dealing with by
\[
\mu_{-\infty} = \lim_{i \to -\infty} \frac{M_i}{x^i} . \tag{1.3}
\]
This was possible because the soot model used defined a smallest particle size, x.
Consequently, µ−∞ could be interpreted as the number of particles of this size.
Introducing µ−∞ meant moments with negative exponents could be found by in-
terpolation much more accurately than would be possible by extrapolating from
the moments with positive exponents. A small amount of extrapolation was re-
quired to reach moments with exponents greater than 3, but the method became
known as the ‘Method of Moments with Interpolative Closure’. It will be abbre-
viated MoMIC in this thesis, except when the extrapolation is to be emphasized,
when MoMIEC will be used.
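The interpolation step can be illustrated as follows (a sketch, not the MoMIC implementation of [59]): Lagrange interpolation of log M_i against the integer exponents i, tested here on a log-normal population, for which log M_r is exactly quadratic in r — the equivalence with the log-normal closure noted in [41].

```python
import math

def lagrange_log_moment(exps, moments, r):
    """Interpolate log10(M_r) from known (i, M_i) pairs, then exponentiate."""
    logs = [math.log10(m) for m in moments]
    total = 0.0
    for j, (i_j, L_j) in enumerate(zip(exps, logs)):
        w = 1.0
        for k, i_k in enumerate(exps):
            if k != j:
                w *= (r - i_k) / (i_j - i_k)     # Lagrange basis weight
        total += w * L_j
    return 10.0 ** total

# log-normal moments obey M_r = exp(r*mu + (r*sigma)^2 / 2), so log M_r is
# quadratic in r and three-point interpolation recovers fractional moments exactly
mu, sigma = 1.0, 0.5
M = [math.exp(i * mu + 0.5 * (i * sigma) ** 2) for i in range(3)]   # M0..M2
M_half = lagrange_log_moment([0, 1, 2], M, 0.5)
exact = math.exp(0.5 * mu + 0.5 * (0.5 * sigma) ** 2)
```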
To extend coagulation calculations to soot particles in the transition regime
between the free molecular and slip flow cases Kazakov and Frenklach [92] used
a harmonic mean interpolation for the moment change rates. This was based on
work in [159] and has been very widely used despite the fact that the moment
change rates cannot be derived from any particle-particle interaction rule. In a key
paper for the development of soot models [8] the method was further extended
to allow particle surface growth rates to depend on the mean particle size. This
introduced another point where the method-driven requirement for the model to
be expressed in terms of moments led to a model that could no longer be de-
scribed in terms of particle dynamics. By fitting two parameters in the surface
growth expression, MoMIC calculations of soot volume fraction were able to get
within one order of magnitude of experimentally observed values for a range of
flames and better results were possible by optimising the parameters for individual
flames. MoMIC is reviewed in [57] and has continued to be used for soot model
development [14].
Because of its small size (fewer than ten scalars) MoMIC is highly suitable
for incorporation in reacting flow calculations to account for the effects of soot.
The development described above was carried out in the context of premixed lam-
inar flames. Moody and Collins [136] used a simple chemistry model with the
method of moments for titania particles in DNS calculations and showed that in-
creased levels of mixing resulted in narrower particle size distributions. Wang
et al. [209] used the soot model and method from [8] with a k − ε turbulent flow
calculation, but found their results to be significantly affected by the radiation
models they used. An alternative to polynomial interpolation is the Weyl trans-
form fractional derivative method of Alexiadis et al. [5].
Quadrature Methods of Moments
These methods can be described in several different ways. Their name arises be-
cause the particle population is represented by a series of weighted quadrature
points which are chosen to give precise values for certain moments. This makes
them like presumed distribution shape methods, the presumed shape being a fi-
nite combination of weighted Dirac-delta functions. The quadrature points may
be chosen on the basis of the leading terms of a series expansion of the distribu-
tion in its moments [20], which is a return to the ‘functional closures’ discussed
above. Alternatively, one can view quadrature methods of moments as determinis-
tic weighted particle methods, although the maximum number of particles that can
be handled is limited in some situations [43, 172] by ill-conditioning problems.
There are a number of different ways to calculate the evolution of the quadrature
points: one may take a statistical perspective on representing the unknown distri-
bution [235] or a more explicitly quadrature driven approach [119, 126]. Grosch
et al. [71] include a detailed account of the development of QMoM in their in-
troduction, which readers should consult for more detailed information on the
method. Their primary purpose though is to present a generalised framework for
QMoM, from which some new variants can be derived. QMoM can also be con-
structed as a method of weighted residuals [43].
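A hand-worked two-node case illustrates the construction (a sketch by the editor, not an implementation of [119] or [126]; for more nodes one would use the product-difference or Wheeler algorithms): the abscissas are the roots of the monic degree-2 orthogonal polynomial of the unknown distribution, built from M0–M3 by the standard three-term recurrence, and the weights then follow from matching M0 and M1.

```python
import math

def qmom2(M0, M1, M2, M3):
    m1, m2, m3 = M1 / M0, M2 / M0, M3 / M0
    c2 = m2 - m1 * m1                         # central second moment <P1, P1>
    a1 = (m3 - 2 * m1 * m2 + m1 ** 3) / c2    # three-term recurrence coefficient
    # monic P2(x) = (x - a1)(x - m1) - c2; its roots are the abscissas
    s, p = a1 + m1, a1 * m1 - c2
    disc = math.sqrt(s * s - 4 * p)
    x1, x2 = (s - disc) / 2, (s + disc) / 2
    w1 = M0 * (x2 - m1) / (x2 - x1)           # match M0 and M1
    return (x1, w1), (x2, M0 - w1)

# recover a known two-point population: weights 0.3, 0.7 at x = 1, 4
mom = [0.3 + 0.7, 0.3 * 1 + 0.7 * 4, 0.3 + 0.7 * 16, 0.3 + 0.7 * 64]
(x1, w1), (x2, w2) = qmom2(*mom)
```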
The original form of QMoM was readily extended to bivariate particle types
and a twelve-point scheme was found to be highly accurate, while smaller schemes
also produced reasonable results [223]. A simulation of the structure of alumina
particles in a precomputed laminar diffusion flame [172] predicted aggregate and
primary particle diameters that compared well with experimental data, showing
the method could be very useful for models of soot particle structure. The statis-
tical principal component approach can be extended to multivariate models [236]
as can the direct method [119]. The direct method (DQMoM) has been applied
to a range of complex real systems by its designers, most immediately relevant is
the fully coupled simulation of soot formation in a turbulent flame [244]. It seems
very clear that DQMoM, or possibly a slight variation of it, is suitable for use with
quite detailed soot models in reacting flow calculations, which attempt to predict
the kind of measurements outlined in §1.1. The main limitation would appear to
be in the area of reconstructing the underlying particle distributions.
1.3.3 Stochastic Particle Methods
The Direct Simulation Monte Carlo (DSMC) technique that underlies most of the
work in this thesis was first used by Bird [22]. In a short paper on the relaxation of
the momentum distribution in a rarified gas he outlines the basic idea: represent
the physical population of gas particles by a collection of computational particles,
but track only the particle property one is interested in—momentum in the original
application. An assumption of chaos within a homogeneous volume then justifies a
random generation of collision events. Also in the 1960s, the same idea was used
to simulate collision based droplet mixing [188], for which droplet composition
and not momentum was tracked. The standard reference for the DSMC in gas
dynamics is the book by the same author [21] and its later edition. An overview
of the method and some of its applications to gas dynamics, that is solving the
Boltzmann equation, is given by Oran et al. [150].
Approaches to Particle Growth Problems
In particle formation problems the particle property of interest is mass (and pos-
sibly structure) not momentum, so DSMC means specifying collision and coag-
ulation rates as expectations based on the assumption that all particles perform
independent random walks. A comprehensive mathematical review of stochastic
approaches to coagulation problems [4] is also notable for the number of numer-
ically motivated conjectures it contains. Mathematical results about stochastic
particle algorithms in the probability literature typically say that, in an appropri-
ate limit, a sequence of Markov chains converges to one or more weak solutions
of a population balance equation [34, 48, 49]. Sequences with multiple limits
have been constructed [147] but never reported by those using the technique in
scientific rather than mathematical situations.
The convergence approach mentioned in the preceding paragraph depends on
first finding the population balance equation, generally by taking a limit of an un-
derlying stochastic model. Direct Monte Carlo simulation of particle formation
problems did not begin with this relatively sophisticated approach via stochastic
calculus; the stochastic calculus formalised what was already being done. Gille-
spie [67] derived and precisely described a continuous time Markov chain Monte
Carlo simulation algorithm for coagulation based on a model of the underlying
physical system as a stochastic process. One of the key steps is the derivation of
exponential inter-event waiting times [55], which relies on a model of the under-
lying physical process that is equivalent to a continuous time Markov chain. Other
algorithms for the simulated time step are possible [110, 181] but still depend on
the same stochastic model for the underlying physical system.
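The essentials of such a direct simulation can be sketched for a size-independent kernel (parameter values are illustrative; this is the structure of the algorithm of Gillespie [67], not his code): the total coagulation rate fixes an exponentially distributed waiting time, after which a uniformly chosen pair of particles coagulates.

```python
import random

random.seed(42)

K0, V = 1.0, 1.0                  # kernel constant and sample volume (illustrative)
masses = [1.0] * 200              # initial monodisperse population
t, t_stop = 0.0, 1.0

while t < t_stop and len(masses) > 1:
    n = len(masses)
    rate = 0.5 * K0 * n * (n - 1) / V        # total coagulation rate
    t += random.expovariate(rate)            # exponential inter-event waiting time
    if t >= t_stop:
        break
    i, j = random.sample(range(n), 2)        # uniform pair: constant kernel
    mi, mj = masses[i], masses[j]
    for k in sorted((i, j), reverse=True):   # remove higher index first
        masses.pop(k)
    masses.append(mi + mj)                   # coagulate the pair

total_mass = sum(masses)                     # conserved exactly by each event
```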
The stochastic approach of constructing Markov processes that solve the pop-
ulation balance equation in some limit has given rise to algorithms other than
direct simulation [35, 48, 97]. That is to say, processes have been constructed
where computational particles and events can no longer be regarded directly as
the history of a small sample of the physical system. Two examples and further
discussion are given in chapter 6.
Applications of Stochastic Particle Methods to Particle Formation
A form of direct simulation was used by Wu and Friedlander [224] and Rosner and
Yu [171] to find self preserving size distributions for a bivariate particle sintering
model. Solving for a self preserving distribution removed the need to track the
relationship between simulation steps and time. While the method could have
been used to follow the dynamics, this was not done because it was not part of
the authors’ immediate goal. After Goodson and Kraft [68]³ addressed a key
numerical issue, Balthasar and Kraft [16] first outlined DSMC for soot formation
in laminar premixed flames and further application is found in [238]. Most of the
further development of this work is reported in this thesis along with references to
additional applications and so is not discussed further here. However, applications
to inorganic nanoparticles should be noted [138, 211, 212].
1.4 Mathematical Framework
This section provides an overview of the mathematical framework in which soot
formation and growth will be considered in this thesis. The paradigm is that:
• Individual soot particles may be completely described by elements of some
type space E on which addition, corresponding to coagulation, is defined.
• The soot population is described at time t by the number n(t, x) per cm³ of
particles of type x ∈ E.
³ In [68] DSMC is used in a much narrower sense than it is used in this thesis.
• n evolves according to the discrete Smoluchowski coagulation equation:
\[
\frac{d}{dt} n(t,x) = \Bigl( K_t(x) + \sum_{l \in U} S_t^{(l)}(x) \Bigr) \bigl( n(t, \cdot) \bigr) + I(t,x) . \tag{1.4}
\]
Implicit in these definitions is the assumption that E is countable, so that the
summations are meaningful. This assumption is common in the literature, but not
essential; later in this thesis it will be removed by replacing the sums with
integrals. The coagulation operator K, which is time dependent and non-linear, is
then defined by
\[
K_t(x) \bigl( n(t, \cdot) \bigr) = \frac{1}{2} \sum_{\substack{y, z \in E:\, y+z=x}} K_t(y,z)\, n(t,y)\, n(t,z) \;-\; \sum_{y \in E} K_t(x,y)\, n(t,x)\, n(t,y) . \tag{1.5}
\]
The first sum represents coagulation to form particles of type x, the second the loss
of particles of type x due to coagulation. K_t(x, y) defines a map from the
concentrations of particles of types x and y to their coagulation rate at time t,
given by K_t(x, y) n(t, x) n(t, y). K is known as the coagulation kernel and details
are given in [92, 177].
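As a numerical sanity check on (1.5), the operator can be evaluated on a finite type space E = {1, …, N}: restricting both sums to pairs with y + z ≤ N, the total mass rate vanishes identically while the total particle number decreases, since each event replaces two particles by one. The kernel and concentrations below are arbitrary illustrative values.

```python
N = 8
n = [0.0] + [1.0 / (x * x) for x in range(1, N + 1)]   # n[x], 1-indexed in x

def K(y, z):
    return 1.0 + 0.1 * (y + z)                         # symmetric kernel (illustrative)

def coag_rate(x):
    """Right-hand side of (1.5) on the finite type space, pairs y + z <= N."""
    gain = 0.5 * sum(K(y, x - y) * n[y] * n[x - y] for y in range(1, x))
    loss = sum(K(x, y) * n[x] * n[y] for y in range(1, N - x + 1))
    return gain - loss

mass_rate = sum(x * coag_rate(x) for x in range(1, N + 1))    # should vanish
number_rate = sum(coag_rate(x) for x in range(1, N + 1))      # strictly negative
```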
Surface reactions which only involve one physical particle at a time are de-
scribed by the linear operator S defined by
\[
S_t^{(l)}(x) \bigl( n(t, \cdot) \bigr) = \sum_{y \in E} \beta_t^{(l)}(y)\, P\bigl( g^{(l)}(y) = x \bigr)\, n(t,y) \;-\; \beta_t^{(l)}(x)\, n(t,x) , \tag{1.6}
\]
where l ∈ U, an index set for the surface reactions. In the context of the soot
model used in this thesis [8, 207]
\[
U = \{\, \text{C}_2\text{H}_2\ \text{addition},\ \text{OH oxidation},\ \text{O}_2\ \text{oxidation},\ \text{pyrene condensation} \,\} . \tag{1.7}
\]
β_t^{(l)}(x) is the rate at which a particle of type x undergoes surface reaction l at
time t.
g^{(l)}(x) is the type of the resultant particle when a particle of type x undergoes
surface reaction l. If the reaction removes the particle from the soot population, a
special value 0 ∈ E is used so that g^{(l)} is everywhere defined. It is possible
for g^{(l)} to be a random function; in the deterministic case the probability in (1.6)
is to be understood as an indicator function. This is an area in which the framework
is slightly more realistic than much earlier work, for example [217], in which
surface growth is modelled as a continuous process of infinitesimal changes.
I (t, x) is the rate at which particles of type x enter the system at time t. It is
defined by the soot inception model (since no transport problems are considered).
In the context of the soot model used in this thesis [8, 207], this means I is always
zero except when x represents a particle of 32 C atoms.
Particles of less than 32 C atoms are assumed to belong to the gas phase which
is simulated separately using the one-dimensional laminar flame code PREMIX and a Method of Moments approximation to allow for the removal of gas phase species into the soot population; see [16, 177] for details. This precalculation of the gas phase flame environment is very important because it yields the time dependence of β, K and I. Because soot particle diffusion is small and may be neglected,
one can then transform the problems considered in this thesis from boundary value
problems (a flame with two known ends) to initial value problems (a small parcel
of gas moving through a known flame). Two way coupling of stochastic particle
simulations to chemical reaction calculations is reported by Celnik, Patterson,
Kraft, and Wagner [26].
Throughout this thesis the initial condition of no particles will be used,
n (0, x) = 0 for all x ∈ E, since unburned gases entering a flame are assumed
not to contain soot.
1.4.1 Models for individual soot particles
Many different descriptions of soot particles are possible and provide varying lev-
els of detail. The three models most relevant to this work are set out here in de-
creasing order of detail. The molecular dynamics work of Violi and Venkatnathan
[202] and Violi [201] mentioned above would represent a higher level of detail,
as would coarse grained approximations to that work [86]. For a brief discussion
see §1.2.1.
Primary particle model
Here E is the subset of (E × R³)^N with only finitely many non-zero components. A soot particle is described as a sequence (order is not generally significant) of ‘primary particles’, each described by an element of E and some location, e.g. displacement from the first particle in the list. As far as the author is aware, all work
so far has modelled primary particles as spheres of constant density described by
their mass or volume and so used E = [0,∞).
Balthasar et al. [17], building on the work of Mitchell and Frenklach [134]
and Mitchell [135], modelled coagulation by assuming two particles stick rigidly
at the first point of contact and without realignment. They performed surface
growth on each primary particle by integration, with infinitesimal steps, losing
the discrete nature of the surface events. The computational effort required to
simulate this model is great because of the numerical calculations required to find
collision radii for the aggregate particles.
Surface and Volume Model
This model takes E = R+ × R+ and says that a particle is described by x ∈ E if,
and only if, it has volume x1 and surface area x2. Addition is defined on E by
z = x+ y ⇐⇒ z1 = x1 + y1 and z2 = x2 + y2
and hence coagulation is modelled without any coalescence. This model is consid-
ered in chapter 5. It is an intermediate stage between the Primary Particle Model
and the Coalescent Sphere Model. The definition of the g(l) is non-trivial even
with a clear picture of the underlying chemical processes. As discussed in §1.3,
one can use an infinitesimal surface growth approximation [17, 135]. However,
even infinitesimal growth approximations do not avoid the need for a physical
model of the change of particle shape during surface growth and so some possible
models are considered in chapter 5.
Coalescent Sphere Model
The simplest and standard case is when all particles are assumed to be spheres
with a common density. For this take E = R+ and let x be the mass or the
volume of the particle it describes. It is then convenient in the case of soot particles
(provided H and other species may be neglected) to take E = N, where x ∈ E is the number of C atoms in the particle it describes. Addition is defined just as for the
natural numbers: the result of the coagulation of two spheres is a new sphere with
volume equal to the sum of the volumes of the initial spheres, that is, coagulation
is completely coalescent. Except for chapter 5 the coalescent sphere model for
soot particles is used in this thesis.
Different coagulation kernels are appropriate in different conditions. The co-
agulation kernels are also used to model the pyrene condensation and soot incep-
tion processes; for details see chapter 2 and [177]. All the work reported in this
thesis was carried out using the transition regime kernel defined in equation (2.1); the use of more specific kernels was not found to affect the results.
Chapter 2
Simulation at Higher Pressures
In the first paper on DSMC methods (also known as the Direct Simulation Algorithm, DSA) for simulating soot particle populations in flames, Balthasar and Kraft [16] only considered flames at atmospheric pressure. This was so that the
Knudsen number (the mean free path divided by the diameter) of the soot parti-
cles would satisfy Kn > 10, that is, the particles would be small compared to the
mean free path. In this chapter a physical model for coagulation at lower Knudsen
numbers, taken from published literature, is implemented within the DSA. Inclu-
sion of the low Knudsen number model in the algorithm required a generalisation
of the stochastic approach used in the original paper [16], and that generalisation
is presented and tested here. The result of the work reported in this chapter was
to establish a numerical method and a particular computer implementation of it,
which could then be used as the starting point for the work in the rest of the thesis.
The main reason for the use of DSMC methods in the simulation of soot
growth was to handle the non-linear coagulation term defined in (1.5), which
represents the process by which soot particles collide and stick together. As in
the initial paper [16], coagulation is treated as completely coalescent throughout
this thesis with the exception of chapter 5. This means that when two spherical
particles coagulate all the material from both incoming particles is assumed to be
converted into one new spherical particle. Coagulation is sometimes used to refer
to a non-coalescent process, in which the incoming particles initially remain in
point contact; this is the focus of chapter 5.
2.1 Coagulation Kernel
In the physical model used in [16, 92] the probability of coagulation between two
particles in a unit of time is proportional to the coagulation kernel (a symmetric
function of both particles). In [92] the standard coagulation kernels, Kfm for the free molecular regime¹ (Kn > 10) and Ksf for the slip flow and continuum regimes² (Kn < 1), are used and the Fuchs coagulation kernel [62], which is
applicable at all Knudsen numbers, is quoted. In the same paper it is shown that
the Fuchs kernel differs only slightly from one half³ of the harmonic mean of
the slip flow and free molecular regime kernels. The Fuchs and harmonic mean
kernels are considered in more detail in [153] where both are shown to be part of
a more general family of ‘flux matching’ kernels.
The calculations in [92] use the Method of Moments with Interpolative Clo-
sure (MoMIC) [57], so neither the harmonic mean nor the Fuchs kernel can be
used directly because of their relatively complicated form. In [92] Kazakov and
Frenklach adapt the harmonic mean idea to their very fast MoMIC numerical tech-
nique by calculating two coagulation rates, one with the free molecular kernel and
one with the slip flow kernel and taking the harmonic mean of these two rates.
This is different from using the harmonic mean kernel, but they found it worked
well. An important reason for this success is that, as can be seen in the figures
of [92], the free molecular and slip flow kernels become very large outside their
domains of applicability. In a harmonic mean a very large coagulation rate calcu-
lated from an inapplicable kernel has very little effect and so the correct behaviour
is recovered for the slip flow and free molecular regimes.
In this work the harmonic mean approach is applied to the coagulation kernels
¹The form of the free molecular kernel is given in (5.4).
²The form of the continuum kernel, which is very similar to that of the slip flow kernel, is given in (5.3).
³This factor of 2 is not mentioned in the text in either [92] or [62], but the equations in those papers are correct. The factor will not be referred to again here.
and so a transition kernel is defined by

Ktr = (1/Ksf + 1/Kfm)^(−1)  (2.1)

for all Knudsen numbers, in particular the previously inaccessible range 1 ≤ Kn ≤ 10.
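In code, the harmonic mean combination of (2.1) is a one-line function of the two kernel values for a given particle pair. The kernel forms themselves (equations (5.3) and (5.4)) are not reproduced here, so the inputs are simply numbers; this is a sketch, not the thesis implementation.

```python
def transition_kernel(k_sf, k_fm):
    """Harmonic mean of the slip flow and free molecular kernel
    values for one particle pair, equation (2.1)."""
    return 1.0 / (1.0 / k_sf + 1.0 / k_fm)

# The harmonic mean never exceeds the smaller input, so a very large
# value from a kernel used outside its regime has little effect.
assert transition_kernel(4.0, 4.0) == 2.0
assert transition_kernel(1.0, 1e9) <= 1.0
```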
2.1.1 Majorant Kernels
At the heart of the DSMC approach to simulating coagulation is the repeated selection of pairs of particles xi, xj, i ≠ j, from a computational population x1, x2, . . . , xN, which statistically represents the physical system. The selection has to be done so that the probability of a pair being selected is

K(xi, xj) / R,  R = Σ_{i≠j} K(xi, xj),  (2.2)

where K is the coagulation kernel and the inverse of the mean time step length is the sum R.
A naïve approach to this selection will have a run time proportional to N², where N is the size of the computational population. To avoid this rather crippling time cost majorant kernels [49] can be used. A majorant is a function K̂ ≥ K for which the computation of

R̂ = Σ_{i≠j} K̂(xi, xj)  (2.3)

and the inversion for particle selection purposes of the distribution

K̂(xi, xj) / R̂  (2.4)

is relatively fast (run time O(N) or O(N log N)). It is then possible to recover the distribution and rate defined by K by rejecting the selection of a pair of particles
xi, xj with probability

1 − K(xi, xj) / K̂(xi, xj),  (2.5)

which is O(1).
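A minimal sketch of this acceptance-rejection idea, assuming a hypothetical draw_pair routine that samples pairs according to the majorant distribution (2.4); names and signatures are illustrative.

```python
import random

def select_pair_majorant(particles, k_true, k_major, draw_pair):
    """Acceptance-rejection with a majorant kernel K_hat >= K.
    draw_pair must sample (i, j), i != j, with probability
    K_hat(x_i, x_j) / R_hat; rejecting with probability (2.5) then
    recovers the distribution K(x_i, x_j) / R.  In the DSA a
    rejection is a fictitious jump, so time still advances."""
    while True:
        i, j = draw_pair(particles)
        accept = k_true(particles[i], particles[j]) / k_major(particles[i], particles[j])
        if random.random() < accept:
            return i, j
```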
Balthasar and Kraft [16], who only considered the free molecular regime, used the majorant derived in [68], which will be referred to as K̂fm. The very simple form of Ksf means no majorant is needed for the slip flow regime. However,
the slightly opaque form for the transition regime coagulation kernel given by
(2.1) makes it difficult to construct an efficient majorant for the transition regime.
In the problem considered in [68] the kernel could be factorised into two sep-
arate parts for which tight upper bounds were obtainable. No such ‘divide and
conquer’ strategy seems possible for the transition kernel defined in (2.1) so a
more general mathematical approach was adopted. The new approach, of which
majorant kernels are a special case, will be termed ‘majorant rates’ since it is de-
rived by considering the overall coagulation rates rather than the kernel which is
evaluated for pairs of particles.
Define

Rsf = Σ_{i≠j} Ksf(xi, xj),  R̂fm = Σ_{i≠j} K̂fm(xi, xj),  (2.6)

and note that for any a, b > 0,

(1/a + 1/b)^(−1) ≤ min(a, b)  (2.7)

and so using (2.1),

Ktr(xi, xj) ≤ min(Ksf(xi, xj), Kfm(xi, xj)).  (2.8)

Since Kfm ≤ K̂fm the latter may replace the former in (2.8) to give

Ktr(xi, xj) ≤ min(Ksf(xi, xj), K̂fm(xi, xj)),  (2.9)
hence

Σ_{i≠j} Ktr(xi, xj) ≤ Σ_{i≠j} min(Ksf(xi, xj), K̂fm(xi, xj)) ≤ min(Σ_{i≠j} Ksf(xi, xj), Σ_{i≠j} K̂fm(xi, xj)) = min(Rsf, R̂fm).  (2.10)

Therefore, it is convenient to define the ‘majorant rate’

R̂tr = min(Rsf, R̂fm).  (2.11)
This is the rate at which potential coagulation events are generated and, because it is the minimum of two quantities which are easy to calculate using basic majorant kernels, it is suitable for use in speed critical parts of simulations. With this majorant rate the preliminary selection of a particle pair is according to the distribution

Ksf(xi, xj) / R̂tr when R̂tr = Rsf, and K̂fm(xi, xj) / R̂tr otherwise.  (2.12)
In either case the selection only depends on Ksf and K̂fm, to which standard majorant kernel techniques can be applied, see [68] for details.
The preliminary selection is accepted with probability

Ktr(xi, xj) / Ksf(xi, xj) when R̂tr = Rsf, and Ktr(xi, xj) / K̂fm(xi, xj) otherwise  (2.13)
and since Ktr only has to be evaluated for a single pair of particles the computa-
tional cost is not significant.
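The whole scheme of equations (2.11) to (2.13) can be sketched as follows. All argument names are illustrative placeholders for the per-regime machinery of [68], not the thesis's Fortran implementation: k_* are pairwise kernels, r_* the corresponding total rates and sample_* the cheap per-regime pair selectors.

```python
import random

def transition_coagulation_step(particles, k_sf, k_fm, k_fm_maj,
                                r_sf, r_fm_maj, sample_sf, sample_fm_maj):
    """One candidate coagulation event under the majorant rate
    R_hat_tr = min(R_sf, R_hat_fm) of (2.11)."""
    if r_sf <= r_fm_maj:
        i, j = sample_sf(particles)        # preliminary pair, first case of (2.12)
        k_major = k_sf(particles[i], particles[j])
    else:
        i, j = sample_fm_maj(particles)    # preliminary pair, second case of (2.12)
        k_major = k_fm_maj(particles[i], particles[j])
    xi, xj = particles[i], particles[j]
    k_tr = 1.0 / (1.0 / k_sf(xi, xj) + 1.0 / k_fm(xi, xj))  # kernel (2.1)
    if random.random() < k_tr / k_major:   # acceptance per (2.13)
        return i, j                        # perform the coagulation
    return None                            # fictitious jump: advance time only
```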
The case when the preliminary selection is rejected is known as a fictitious jump (as referred to in [16]): time is advanced but no change is made to the computational population. With this extension of the majorant concept the algorithm
set out in [16] is applicable at the higher pressures considered here. Neither majorant kernels nor the extended concept of majorant rates introduced here involve
any approximation in the solution of the model system being solved. The results of Eibeck and Wagner [51] show that, subject to limits associated with machine precision, any desired precision may be achieved by an appropriate choice of numerical parameters. For very high precisions the appropriate choice of numerical
parameters will lead to very large demands, but they will be finite.
An implementation of these techniques using the Fortran 90 programming language is available for download and use under the GNU General Public Licence [29].
2.1.2 Validation
A range of problems were devised for which direct solution using an ODE solver
was possible (see §1.3.1). These problems were used to test the implementation
of the stochastic method by comparing its solutions with those obtained from
the direct ODE solver. Results are presented for two of these test problems—
a simulation of pure coagulation from a monodisperse initial particle population
(6.56×10¹¹ particles per cm³ of diameter 0.88 nm) in air and one with no particles at time 0, but with 0.88 nm particles being incepted at a rate per unit volume of

2.63×10¹³ ((0.05 − t)/0.05)² cm⁻³ s⁻¹
at time t. The other details of the test problems were:
• Particles coalesced immediately to form spheres upon collision;
• Particle density of 1.8 g cm−3;
• Constant temperature and pressure of 500 K and 600 bar respectively;
• Approximate range of Knudsen numbers: 5–20;
• No chemical reactions on particle surfaces considered.
[Figure 2.1 comprises two panels, ‘Coagulation only’ and ‘Nucleation and coagulation’, each plotting number density / cm⁻³ against particle diameter / nm for the stochastic and LSODE solutions at 0.02 s and 0.05 s.]

Figure 2.1: Size distributions calculated with ODE solver and stochastically.
Figure 2.1 shows a very encouraging degree of agreement between the re-
sults produced by the stochastic code using the method outlined above and direct
solutions calculated with the Livermore ODE solver (LSODE) [163]. Since the
number of particles in each size class is predicted correctly this is strong numer-
ical evidence for the validity of the new majorant approach and indeed the com-
plete simulation algorithm. The mathematical properties of the simulation results
are unaffected by the extension of the majorant approach. While majorants are
practically essential, they are no more than computational conveniences and the
simulated coagulation rate does not depend on the majorant.
ODE solvers such as LSODE cannot be used to obtain particle distributions
such as those plotted in figure 2.1 for real flames because they have run times that
scale quadratically with the number of particle sizes considered. In the test problems shown the number of particle sizes was restricted to 2000; for real flames millions of different sizes would have to be considered and run times on the desktop computers (CPU speeds around 2 GHz) available for this study would have been measured in years!
2.2 Application to Flames
Results for two 10 bar laminar, premixed ethylene flames JW10.68 and JW10.60
[1, 93] are shown in figure 2.2⁴ along with results from the MoMIC calculations
reported in [8] and experimental data for the soot volume fraction fv also from
[8]. The same data for the 1 bar laminar, premixed ethylene flame XSF1.78 [230]
is also shown. All three sets of stochastic calculations made use of the harmonic
mean kernel (2.1) and the majorant approach of §2.1.1. The data in figure 2.2
shows the stochastic algorithm produced values within a factor of 2 of those gen-
erated by the established MoMIC technique. The soot model parameters used in
the simulations are the values reported in [8] as optimal (for the MoMIC).
From figure 2.2 one sees that the physical model leads to results that differ no-
ticeably from the observed data and that this is true whichever numerical method
⁴Figure 2.2 is courtesy of Dr J Singh.
[Figure 2.2 comprises three panels, one per flame (JW 10.60, JW 10.68 and XSF 1.78), each plotting number density / cm⁻³ and soot volume fraction against time / s for the stochastic, MoM and experimental data.]

Figure 2.2: Comparison of calculations and observations for flames.
is used for the calculations. This deviation illustrates very clearly the limitations
of the current understanding of soot formation mechanisms. As discussed in the
introduction, kinetic data for gas phase reactions are not precise and the processes
on the surface of soot particles are only just beginning to be modelled in detail. In
particular, the surface activity of soot particles is known to decrease as particles
move through flames, but no physical model is available and some sort of corre-
lation with size [8] or age [178] has to be used instead. Also, while the MoMIC
and the stochastic data are qualitatively the same and close on a log scale, there
are some differences. Some divergence was to be expected because DSA requires
rates to be expressed as functions of the properties of individual particles, whereas
MoMIC ultimately requires all rates to be expressed in terms of the lower integer
order moments of the particle mass distribution, and these two requirements are
not entirely compatible.
Computationally MoMIC is extremely cheap and only the very recent emer-
gence, after this work was completed, of the DQMoM (see §1.3.2) has raised
questions about its position as the method of choice for incorporation in larger
simulations. However MoMIC, by its nature, cannot produce the full size distri-
butions of particle populations such as are plotted in figure 2.1, and which are
likely to be of considerable interest to manufacturers of carbon black and to those
seeking to understand pollution from diesel engines.
The scope for improvements in the physical model of the soot growth pro-
cess is also clear and the stochastic Direct Simulation techniques outlined here
and by Balthasar and Kraft [16] (the paper this work extends) offer considerable
advantages for those working on such models. The most important of these is
that arbitrarily precise solutions of the model equations are possible [51]. In ad-
dition the convergence result in [51] shows that by varying the parameters of the
numerical method (not the physical model) one can estimate the numerical error
and then control it. Secondly the representation of the particle distribution by a
sample of particles means that simulation of processes at the individual particle
level is possible. Interactions between particles and the surrounding system may
be simulated using the kinetics for those particular particles rather than average
quantities as in moment based methods, see for example [178]. Within the DSA
framework extremely complex descriptions of the internal particle structure can
also be used, e.g. [17]. However, this power of the DSMC framework comes at a cost which can be very noticeable: if a flame has many physical events involving soot particles then a simulation of the flame will be slow. Details of the problem
and strategies for dealing with it form a considerable part of this thesis, but first
some more detail about the basic computational implementation is given.
2.3 Implementation of DSA
The key program features introduced here are used in all the Monte Carlo simula-
tions reported in this thesis. Some detail about the binary tree is given because of
differences from the implementation outlined by Wells and Kraft [213].
2.3.1 Variation in Number of Particles
In a direct computer simulation of a chemical system where the number of computational particles may increase and decrease, one has to bound the number of these particles being tracked. In soot formation problems, decreases in particle count are mainly due to coagulation. Increases in particle count are the result
of particle inception. One approach to controlling the number of computational
particles is given by Smith and Matsoukas [181], in which a constant number is
maintained by resampling the population every time the number of computational
particles changes. Alternatively one can allow the number of computational parti-
cles to drop below 50% of the program capacity and then duplicate all the particles
[110, 114]. A discussion of this issue may be found in the introduction of Zheng
[243].
Particle doubling has the attraction of preserving all the statistical properties
of the computational population, which is not generally possible when resampling
multiplies the number of computational particles by a non-integer factor. During
the work reported here a constant computational particle number algorithm was
tried and found to have the expected, accurate mean behaviour. However, the
constant computational particle number method led to a much greater statistical
uncertainty (by around 1 order of magnitude in the first few moments of the sim-
ulated distribution) in the results compared with using particle doubling. This is
consistent with the results of Maisels et al. [114]; therefore only particle doubling
was used for the remainder of the work. In the case where the computational
particle number rises beyond the upper limit set for a computation any rescaling
has to be done by a non-integer factor (since it must be strictly between 0 and 1).
For handling this case it is convenient to resample as soon as the computational
particle limit is exceeded by one since there are no benefits in allowing a larger
increase.
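A minimal sketch of this bookkeeping, assuming particles are simple values (real particle objects would need copying on duplication) and that an overflow is resampled back to exactly the program capacity; the function name is illustrative.

```python
import random

def maintain_count(particles, volume, capacity):
    """Keep the computational particle count within bounds.
    Doubling below half capacity preserves all statistical properties
    of the population; resampling on overflow necessarily scales by a
    non-integer factor."""
    if len(particles) < capacity // 2:
        particles = particles + [p for p in particles]  # duplicate every particle
        volume *= 2.0                                   # keep number density fixed
    elif len(particles) > capacity:
        scale = capacity / len(particles)               # strictly between 0 and 1
        particles = random.sample(particles, capacity)  # uniform down-sample
        volume *= scale                                 # keep number density fixed
    return particles, volume
```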
2.3.2 Binary Tree
The basic algorithm for the simulation of particle formation and growth is de-
scribed in [68, 177]. Unlike in those papers, a binary tree method was used here
for particle selection according to the distributions defined in (2.12), as this en-
abled the use of more complex soot models.
The ensemble of stochastic particles was stored in a list connected to a binary
tree. The tree provided a convenient method for selecting particles according
to arbitrary probability distributions. In an earlier version, described briefly by
Wells and Kraft [213], accumulated error problems had to be explicitly addressed.
The version presented here avoids these problems at the cost of using double the
amount of memory, but since the program has only modest memory requirements
this does not cause any difficulties.
Figure 2.3 shows a binary tree along with the numbering scheme used for the
nodes. Each node is shown as a pair of adjoining boxes and, except for nodes on
the bottom level, each has two ‘children’ beneath it to which it is joined by lines.
The figure should also make clear the concept of levels. The tree shown has 4:
the maximum number of nodes on the path from the top of the tree (also called
the root) to any point on the bottom (called a leaf). In this section the number of
levels in a tree will be denoted by l. The numbering scheme is not fundamental
to the operation of the tree but having some simple scheme is necessary for implementation purposes.

[Figure 2.3 shows a 4 level binary tree: each node i carries a pair of values F_{i,left} and F_{i,right}, and beneath the leaves sits the particle list with weights f_1, . . . , f_16.]

Figure 2.3: 4 level binary tree

The advantage of this scheme is that for a node numbered i > 1 it is easy to calculate the number of its parent as ⌊i/2⌋, the integer part of i/2. The children (if they exist) of a node numbered i are therefore numbered 2i and 2i + 1.
Suppose one has a tree with l > 1 and a list of fewer than 2^l stochastic particles numbered 1, 2, . . . , m; associated with the particle numbered i is a weight fi. Suppose further one wishes to select a particle at random from the list in such a way that the probability of selecting particle i is

fi / Σ_{i=1}^{m} fi.  (2.14)

The tree, as shown in figure 2.3, should be initialised from the bottom up in the following way, starting with the bottom level (nodes numbered 2^(l−1) to 2^l − 1):

F_{2^(l−1)+⌊(i−1)/2⌋, left} = fi for i ≤ m, i odd
F_{2^(l−1)+⌊(i−1)/2⌋, right} = fi for i ≤ m, i even
F_{2^(l−1)+⌊(i−1)/2⌋, left} = 0 for m < i ≤ 2^l, i odd
F_{2^(l−1)+⌊(i−1)/2⌋, right} = 0 for m < i ≤ 2^l, i even

and then working upwards level by level setting

F_{j,left} = F_{2j,left} + F_{2j,right}
F_{j,right} = F_{2j+1,left} + F_{2j+1,right}.
A particle i is said to be under a leaf node j and to its left if fi is assigned to Fj,left
in the above procedure; ‘under to the right’ is to be understood analogously.
A tree constructed in this way has the property that, for j < 2^(l−1), F_{j,left} and F_{j,right} are the relative probabilities according to the distribution in (2.14) of selecting a particle under one of the descendants of j via its left or right hand child
respectively. The algorithm for particle selection is therefore simple:
1. j ← 1
2. Generate a U (0, Fj,left + Fj,right) random variable, r.
3. If node j is a leaf, break out of the loop by going to stage 6
Else continue
4. If r < Fj,left choose the left child and so
j ← 2j
Else choose the right child so that
j ← 2j + 1
5. Loop back to stage 2
6. If r < Fj,left select the particle under node j and to the left
Else select the particle under node j and to the right
Generating one random number is sufficient; if at stage 4 one selects the right
hand branch one can simply update r by r ← r − Fj,left before updating j and
use the new value of r in place of the random variable that would be generated in
stage 2.
Obviously having to completely rebuild the tree each time one wants to select
a particle would make this a very inefficient algorithm. However, once a tree
has been initialised, if one of the fi changes or an extra particle is added without
exceeding the capacity of the tree (2l particles), the tree can be updated simply by
recalculating the values of F on the path from the leaf under which the change has
occurred to the root. It is generally necessary to maintain binary trees for several
different distributions simultaneously. This was done by using vector valued f
and F .
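The construction, selection and update steps described above can be collected into a small class. This is a sketch following the description in this section, not the thesis's Fortran implementation; it stores scalar weights rather than the vector valued f and F mentioned above, and the class name is illustrative.

```python
class SelectionTree:
    """Weighted particle selection via a binary tree (cf. section 2.3.2).
    Nodes are numbered from 1; node j has children 2j and 2j + 1 and
    parent floor(j/2).  Each node j stores the pair (F_j,left, F_j,right)."""

    def __init__(self, levels, weights):
        self.l = levels
        self.cap = 2 ** levels                    # maximum number of particles
        self.f = list(weights) + [0.0] * (self.cap - len(weights))
        self.F = [[0.0, 0.0] for _ in range(self.cap)]   # index 0 unused
        for i in range(1, self.cap + 1):          # fill the bottom level
            leaf = 2 ** (levels - 1) + (i - 1) // 2
            self.F[leaf][(i - 1) % 2] = self.f[i - 1]
        for j in range(2 ** (levels - 1) - 1, 0, -1):    # then work upwards
            self.F[j] = [sum(self.F[2 * j]), sum(self.F[2 * j + 1])]

    def select(self, r):
        """Descend the tree with a single uniform draw r on
        [0, total weight) and return a particle index (1-based)."""
        j = 1
        while j < 2 ** (self.l - 1):              # stop at a leaf
            if r < self.F[j][0]:
                j = 2 * j                         # left child
            else:
                r -= self.F[j][0]
                j = 2 * j + 1                     # right child
        i = 2 * (j - 2 ** (self.l - 1)) + 1       # left particle under leaf j
        return i if r < self.F[j][0] else i + 1

    def update(self, i, w):
        """Change the weight of particle i, repairing only the path
        from its leaf to the root."""
        self.f[i - 1] = w
        j = 2 ** (self.l - 1) + (i - 1) // 2
        self.F[j][(i - 1) % 2] = w
        while j > 1:
            self.F[j // 2][j % 2] = sum(self.F[j])
            j //= 2
```

Both `select` and `update` touch one node per level, giving the O(l) cost per event discussed in the next section.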
2.3.3 Complexity of the implementation
For a binary tree with l levels (capacity 2l particles) each event requires O (l)
operations. One has to descend the tree to select a particle then ascend updating
with the results of the event.
The number of events scales linearly with the sample volume, V, as does the number of particles. Hence for reasonable choices of V and l (those for which the
full capacity of the tree is used) a simulation will need a tree withO (log V ) levels
and will have a run time that is O (V log V ).
The performance of a simulation program, which used a binary tree as de-
scribed above, was measured for different sample volumes and computational
particle numbers. In figure 2.4 run times can be seen to scale as n log n in the
Table 2.1: Scaling of run times with tree depth

log2(max particles)   run time / s
12                    113
13                    240
14                    544
15                    1224
16                    2730
17                    5724
maximum number of computational particles and, since the sample volumes were
chosen to be proportional to n, these results are consistent with the complexity
derived above.
2.3.4 DSA Applied to Flames
This section focuses on computations for two premixed laminar ethylene flames
studied in [93]: JW1.69 and JW10.68. The first is a 1 bar⁵ flame with C/O ratio
0.69 and a peak soot volume fraction of approximately 2 × 10−8, the second a
10 bar flame with C/O ratio 0.68 and a peak soot volume fraction of approximately
2× 10−6. These soot volume fractions are at the extremes of the range studied in
[93].

⁵All pressures reported in this thesis are absolute pressures.
Simulations were performed using different numbers of computational parti-
cles and varying numbers of repetitions. These gave some idea of the (generally
small) systematic error in moments 0 to 3 of the final size distribution due to using
a low number of computational particles, and of the number of repetitions needed to
control the statistical noise. Some sample results are given in table 2.2. The initial sample volume used in the simulations is denoted by "vol"; "m2" refers to the second moment of the mass distribution; the uncertainty is the half width, using a central limit theorem estimate, of the 95% confidence interval for the mean of the distribution being sampled (that of m2).
Table 2.2: Illustrative results

flame     vol / cm³      tree depth   repetitions   m2 uncertainty   time / min
JW1.69    3.5×10⁻⁸       13           10            2.3%             50
JW1.69    4.375×10⁻⁷     10           30            6.8%             16.5
JW10.68   2.5×10⁻⁷       11           20            1.5%             4020
JW10.68   1.25×10⁻⁷      10           20            2.9%             1534
The run times for JW10.68 indicate that DSA, as implemented, is not a very
practical tool for investigating sooty flames on a desktop computer. In order to
inform attempts to reduce the run times, execution time profiling was carried out
to identify the bottlenecks in the program.
2.3.5 Profiling of DSA Simulation
Table 2.3 shows the number of each type of event performed during 10 runs of
a simulation of the premixed flame JW1.69 (see [93]) with 16 tree levels and an
initial sample volume of 2.8 × 10−7 cm3. JW1.69 was used rather than JW10.68
because profiling leads to a very large increase in run times and so was not fea-
sible for the JW10.68 case. Unlike for other surface reactions, the rate of pyrene
condensation does not have a simple dependence on particle surface area. Pyrene
condensation was therefore handled separately in the simulations so that the relevant rate models could be properly implemented.

[Figure 2.4 plots run time / log(max computational particles) against max computational particles / 1000.]

Figure 2.4: Simulation run time scaling with binary tree

Table 2.3: Relative frequency of stochastic events in JW1.69

time steps                     1.25×10⁹
surface events (not pyrene)    1.20×10⁹
pyrene condensation            4×10⁷
coagulation                    5×10⁶
inception                      4×10⁶

The main conclusion to be
drawn from table 2.3 is that essentially all the events are surface events and that
there are few pyrene events compared to the other events. One may also note that coagulation events are even rarer than pyrene events.
Figure 2.5 shows how this breaks down in computational time. The data were
obtained over 5 runs simulating JW1.69 with a tree depth of 12 and an initial
sample volume of 1.75×10⁻⁸ cm³.

[Figure 2.5 breaks the execution time down as: calculating stochastic step length 40%; performing surface events 31%; other time step overheads 17%; selection of event 7%; unclassified 3%; pyrene events 2%.]

Figure 2.5: % of execution time spent on different tasks (DSA)

From figure 2.5 one sees that 64% of the computational effort is consumed in calculating the random time steps. These
parts of the program are key to the simulation of a non-linear coagulation process,
but are not fundamental for the surface processes. The surface growth of a single
particle is simply a non-homogeneous Poisson process until the next time that particle is involved in a coagulation; conditional on the coagulation time, it does not depend on any other particles. However, most of the time steps
reported in table 2.3 are concerned with the surface processes and not coagulation.
A considerable amount of the 31% of time spent on “performing surface events” is
also attributable to the binary tree structure necessary for simulating coagulation.
Alternative ways of incorporating the surface area processes into simulations are
therefore investigated in the following chapters.
Chapter 3
Accelerating the Surface Processes
The previous chapter outlined the extension of the DSMC method known as DSA
into a general soot simulator. This chapter and the next describe work done to
make the DSA a practical tool for use on a desktop PC. The focus in this chapter is on two ways in which the inherently random nature of the surface processes, which take up so much of the computation time in the basic DSA as introduced in chapter 2, can be preserved while reducing their computational demands. The
possibility of a deterministic approximation to these processes will be investigated
in chapter 4.
3.1 Operator Splitting
In (1.4) the evolution of the particle distribution is seen to be driven by three kinds
of process:
1. Coagulation—two particle events
2. Single particle events
(a) Surface area processes
(b) Pyrene condensation
3. Particle inception—involves no pre-existing particles
From the analysis in chapter 2 it is clear that most of the time taken in a simulation
is spent updating the binary tree after events under 2(a). Operator splitting [69]
is a numerical approach to solving differential equations that allows for the intro-
duction of different methods to account for the effects of single particle events.
3.1.1 Mathematical Outline of Splitting
Consider an ordinary differential equation (ODE) with a source term that can be
expressed as the sum of two operators A and B, for example,
\[ \frac{d}{dt} f = (A+B)\,f, \qquad f(0) = f_0. \tag{3.1} \]
Assume further that (3.1) has a unique, well behaved solution. In this case the
simplest way to compute the solution over time is based on the idea that for small
h
\[ f(t+h) - f(t) \approx h\,(A+B)\,f(t). \]
This is the basis of DSA where all the different processes are simulated simultan-
eously—the jump rate of every process depends on the current state of the particle
ensemble.
However, at the cost of introducing a further approximation and by assuming
that the resulting ODEs have well behaved solutions, one can define auxiliary
functions f1, f2 and advance the solution over a small time interval [t0, t0 + δt]
in two stages, using whatever numerical schemes are appropriate for each stage.
First solve
\[ \frac{d}{dt} f_1 = A f_1, \qquad f_1(t_0) = f(t_0) \tag{3.2} \]
and then
\[ \frac{d}{dt} f_2 = B f_2, \qquad f_2(t_0) = f_1(t_0 + \delta t). \tag{3.3} \]
The key point is that completely different numerical methods may be used to solve
(3.2) and (3.3).
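For a scalar caricature of (3.2) and (3.3), in which A and B are just multiplication by constants a and b (an illustrative assumption, not taken from the simulation code; under it each stage can be solved exactly and the splitting happens to introduce no error), the two-stage scheme can be sketched as:

```python
import math

def lie_split(a, b, f0, t_end, n_steps):
    """First order (Lie) splitting for df/dt = (a + b) f: each step
    advances the A part exactly, then the B part exactly, mirroring
    the two stages (3.2) and (3.3)."""
    h = t_end / n_steps
    f = f0
    for _ in range(n_steps):
        f *= math.exp(a * h)  # stage 1: df1/dt = A f1 over [t, t + h]
        f *= math.exp(b * h)  # stage 2: df2/dt = B f2 over [t, t + h]
    return f

# scalar operators commute, so the split solution matches f0 * exp((a + b) t)
approx = lie_split(0.3, -0.1, 2.0, 1.0, 10)
exact = 2.0 * math.exp(0.2)
```

For non-commuting operators, such as coagulation against surface growth, the same loop structure applies but the splitting incurs an O(h) error per unit time.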
3.1.2 Implementation of Operator Splitting
Equation (1.4) was split into two parts by handling OH oxidation and acetylene ad-
dition (sometimes also the insignificant process of O2 oxidation) separately from
all the other processes, which are simulated in exactly the same way as in the
DSA. Typically this splitting was performed over time intervals of a length such
that each particle would, on average, undergo a few surface reactions during a
splitting step. The choice of this average number was the means of determining
the length of the splitting step and thus the accuracy and computational speed of
the approximation.
The steps in advancing a simulation over times t from t0 to t1 are¹:

1. t ← t0, t2 ← t0

2. WHILE(t < t1)

(a) Choose a step length Δt and set t2 = t + Δt. In this work Δt was chosen as the time in which n × r events due to the split processes would be expected under DSA. The number of computational particles is n and r is an arbitrary positive parameter, which is chosen to optimise accuracy and computational speed. Various conditions may lead to large values of Δt being calculated; these must be reduced.

(b) Perform the unsplit processes including coagulation from t to t2 exactly as for DSA using the binary tree.

¹ t2 and t3 are auxiliary times
(c) Extract a list of particles from the binary tree.
(d) WHILE(t < t2)
i. Choose a secondary time step length δt such that the chemical
conditions vary only slightly over the time interval [t, t+ δt] and
t+ δt ≤ t2.
ii. t3 ← t+ δt
iii. Simulate the split processes up to t3 for each particle. Methods
for doing this are discussed below.
iv. t← t3
v. Return to top of the loop—2d
(e) Put the updated computational particles back into the binary tree. (One
should now have t = t2.)
(f) Return to the top of the loop—2
3. Stop
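The steps above can be sketched as follows; this is an illustrative skeleton only, with particles reduced to bare masses, hypothetical stand-ins rate_of and grow for the split-process rate and the single-particle update of step 2(d), and the unsplit DSA step 2(b) left as a stub:

```python
def split_driver(masses, t0, t1, r, rate_of, grow):
    """Skeleton of the splitting loop of section 3.1.2 (illustrative only)."""
    t = t0
    while t < t1:
        n = len(masses)
        total = sum(rate_of(x) for x in masses)
        # 2(a): step length in which about n * r split events are expected,
        # capped so that the loop terminates exactly at t1
        dt = min(n * r / total, t1 - t) if total > 0 else t1 - t
        # 2(b): unsplit processes (coagulation, inception) via DSA -- stubbed
        # 2(c)-(e): advance the split processes on each particle independently
        masses = [grow(x, dt) for x in masses]
        t += dt
    return masses

# usage: unit rate per particle and growth of dt mass units per step
final = split_driver([1.0, 2.0], 0.0, 1.0, r=0.5,
                     rate_of=lambda x: 1.0, grow=lambda x, dt: x + dt)
```

The point of the structure is that grow can be any single-particle method ('pp', 'pv', or a deterministic update) without touching the coagulation machinery.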
Simulating Split Events for One Particle
As a first step, the method used in DSA was applied to individual particles. The
inhomogeneous Poisson process referred to in §2.3.5 was generated by repeatedly
calculating the exponentially distributed waiting times and performing events at
their end points. This method is referred to as ‘pp’ (Poisson Process) in later
sections.
In order to reduce further the computational effort a second level of splitting
was introduced. By dealing with the split processes one by one and making a
further approximation, it is only necessary to generate one random variable per
process per particle as follows:
1. Calculate the rate for the selected process at the start of the interval.
2. Assume this rate applies throughout the interval (e.g. ignoring the effects of
the particle size changing) and so calculate the expected number of events
λ.
3. Generate a random variable with the Poisson (λ) distribution as the num-
ber of events which actually occur and update the computational particle
accordingly.
This method is referred to as ‘pv’ (Poisson Variable) in later sections.
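The difference between the two single-particle methods can be sketched with a constant event rate (an illustrative comparison, not the thesis code; with a genuinely constant rate the two methods are statistically identical, and they differ only when events change the rate by changing the particle):

```python
import random

def count_pp(rate, dt, rng):
    """'pp': simulate the Poisson process event by event using
    exponentially distributed waiting times."""
    t, events = 0.0, 0
    while True:
        t += rng.expovariate(rate)
        if t > dt:
            return events
        events += 1

def count_pv(rate, dt, rng):
    """'pv': freeze the rate over the interval and draw one Poisson
    variate (sampled here by counting unit-rate exponential waits)."""
    lam, t, events = rate * dt, 0.0, 0
    while True:
        t += rng.expovariate(1.0)
        if t > lam:
            return events
        events += 1

rng = random.Random(1)
runs = 20000
mean_pp = sum(count_pp(5.0, 2.0, rng) for _ in range(runs)) / runs
mean_pv = sum(count_pv(5.0, 2.0, rng) for _ in range(runs)) / runs
# both estimate the expected event count rate * dt = 10
```

The computational saving of 'pv' is that only one random variable is drawn per process per particle, however many events occur.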
Complexity
The split surface processes were performed without using the binary tree; this avoids the O(log n) update operations (n is the number of stochastic particles, which should be roughly the same as the capacity of the binary tree) after each event that were slowing the simulations down. Instead, the surface reactions were performed as an O(1) operation on each of the n ∼ O(V) particles individually for the full length of the splitting time step, after which the binary tree was completely rebuilt, which takes O(n) ∼ O(V) time.
Therefore the surface processes, which previously had overall complexity
O (V log V ) in the sample volume V , now only had complexity O (V ), which
is an improvement over the complexity reported in §2.3.3. Since the surface pro-
cesses dominate the simulation (see table 2.3), this should reduce the apparent
complexity of the simulation.
3.2 Numerical Results with Splitting
3.2.1 Comparison of JW1.69 Moments
Some results, which were generated using the implementations of operator split-
ting described above, are presented for JW1.69 (one of the laminar premixed
flames studied in [93]). They are compared to DSA results generated as described
in chapter 2. JW1.69 was used because it provided a realistic test case that could
be simulated quickly and also for historic reasons.
Table 3.1 gives a summary of the simulations performed and the computer
time required for their calculation. All timed simulations were performed using a
binary tree of depth 13 and with an initial sample volume of 3.5 × 10⁻⁸ cm³. The computer used had a 2.4 GHz Pentium 4 CPU, except in the cases marked with '†', which indicates a 2 GHz Athlon CPU that gave almost identical performance on a test case performed on both machines.
Table 3.1: Detail of operator splitting runs for JW1.69
algorithm   r    time for 10 repetitions / min
DSA         0    50
pv          1    30
pv          2    16
pv          5    8
pv          5    8†
pv          10   5.5†
pp          5    12.5†
pp          10   10†
With the ‘pv’ algorithm one sees a considerable speed increase moving from
r = 0 (DSA) to r = 5 with speed being almost linear in r, but changing r from
5 to 10 makes much less difference to the execution time. One therefore expects
the bottleneck to be the inner loop of the algorithm (see §3.1.2). The 'pp' splitting
leads to longer run times than the ‘pv’ because each surface event is handled
individually.
However, speed is not everything; one has also to consider the error introduced by the splitting approximation, so in figure 3.1 the time evolution is shown for the second moment of the particle distribution in a small volume of gas passing through the flame. To show the error introduced by the splitting, results obtained using DSA with 30 repetitions, a binary tree depth of 17 and an initial sample volume of 5.6 × 10⁻⁷ cm³ are also plotted. The numerical parameters used to generate the DSA results were chosen to make computational errors negligible on the scale shown. Figure 3.1 shows that the splitting only introduces a very modest
60
0
0.5
1
1.5
2
2.5
0
0.2
0.4
0.6
0.8
1
1.2
1.4
0 0.02 0.04 0.06 0.08 0.1 0.12
dsa 17
pv 2
pv 10
pp 10
parti
cle
num
ber d
ensi
ty /
1011
cm
-3second m
ass mom
ent / 10-24 g
2cm-3
time / s
Figure 3.1: Number density and second mass moment for JW1.69
61
error in the second moment of the distribution, at least for r ≤ 5, compared to the
high precision DSA. For lower order moments the error is smaller.
3.2.2 Particle Distribution Accuracy
An advantage of direct simulation is that one gets a complete description of the
particle size distribution. Particle size distributions are presented here for the
flame JW10.68 (see [93]). For comparison with table 2.3 an event count for
JW10.68 is given in table 3.2. These are the number of events in 20 repetitions of the simulation, with a tree depth of 10 and an initial sample volume of 1.25 × 10⁻⁹ cm³.
Table 3.2: Relative frequency of stochastic events in JW10.68
time steps                    2.06 × 10⁹
surface events (not pyrene)   2.05 × 10⁹
pyrene condensation           1 × 10⁷
coagulation                   4 × 10⁵
inception                     2 × 10⁵
JW10.68 is a much sootier flame than JW1.69 so it provides a test of the method in a situation where soot is really significant and, as can be seen, it is also far more computationally demanding. Table 3.3 gives details of simulations performed using a binary tree depth of 11 and an initial sample volume of 2.5 × 10⁻⁹ cm³ on the PCs described above.
The acceleration due to splitting is much larger for JW10.68 than for JW1.69
(see table 3.1)—a factor of 30 compared to one of 6 for DSA to ‘pv’ r = 5.
Further increases in r were found to offer much smaller speed gains with the ‘pv’
algorithm, while the pp algorithm times appear to have only a weak dependence
on r, suggesting the time limiting step is the event by event simulation of the split
processes.
Figure 3.2 shows the particle distributions obtained from some of the simu-
lations described in table 3.3. Also shown is the average from a set of 10 high
Table 3.3: Detail of operator splitting runs for JW10.68
algorithm   r    time for 10 repetitions / min
DSA         0    4020
pv          5    134
pv          10   86
pp          20   250
pp          50   235
precision DSA runs using a binary tree depth of 14; these took 13 days of PC time to generate! This last set of data is shown so that any error introduced by the splitting procedure can be seen.
3.2.3 Profiling of Operator Splitting
Operator splitting yields a considerable acceleration over DSA (see table 3.3), but the computation times are still substantial and would be noticeable even with 1000 stochastic particles (tree depth of 10). Therefore it makes sense to repeat
the analysis of §2.3.5 and see if further improvements can be made. Figure 3.3
shows how the computation time is used during a ‘pv’ algorithm run with the
parameters used for the DSA profiling in §2.3.5. “Poisson processes” refers to the
approximation of the surface event Poisson processes by Poisson random variables
for each particle during the splitting steps.
From these data one can see that simulating the surface reactions is still by
far the most computationally intensive task in the simulation even for JW1.69,
a flame with relatively little surface growth compared to a sooty flame such as
JW10.68. Note also that a substantial amount of time is spent approximating the
Poisson processes that represent the surface reactions during splitting. A further
approximation is therefore developed in §3.3.
Figure 3.2: Particle size distribution at end of flame JW10.68; number density (cm⁻³) against sphere equivalent diameter (nm), comparing the reference, DSA, pv r = 5 and pv r = 10
Figure 3.3: Percentage of execution time spent on different tasks with 'pv' method (Poisson processes 40%; rebuilding binary tree 30%; other (splitting related) 20%; pyrene events 5%; unclassified 5%)
3.2.4 Conclusions on Operator Splitting
Operator splitting using the ‘pp’ and ‘pv’ methods provides a substantial perfor-
mance gain from DSA, in the most striking case reducing a run time by a factor
of almost 10. The approximations made to achieve this acceleration have a small
effect on the results and a particular consequence of this is that the approximation
error does not depend very strongly on the size of the time splitting steps (within
the range of values considered). Compared with the uncertainties in the simulation
of the gas phase chemistry (see for example Smooke et al. [182]), the systematic
error should not be a cause for concern.
3.3 Deferment of Surface Reactions
A new approach² was to tag each stochastic particle with the time when it is added to the ensemble so that (stochastic or computational) particles are described by elements of the product space E × [0, ∞). Some surface reactions (possibly now

² Due to a suggestion of Prof. J. Norris of the Statistical Laboratory, University of Cambridge.
including pyrene condensation) can be selected for 'deferment'; such reactions are completely neglected as the simulation calculates time steps. Once a particle
has been selected for an event, the deferred events are simulated from the time
recorded in the tag up to the current time; only then does the non-deferred event
take place. The tag is then reset to the current time so that the procedure can be
repeated.
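In outline, the tagging and replay idea might be sketched as follows (a toy in which the particle state is a bare mass and the deferred growth is collapsed to an assumed constant mean rate; the names Particle and replay_deferred are illustrative, not taken from the thesis code):

```python
class Particle:
    """Computational particle tagged with the time of its last full
    update, i.e. an element of E x [0, infinity); the state is
    reduced to a bare mass for this sketch."""
    def __init__(self, mass, tag):
        self.mass = mass
        self.tag = tag

def replay_deferred(p, t_now, growth_rate):
    """Simulate the deferred surface growth from p.tag up to t_now,
    then reset the tag so the procedure can be repeated."""
    p.mass += growth_rate * (t_now - p.tag)  # deferred events, lumped
    p.tag = t_now
    return p

# a particle left untouched since t = 0 is brought up to date only when
# it is selected for a non-deferred event at t = 0.25
p = Particle(mass=1.0, tag=0.0)
replay_deferred(p, t_now=0.25, growth_rate=4.0)
# the non-deferred event (e.g. a coagulation) would now be performed on p
```

In LPDA proper the replayed jumps are of course simulated stochastically; the point of the sketch is only the tag-and-replay bookkeeping.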
3.3.1 Measure Theoretic Formulation
In order to derive the generator for a stochastic jump process implementing this idea it is helpful to work in terms of measures, replacing
\[ n : [0,\infty) \times E \to [0,\infty) \]
by a family of measures giving the computational particle distribution on E at each time.

To do this, endow E with a σ-algebra $\mathcal{E}$ that makes the $g^{(l)}$, the $\beta^{(l)}_t$, $K_t$, $+$ and $I(t,\cdot)$, as defined below, measurable. The $\mathbb{R}$-valued functions $K_t$ and $\beta^{(l)}_t$ are extended to E by saying they ignore the time stamp component; $g^{(l)}$ and $+$ (strictly $g^{(l)}_t$ and $+_t$) by saying the time stamp of their result is the current time.

Let M be the space of measures on E with the weak topology and the associated Borel σ-algebra $\mathcal{M}$. The particle distribution is described by a mapping from time to M:
\[ \lambda : [0,\infty) \to M; \qquad t \mapsto \lambda_t = \lambda(t,\cdot). \]

Thus for $A \in \mathcal{E}$ the number of particles per unit volume described by elements of A at time t will be $\lambda_t(A)$.

The particle inflow rate will be given by a similar map
\[ I : [0,\infty) \to M; \qquad t \mapsto I_t = I(t,\cdot). \]
The rate of inflow of particles with descriptions in A at time t is $I_t(A)$.
Equation (1.4) can now be written for $A \in \mathcal{E}$ as
\[
\begin{aligned}
\frac{d}{dt}\lambda_t(A) = {}& \int_{x\in A} I_t(dx) \\
&+ \sum_{l\in U}\left[\,\int_{x\in E:\, g^{(l)}(x)\in A} \beta^{(l)}_t(x)\,\lambda_t(dx) - \int_{x\in A} \beta^{(l)}_t(x)\,\lambda_t(dx)\right] \\
&+ \frac{1}{2}\int_{y,z\in E:\, y+z\in A} K_t(y,z)\,\lambda_t(dy)\,\lambda_t(dz) - \int_{x\in A,\,y\in E} K_t(x,y)\,\lambda_t(dx)\,\lambda_t(dy).
\end{aligned}
\tag{3.4}
\]
Hence for $\phi$ in a suitable class of test functions C one has a weak form (see e.g. [51]):
\[
\begin{aligned}
\frac{d}{dt}\int_{x\in E} \phi(x)\,\lambda_t(dx) = {}& \int_{x\in E} \phi(x)\,I_t(dx) \\
&+ \sum_{l\in U} \int_{x\in E} \beta^{(l)}_t(x)\left[\phi\big(g^{(l)}(x)\big) - \phi(x)\right]\lambda_t(dx) \\
&+ \frac{1}{2}\int_{x,y\in E} \left[\phi(x+y) - \phi(x) - \phi(y)\right] K_t(x,y)\,\lambda_t(dx)\,\lambda_t(dy).
\end{aligned}
\tag{3.5}
\]
This includes the time stamp without really making use of it in λ, the repre-
sentation of the particle population.
3.3.2 Deferred Surface Process Operator
Now define a family of mappings $R_t$, $t \in [0,\infty)$, which map $x \in E$ to a random variable $R_t x \in E \times \{t\}$ with the distribution that a particle with description x would have at time t in the absence of deferment. This only makes sense if t is later than the time stamp in x.

Also define an 'update operator'
\[ P_t : M \to M \]
by
\[ P_t(\lambda)(A) = \int_{x\in E} \mathbb{P}(R_t x \in A)\,\lambda(dx). \tag{3.6} \]
Let $U' \subseteq U$³ be the possibly empty set of surface reactions that are not deferred. One now wants to define a 'deferred solution' $\tilde\lambda$ via an equation like (3.5) such that, for any $\phi \in C$,
\[ \Phi(\lambda_t) = \Phi\big(P_t(\tilde\lambda_t)\big) \tag{3.7} \]
where
\[ \Phi(\nu) := \int_{x\in E} \phi(x)\,\nu(dx) \tag{3.8} \]
and $\lambda$ solves (3.5).
A fairly obvious approach is
\[
\begin{aligned}
\frac{d}{dt}\int_{x\in E} \phi(x)\,\tilde\lambda_t(dx) = {}& \int_{x\in E} \phi(x)\,I_t(dx) \\
&+ \sum_{l\in U'} \int_{x,\xi\in E} \left[\phi\big(g^{(l)}(\xi)\big) - \phi(x)\right] \beta^{(l)}_t(\xi)\,\mathbb{P}(R_t x = d\xi)\,\tilde\lambda_t(dx) \\
&+ \frac{1}{2}\int_{x,y,\xi,\zeta\in E} \left[\phi(\xi+\zeta) - \phi(x) - \phi(y)\right] K_t(\xi,\zeta)\, \mathbb{P}(R_t x = d\xi)\,\mathbb{P}(R_t y = d\zeta)\,\tilde\lambda_t(dx)\,\tilde\lambda_t(dy).
\end{aligned}
\tag{3.9}
\]
This defines a (or possibly some, since uniqueness is hard to establish) determin-
istic map from time to the soot population. However, while the resulting λ may
satisfy (3.7), there are real problems using it as a basis for a simulation. Firstly
there is the usual need to incorporate a majorant coagulation kernel (see e.g. [68]).
Secondly, even given a majorant, one still cannot handle individually the rates of
the infinity of possible jumps given by Rt.
Therefore an equation was developed that allows for the use of surface growth generated for events that prove to be fictitious. The basic idea is to find majorants $\hat\beta$ and $\hat K$ such that
\[ \hat\beta^{(l)}_t\big(x; \tilde\lambda_t\big) \ge \beta^{(l)}_t(R_t x) \tag{3.10a} \]
and
\[ \hat K_t\big(x, y; \tilde\lambda_t\big) \ge K_t(R_t x, R_t y) \tag{3.10b} \]

³ For the definition of U see (1.7)
as it is then sufficient to calculate rates for a particle using a majorant without
knowing anything about deferred events. The deferred events can then be incor-
porated when another event is actually performed. In addition, if a particle is
selected for a potential event that eventually turns out to be fictitious, postponed
processes can be simulated on it anyway and the updated particle returned to the
computational ensemble thus reducing the deferment error.
Given the random nature of $R_t x$ it is likely to be hard to find $\hat\beta$ and $\hat K$ to satisfy (3.10) with probability 1. Therefore the equation⁴

⁴ Note: for $a, b \in \mathbb{R}$, $a_+ \equiv \max(a, 0)$ and $a \wedge b \equiv \min(a, b)$
\[
\begin{aligned}
\frac{d}{dt}\int_{x\in E} \phi(x)\,\tilde\lambda_t(dx) = {}& \int_{x\in E} \phi(x)\,I_t(dx) \\
&+ \sum_{l\in U'} \int_{x,\xi\in E} \left[\phi\big(g^{(l)}(\xi)\big) - \phi(x)\right] \left[\beta^{(l)}_t(\xi) \wedge \hat\beta^{(l)}_t\big(x;\tilde\lambda_t\big)\right] \mathbb{P}(R_t x = d\xi)\,\tilde\lambda_t(dx) \\
&+ \sum_{l\in U'} \int_{x,\xi\in E} \left[\phi(\xi) - \phi(x)\right] \left[\hat\beta^{(l)}_t\big(x;\tilde\lambda_t\big) - \beta^{(l)}_t(\xi)\right]_+ \mathbb{P}(R_t x = d\xi)\,\tilde\lambda_t(dx) \\
&+ \frac{1}{2}\int_{x,y,\xi,\zeta\in E} \left[\phi(\xi+\zeta) - \phi(x) - \phi(y)\right] \left[K_t(\xi,\zeta) \wedge \hat K_t\big(x,y;\tilde\lambda_t\big)\right] \mathbb{P}(R_t x = d\xi)\,\mathbb{P}(R_t y = d\zeta)\,\tilde\lambda_t(dx)\,\tilde\lambda_t(dy) \\
&+ \frac{1}{2}\int_{x,y,\xi,\zeta\in E} \left[\phi(\xi) + \phi(\zeta) - \phi(x) - \phi(y)\right] \left[\hat K_t\big(x,y;\tilde\lambda_t\big) - K_t(\xi,\zeta)\right]_+ \mathbb{P}(R_t x = d\xi)\,\mathbb{P}(R_t y = d\zeta)\,\tilde\lambda_t(dx)\,\tilde\lambda_t(dy)
\end{aligned}
\tag{3.11}
\]
is formulated to make sense, even if the conditions in (3.10) are sometimes vi-
olated. Clearly (3.7) will not be satisfied if (3.10) do not hold, but provided this
violation only happens with very small probability one can hope to recover a close
approximation to (3.7).
3.3.3 Rate Kernel
Define jump operators on the space of signed Borel measures on E by
\[ J^V_{x,y,z,w} : \mu \mapsto \mu - V^{-1}\left(\delta_x + \delta_y - \delta_z - \delta_w\right), \qquad x, y, z, w \in E, \]
and
\[ J^V_{x,y,z} : \mu \mapsto \mu - V^{-1}\left(\delta_x + \delta_y - \delta_z\right), \qquad x, y, z \in E, \]
and
\[ J^V_{x,y} : \mu \mapsto \mu - V^{-1}\left(\delta_x - \delta_y\right), \qquad x, y \in E, \]
and
\[ J^V_{x} : \mu \mapsto \mu + V^{-1}\delta_x, \qquad x \in E, \]
where V is the sample volume for which events are to be simulated.
For $t \in [0,\infty)$ and $V \in (0,\infty)$ define kernels
\[ \alpha^V_t : M \times \mathcal{M} \to [0,\infty) \]
by specifying the value for $\mu \in M$, $C \in \mathcal{M}$:
\[
\begin{aligned}
\alpha^V_t(\mu, C) = {}& \int_{x\in E} \mathbf{1}_{\{J^V_x \mu \in C\}}\, V\, I_t(dx) \\
&+ \sum_{l\in U'} \int_{x,\xi\in E} \mathbf{1}_{\{J^V_{x,g^{(l)}(\xi)} \mu \in C\}}\, V \left[\beta^{(l)}_t(\xi) \wedge \hat\beta^{(l)}_t(x;\mu)\right] \mathbb{P}(R_t x = d\xi)\,\mu(dx) \\
&+ \sum_{l\in U'} \int_{x,\xi\in E} \mathbf{1}_{\{J^V_{x,\xi} \mu \in C\}}\, V \left[\hat\beta^{(l)}_t(x;\mu) - \beta^{(l)}_t(\xi)\right]_+ \mathbb{P}(R_t x = d\xi)\,\mu(dx) \\
&+ \frac{1}{2}\int_{x,y,\xi,\zeta\in E} \mathbf{1}_{\{J^V_{x,y,\xi+\zeta} \mu \in C\}}\, V \left[K_t(\xi,\zeta) \wedge \hat K_t(x,y;\mu)\right] \mathbb{P}(R_t x = d\xi)\,\mathbb{P}(R_t y = d\zeta)\,\mu(dx)\,\mu(dy) \\
&+ \frac{1}{2}\int_{x,y,\xi,\zeta\in E} \mathbf{1}_{\{J^V_{x,y,\xi,\zeta} \mu \in C\}}\, V \left[\hat K_t(x,y;\mu) - K_t(\xi,\zeta)\right]_+ \mathbb{P}(R_t x = d\xi)\,\mathbb{P}(R_t y = d\zeta)\,\mu^{(2)}(dx, dy)
\end{aligned}
\tag{3.12}
\]
where, for $A, B \in \mathcal{E}$, $\mu^{(2)}(A \times B) = \mu(A)\,\mu(B) - \mu(A \cap B)$.
Using the kernels define generators for stochastic processes on M by
\[ G^V_t(\Psi)(\mu) = \int_{\nu\in M} \left[\Psi(\nu) - \Psi(\mu)\right] \alpha^V_t(\mu, d\nu) \tag{3.13} \]
for $\Psi : M \to \mathbb{R}$ in a suitable class of test functions (not necessarily of the form given in (3.8)) and assuming (3.10) one can approximate (3.11) as
\[ \frac{d}{dt}\Phi\big(\tilde\lambda_t\big) = G^V_t(\Phi)\big(\tilde\lambda_t\big), \tag{3.14} \]
which has an error that is $O(V^{-1})$ as $V \to \infty$. By analogy with the work of Eibeck and Wagner (see [48] for a basic introduction and [51] for their latest generalisation), the stochastic jump processes $\mu^V_t$ given by the generators $G^V_t$ converge in distribution as $V \to \infty$ to a deterministic solution of (3.11) provided the initial conditions also converge. One then expects $P_t(\tilde\lambda_t)$ to converge in distribution to a deterministic solution, $\lambda_t$, of (3.5) in the same limit, that is, to be a weak solution of the extended form of Smoluchowski's coagulation equation (1.4).
3.3.4 Implementation of Deferment
The implementation of this Linear Process Deferment Algorithm (LPDA) is es-
sentially an application of DSA to the processes given in (3.13).
It is worth noting the procedure at the times (generally every millisecond)
when statistics on the size distribution of the particles are collected and occasion-
ally the entire distribution is recorded. At these times, all deferred growth due to
all particles is simulated so that the size distribution is fully up to date. This has
the convenient effect of ensuring no stochastic particle is left for too long before
its deferred surface growth is performed. One can therefore think of LPDA as a
form of splitting but with a ‘just in time’ feature to remove the main source of ap-
proximation error. However, with LPDA the number of times the entire ensemble
has to be updated is orders of magnitude less than for ordinary operator splitting
and so for practical purposes it is a completely different algorithm.
Complexity
The (random) number of deferred events due to any particular particle is O (1) in
the sample volume V . There are O (V ) non-deferred events in a run and each of
these requires use of the binary tree which takes O (log V ) per event (see §2.3.3).
Hence the overall complexity is still O (V log V ) in the (initial) sample volume
for reasonable choices of tree depth. However in §3.1.2 it was noted that most
computational effort seemed to go on surface reactions and so if all these were
deferred the complexity might not be observed for realistic values of V .
3.3.5 Numerical Results
Accuracy of Moments
The same JW1.69 test case was used, with a tree depth of 13, as in §3.2.1. For the
calculations reported in this chapter and chapter 4 all surface reactions apart from
pyrene condensation were deferred, that is,
\[ U' = \{\text{pyrene condensation}\}. \tag{3.15} \]
Pyrene condensation was not deferred because of the relative complexity of $\beta^{(\mathrm{pyr})}$ and the potentially large change in particle size caused by a single pyrene condensation event⁵. Ten runs took 250 seconds, an appreciable acceleration compared
to the operator splitting cases. Figure 3.4 compares the zeroth and second mo-
ments of the mass distribution obtained with LPDA to the DSA precision run and
to the ‘pv’ r = 10 run. One can see that LPDA slightly overstates the number
of particles compared to the other two methods at least until 0.08 s. Interestingly,
with the second moment, LPDA tracks the operator splitting solution until about
0.1 s and then moves much closer to the DSA solution. The first moment is not
shown, but here LPDA is virtually indistinguishable from DSA on the scale of the
figures and noticeably better than operator splitting.
Particle Distribution Accuracy
As with operator splitting this topic is investigated for JW10.68; a summary of the
simulations is given in table 3.4. The deferred surface growth computations were
performed on the same machines used to generate results for §3.2.1. As before '†'

⁵ The work in chapter 5, carried out at a time when LPDA was better understood, ultimately showed that it is acceptable to take $U' = \emptyset$, the empty set, by deferring pyrene events.
Figure 3.4: LPDA applied to JW1.69: particle number density (10¹¹ cm⁻³) and second mass moment (10⁻²⁴ g² cm⁻³) against time (s); curves dsa 17, pv 10 and lpda
denotes the Athlon machine. One observes very considerable speed gains from
using LPDA.
Table 3.4: Detail of deferred surface growth runs for JW10.68
algorithm   number of tree levels   time for 10 repetitions / min
DSA         13                      8497
LPDA        15                      22†
LPDA        14                      11†
LPDA        13                      5.25†
LPDA        12                      2.5†
The LPDA run time with 13 tree levels is more than 3 orders of magnitude
less than that for DSA, which is a very positive result. Figure 3.5 shows that the
particle distribution is well reproduced using LPDA.
3.3.6 Conclusions on LPDA
This method is extremely attractive for further use in the simulation of laminar
premixed flames because of the large speed gains possible without introducing
large errors. The comments on the errors made in §3.2.4 apply here as well.
The initial sample volume was scaled in direct proportion to the tree capacity for all the runs; for 13 tree levels an initial sample volume of 1 × 10⁻⁸ cm³ was used. It is therefore interesting to see the run times showing O(V) behaviour, not O(V log V) as discussed in §3.3.4.
3.4 Comparison of Simulation Methods
3.4.1 Equal Numbers of Computational Particles
A clear ordering in terms of speed can be seen in table 3.5. The differences in the
particle distributions produced using the different methods are not material.
Figure 3.5: Particle distribution (PSDF) for JW10.68: number density (cm⁻³) against sphere equivalent diameter (nm); curves dsa 14, lpda 13 and lpda 15
Table 3.5: Comparison of simulation methods
algorithm   flame     number of tree levels   time for 10 repetitions / min
DSA         JW10.68   13                      8497
pv r = 5    JW10.68   13                      844
LPDA        JW10.68   13                      6.33
DSA         JW1.69    13                      50
pv r = 5    JW1.69    13                      8
LPDA        JW1.69    13                      4.2
The ratios of run times between methods are very different for the two flames
considered, as are the total run times. Unsurprisingly, the greatest accelerations
were achieved for the problem that takes the most CPU time.
3.4.2 Equal Precision for JW1.69
Finally, the time required to obtain results of comparable quality using the three
main algorithms discussed in this chapter is considered. Result quality is mea-
sured using the second moment of the particle size distribution on exit from the
flame. The reference value is the mean obtained from 30 runs using basic DSA, 17
tree levels and an initial sample volume of 5.6× 10−7 cm3. Table 3.6 gives details
of parameters which lead to 99.9% confidence intervals for the second moment
being entirely contained in an interval of ±10% around the reference value.
Table 3.6: Time to achieve tolerances for JW1.69
algorithm   number of tree levels   repetitions   total time / s
DSA         10                      30            990
DSA         11                      15            645
LPDA        11                      15            98
LPDA        12                      10            133
pv r = 5    12                      20            500
pv r = 5    13                      10            480
The parameters shown were selected as the least multiples of 5 runs that led
to adequate results for the second moment. More runs or more tree levels were
required for LPDA and operator splitting because these methods slightly bias the
mean value of the second moment and so tighter confidence intervals are needed
to stay within the ±10% range.
3.4.3 Summary
The ‘pv’ method and LPDA both seem to provide a good level of accuracy and
offer significant savings over the DSA in computation times. For a given level of
accuracy LPDA required the least amount of computer time and so is the method
of choice. LPDA was implemented in a way which preserved the stochastic nature
of reactions at the surfaces of soot particles. This raises the possibility of achieving
further computational savings by using a simpler, deterministic approximation for
the surface reactions.
Chapter 4
Deterministic Simulation of the Surface Processes
The previous chapter focussed on the acceleration of the basic DSA while preserving the random nature of the chemical reactions with the surfaces of soot particles. However, the high rates of these reactions suggest that, on all time scales of interest, the variability in the number of reactions might be small. In this case, deterministic approximations should offer simpler (hence faster) simulation techniques without significant loss of accuracy.
The exploration was carried out within the frameworks of the basic operator
splitting algorithm set out in chapter 3 and of the LPDA presented in the same
chapter. The coalescent sphere particle model (§1.4.1) was used throughout so
that all particles were always spherical. Deterministic approximations to random
variables have been successfully used as an acceleration technique for cell simu-
lation in biology [94]. The mathematics of the deterministic approximation and
some intermediate stages are set out in a little more detail in [66], where it is
applied with success to the simulation of chemical reactions.
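The premise can be quantified before any simulation: a Poisson count with mean λ has standard deviation √λ, so its relative fluctuation decays as λ^(-1/2). A minimal sketch (the example rates are illustrative, not taken from the thesis):

```python
import math

def relative_fluctuation(lam):
    """Relative standard deviation of a Poisson(lam) event count."""
    return math.sqrt(lam) / lam

# a fast surface process with ~10^4 expected events per splitting step
# fluctuates by only ~1%, while a slow process with ~4 expected events
# fluctuates by 50% and so should remain stochastic
fast = relative_fluctuation(1.0e4)
slow = relative_fluctuation(4.0)
```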
4.1 Operator Splitting
Operator splitting provides the simplest situation in which to test numerical methods for including the effects of surface reactions in simulations. The basic algorithm is detailed in §3.1.1 & §3.1.2; however, the benefits of using a more complex splitting system, such as a Strang splitting, remain to be investigated. To cope with
the wider variety of algorithms studied in this chapter, the names used in chapter 3
have been expanded to form a more descriptive scheme. The ‘pv’ algorithm from
chapter 3 is denoted pv2s in this chapter; ‘pv’ for Poisson variable as previously,
‘2’ because there are 2 levels of splitting and ‘s’ because the top level splitting
time step is specified in terms of the rates of the split processes.
4.1.1 Modified Choice of Top Level Time Step
For the principal test case, the flame JW10.68 considered previously, it was found
that considerable performance benefits accrued if the length of the splitting time
steps was chosen in terms of the rates of the unsplit processes, rather than the
split processes as in chapter 3. The length of the time over which one part of the
split operator was applied before the other part of the operator was applied was
chosen so that the expected number of non-split events per computational parti-
cle during this interval was r. This constitutes a small amendment to point 2a)
of the algorithm in §3.1.2. Using this new choice of time step, r could be set to
0.1 which reduced run times by a factor of 30 compared with the old method in
the case r = 20. This acceleration was achieved while still providing the same
accuracy obtained with the old method for r = 5. Using a value of r = 0.02 with
the new method offered an acceleration by a factor of 4 over the old method with
r = 5, at the same time as a gain in accuracy. Accuracy was quantified as the
deviation of the lower order moments from the high precision DSA results used
as the reference solution in this chapter and in chapter 3. Times are not directly
comparable to those reported in chapter 3 because the work reported in this chap-
ter was done with a different compiler running on slightly different hardware. The
modified algorithm is denoted pv2j; for the meaning of ‘pv2’ see the start of §4.1
above and ‘j’ is used to indicate that the top level splitting time step is specified
in terms of the rate of the non-split processes, which are simulated as a compos-
ite jump process. The same splitting strategy is applied, using a different Monte
Carlo technique, to a slightly different physical system in [47].
4.1.2 Elimination of second level of splitting
For the flame used as the main test case in the first part of this chapter, JW10.68, it
was found that the second level of splitting in the ‘pv’ algorithm of chapter 3 was
redundant, since the top level splitting steps were already of the order of the time
over which the chemical conditions varied. Reducing to the 1 level algorithms
pv1s and pv1j had a negligible effect on the results produced by the simulations
and, in most cases, made little difference to the run times. In other systems the
chemistry could vary over time scales much shorter than the 1 level splitting step
length, so the more general applicability of these methods will be discussed fur-
ther. Most of the work in this chapter will use JW10.68 as a test case, since
it enables splitting strategies to be considered without additional complications
arising from a second level of splitting.
4.1.3 Initial Trial of Deterministic Method
In each place in the pv1j algorithm, where a Poisson random variable was gener-
ated to give the number of occurrences of a particular kind of surface event, the
(pseudo-)random value was replaced with its mean. This algorithm is denoted
e1j; ‘e’ for Euler, ‘1’ because there is still 1 level of splitting and ‘j’ for the rea-
son given above in connection with the pv algorithms. To provide a check on the
results obtained by implementing the e1j algorithm, the existence of an analytic
solution to the spherical particle growth problem was exploited.
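The substitution at the heart of e1j can be sketched as follows. This is an illustrative sketch, not the code used in this work; `surface_event_count` and its arguments are hypothetical names, and the Poisson sampler stands in for whatever generator the pv1j implementation uses.

```python
import math
import random

def poisson_sample(mean):
    """Knuth's method; adequate for the small means relevant here."""
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def surface_event_count(rate, dt, deterministic=False):
    """Number of events of one surface process in a splitting step of length dt.

    pv1j-style: sample a Poisson random variable with mean rate * dt.
    e1j-style:  replace the (pseudo-)random value with its mean.
    """
    mean = rate * dt
    return mean if deterministic else poisson_sample(mean)
```

With `deterministic=True` the step becomes an Euler-type update, at the cost of allowing fractional event counts.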
Under the assumption of deterministic, continuous surface growth proportional
to particle surface area, and equating the type of a particle, x, with its mass,
one has, for some function k of the chemical environment,

    dx/dt = k(t) x^{2/3}.    (4.1)
This may be solved for individual particles, giving

    x(t1)^{1/3} − x(t0)^{1/3} = (1/3) ∫_{t0}^{t1} k(t) dt.    (4.2)
Since k (t) is known in advance (§1.4), the numerical calculation of the right hand
side of (4.2) and hence the change in particle mass is simple.
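As a minimal sketch of this calculation (not the thesis code), the right hand side of (4.2) can be evaluated with, say, the trapezoidal rule; `k` below is any known rate profile, and the particle mass is in the scaled units of the text.

```python
def grow_particle(x0, k, t0, t1, n=1000):
    """Advance a particle of scaled mass x0 from t0 to t1 using (4.2):
    x(t1)^(1/3) = x(t0)^(1/3) + (1/3) * integral of k(t) over [t0, t1]."""
    h = (t1 - t0) / n
    # trapezoidal rule for the integral of k over [t0, t1]
    integral = 0.5 * (k(t0) + k(t1)) * h + sum(k(t0 + i * h) for i in range(1, n)) * h
    return (x0 ** (1.0 / 3.0) + integral / 3.0) ** 3
```

For a constant rate k(t) = 3 and unit initial mass this gives x(1) = (1 + 1)^3 = 8.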
Therefore (4.2) was used to calculate the total size change due to all split
processes as an alternative to the Euler method just described. This alternative
method is denoted as1j, with 'as' standing for analytic splitting and '1j' having
the same significance as above. The deviations from the reference solution for the
first three moments of the soot particle distribution are given for the various algo-
rithms and two values of r in table 4.1. It is interesting to note that the moments
are never underestimated by the 1 level splitting algorithms; the explanation of
this would seem to be that splitting prevents small particles from being oxidised
out of the distribution, because it treats surface oxidation and growth together. The
e1j and as1j algorithms show close agreement, offering some assurance that they
were implemented correctly. One sees that these deterministic splitting methods
overstate m1 and, to a greater extent, m2, even in relation to the pv1j method,
indicating that e1j and as1j over-predict soot particle sizes. Unlike for pv1j, the
choice of r seems to have little effect on the errors for e1j and as1j.
Times in seconds for 10 runs on single AMD Athlon XP3000+ Linux PCs are
given in table 4.2 for simulations using each algorithm. The same settings (an
initial sample volume of 2.5 × 10^9 cm^3 and a binary tree depth of 11) were used
in all cases, so that the number of computational particles varied between 1024
and 2047 in all the simulations after the initial inception peak.
The slight speed advantage of as1j over e1j will be connected to the fact that in
the as1j algorithm the effects of all split processes are calculated together, whereas

(Footnote: m1 is used here for M1 as defined in (1.2); m0 and m2 should be understood in the same way.)
Table 4.1: Numerical errors in JW10.68 distribution moments

time / s      r (see text)   algorithm   m0    m1    m2
0.08          0.02           pv1j        0%    0%    1%
                             e1j         0%    4%    9%
                             as1j        1%    5%    8%
              0.1            pv1j        1%    2%    3%
                             e1j         1%    4%    8%
                             as1j        1%    5%    7%
0.16 (exit)   0.02           pv1j        1%    1%    1%
                             e1j         1%    4%    7%
                             as1j        1%    5%    9%
              0.1            pv1j        1%    2%    5%
                             e1j         2%    5%    8%
                             as1j        2%    5%    7%

Table 4.2: Run times in seconds for different algorithms on the same hardware

r      pv1j   e1j   as1j   ad1j
0.02   560    397   259    310
0.1    138    92    76     81
for e1j this step has to be repeated for each split process. On the basis of this data,
either pv1j or as1j is to be preferred for further use, but not e1j since it is slower
than as1j, while producing similar results. The choice between pv1j and as1j will
depend on the relative emphasis placed on accuracy and computational speed.
4.1.4 Adaptive splitting
In this section an algorithm denoted ad1j is introduced where ‘ad’ stands for adap-
tive in a sense that will be explained, and ‘1j’ has the same significance as above.
As mentioned in §4.1.3, splitting seems to introduce an error by preventing small
particles being oxidised out of the soot population, because oxidation quickly fol-
lowed by surface growth can lead to a zero or positive net size change. To test this
explanation of the observed error and to try to reduce the overall error caused by
assuming deterministic surface growth in the as1j and e1j algorithms, even when
a very small number of events was probable, the as1j algorithm was modified to
use a stochastic method for particles when the expected number of events within a
second level splitting step was less than 5. The stochastic method called the ‘pp’
algorithm in chapter 3 (which corresponds to direct simulation of the split pro-
cesses for single particles) was used in this case, as it addresses the small particle
oxidation issue. Table 4.3 contains the same data for the ad1j algorithm as that
given in table 4.1 for the earlier algorithms.
Table 4.3: Numerical errors in JW10.68 distribution moments

time / s      r      algorithm   m0    m1   m2
0.08          0.02   ad1j        0%    0%   0%
              0.1    ad1j        -1%   0%   2%
0.16 (exit)   0.02   ad1j        0%    0%   1%
              0.1    ad1j        -1%   1%   3%
These results show the ad1j algorithm can reproduce the reference solution
very successfully, and is at least as accurate as the pv1j algorithm. An implemen-
tation of the ad1j algorithm is naturally slower than a comparable implementation
of as1j, but it is still faster than e1j (see table 4.2). The negative deviation of m0
from the reference value with r = 0.1 is consistent with the claims made above
about the oxidation of very small particles, but the difference is too small to allow
any firm conclusions.
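The switching rule of ad1j can be sketched as below. This is an illustrative sketch under the description above, not the thesis implementation; `stochastic_update` and `analytic_update` are hypothetical stand-ins for the 'pp' treatment and the analytic update of as1j, and the threshold of 5 expected events is the one quoted above.

```python
EVENT_THRESHOLD = 5  # switch point quoted in the text

def advance_particle(particle, event_rate, dt, stochastic_update, analytic_update):
    """ad1j-style dispatch for one particle over one splitting step.

    Particles expecting fewer than EVENT_THRESHOLD surface events are treated
    stochastically, preserving the chance that a small particle is oxidised
    out of the population; all others are advanced deterministically.
    """
    if event_rate * dt < EVENT_THRESHOLD:
        return stochastic_update(particle, event_rate, dt)
    return analytic_update(particle, dt)
```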
4.1.5 Testing with a second flame
The above algorithms were checked for a second flame—JW1.69. The pv2s al-
gorithm with r = 5, 20 produced similar results to pv1s for the same r. With
r = 0.02, 0.1 pv2j produced similar results to pv1j for the corresponding r, which
is not surprising given that the top level time steps were already on the time scale
of the variation in the chemistry. The time differences between the 1 and 2 level
algorithms were negligible. However, unlike for JW10.68, the ‘s’ versions of the
pv1 and pv2 algorithms were faster than the ‘j’ versions. (The most likely cause
of the difference in performance between the two flames is that the relative fre-
quencies of the split and unsplit events are quite different, for details see table 2.3
and table 3.2.) Implementations of pv1j and pv2j with r = 0.1 took 16 minutes
for 20 repetitions of JW1.69 with the number of computational particles varying
between 4096 and 8191, whereas pv1s and pv2s with r = 5 only took about 10
minutes. The results from pv1j and pv2j in this test were at least as accurate as
those from pv1s and pv2s.
The e1s, e2s and e1j algorithms with the r values just mentioned all produced
similar output to each other when applied to JW1.69. However, the moments cal-
culated with these three algorithms were very different from those of the reference
solution—the error reached about 100% for m2. The analytic splitting method
as1j with r = 0.1 produced moment values close to those of as1s with r = 5.
Both ‘as’ methods were less inaccurate than the ‘e’ algorithms. It is not clear
why as1j should lead to different moments from those obtained with e1j and e2j.
Algorithm ad1j, the variant of as1j, produced the same high accuracy for JW1.69
that it had produced for JW10.68, but was still slower than pv2j with r = 5 (13.5
minutes compared to 10.5).
The best algorithm for the two flames investigated would seem to be ad1j with
r = 0.1. It works well for the flame JW10.68, which led to particularly slow DSA
calculations, and is also satisfactory when applied to JW1.69.
4.2 Deterministic Simulation of Deferred Events
Results from the profiling of an LPDA simulation of JW1.69 are reported in fig-
ure 4.1. They show that 30% of the computation time was spent on the top level
stochastic stepping for the processes simulated as in the DSA—particle incep-
tion, pyrene condensation and particle coagulation. Simulating individual coag-
ulation events took a further 13% of the time, some of which was attributable
to the need to perform deferred events on coagulation candidates. About 50%
of the computation time was required for the simulation of pyrene condensation
events. Approximately one third of this 50% was spent simulating deferred events
on particles selected to undergo pyrene condensation, and another third on updat-
ing the binary tree structure with the results of the event. Given the similarities
in the implementations of coagulation and pyrene condensation, it seems reason-
able to expect a similar breakdown of the 13% of computation time spent on the
simulation of coagulation events. Most of the computation time not accounted for
above was spent in the simulation of deferred events for all particles at times when
data was recorded. The simulation of particle inception did not require significant
amounts of time.
The profiling reported in figure 4.1 shows that pyrene condensation is the most
time consuming part of the simulation after the other surface reactions have been
removed from the DSA style simulation process according to (3.15). This is con-
sistent with the event counts presented in table 3.2. If the simulation of deferred
events could be accelerated so that the time required was a small fraction of that
in the profiled program one could look for a reduction of up to 20% in the total
simulation time. Given that a considerable amount of work had already been done
to accelerate the binary tree operations and the DSA simulation code, there were
only two sensible routes to try to achieve further acceleration. Firstly, one could
try to incorporate the pyrene condensation in the framework developed for the
[Figure 4.1: % of execution time spent on different tasks (LPDA with 'pv'):
calculating stochastic step length 14%; other time step overheads 15%;
pyrene (deferred surface growth) 20%; pyrene (updating ensemble) 14%;
other (pyrene event related) 17%; coagulation 13%; updating entire ensemble 6%;
unclassified 1%.]
other, simpler surface reactions. (This was successfully done in the bivariate
simulations of chapter 5.) The second approach, described here, is to look for
ways to accelerate the simulation of deferred processes for particles involved in
non-deferred events.
4.2.1 Results with LPDA
On the basis of the work on operator splitting above, the effect of using the ad1
algorithm to perform the deferred processes within LPDA was investigated. A
summary of the output for JW10.68 is given in table 4.4 and suggests that incor-
porating an ‘ad’ algorithm into LPDA slightly increased the accuracy from that
achieved in the initial version (using pv2).
Table 4.4: Numerical errors in JW10.68 distribution moments

time / s      algorithm   m0    m1    m2
0.08          ad2         0%    0%    0%
              ad1         1%    3%    5%
              pv2         2%    4%    7%
              pv1         0%    -2%   -4%
0.16 (exit)   ad2         1%    3%    7%
              ad1         2%    4%    7%
              pv2         -1%   -2%   -3%
              pv1         0%    -8%   -14%
It can also be seen that pv1 and pv2 lead to significantly different results (un-
like for the splitting situation). Tests on JW1.69 show the same relative perfor-
mance from the sub-algorithms used for the deferred processes. Run times (in
seconds) for LPDA simulations with the 4 different sub-algorithms for the de-
ferred process are given in table 4.5 for the same settings used when measuring
the run times quoted in previous sections of this chapter. The ad1 treatment of the
deferred processes reduces the computation time by about 20% for both flames
considered, which is the maximum suggested by the profiling discussed above.
Table 4.5: Run times in seconds for different sub-algorithms on the same hardware

sub-algorithm   pv2   pv1   ad2   ad1
JW10.68         52    43    47    41
JW1.69          354   308   337   298
The times using the pv1 sub-algorithm show that much of the reduction is due to
the removal of the time stepping within the simulation of deferred processes.
4.3 Recommended Algorithm
When using the ad1j r = 0.1 splitting algorithm (the algorithm recommended as
the best of the splitting methods, see §4.1.5) run times of 13.3 minutes and 81
seconds were achieved for simulations of the flames JW1.69 and JW10.68 respec-
tively. Using LPDA with a pv2 treatment of the deferred surface processes took
5.9 minutes and 52 seconds respectively for the same two test cases. Therefore,
for the two flames tested, LPDA seems to be the best algorithm to use and is to be
recommended as the default option for other flames.
Chapter 5
Models for Particle Shape
The focus in this chapter shifts, from the numerical development that has occupied
the thesis so far, to an application of the LPDA, which seems to be the most
suitable algorithm for simulating soot formation in laminar premixed flames. The
work reported in this chapter can be viewed both as a demonstration of the power
of the Monte Carlo simulation approach established in the preceding chapters and
as a contribution to physical model development.
5.1 Background
The fractal nature of large soot particles has been known for some time [102,
186, 232]. The extensive review paper [185] gives an indication of how much
work has been done on experimental techniques, to measure and to account for
the properties of these fractal structures. Detailed consideration of how to account
for the fractal nature of soot particles from road vehicles has even been included
in atmospheric modelling [88].
Although initial results [11, 245, 246] for inorganic nanoparticles show the
potential importance of good models for aggregate structure, the simulation of
the fractal structure of soot particles is in its infancy. Mitchell and Frenklach investigated
soot aggregation [133–135] by representing a single aggregate particle as a union
of intersecting spheres. The natural extension of that approach to a population
of particles is discussed by Balthasar et al. [17] and Morgan et al. [140], but
this is very computationally expensive. A simplification of the model includes
an assumption that particle size and structure are independent, and incorporates
a numerical fit for the functional form of the particle collision diameter. It en-
ables very fast calculations to be made for the lower moments of the particle mass
distribution and mean shape [14]. These ‘method of moments’ calculations of-
fer considerable insight into the development of particle structure, although it is
difficult to assess the validity of various modelling approximations.
Park and Rogak [155] added a partial representation of aerosol particle struc-
ture to a one dimensional sectional technique to simulate particle formation in a
plug flow reactor [156]. Their work differed from the approach of Frenklach and
his co-workers by representing particle structure with a physical quantity—the
average number of primary particles per aggregate, with a separate average be-
ing taken within each size section. To calculate particle collision diameters from
this information, an assumed fractal dimension of 1.8 was imposed [215, 216],
which implies a very open particle structure. This value of 1.8 has been used by
other authors, for example, in [196] and reported from diffusion flame soot [101].
Slightly lower values were reported from light scattering experiments on premixed
methane-oxygen flames in [187].
However, support for values around 1.8 is not universal: fractal dimensions of
2.2, 2.1 and 2.2 are reported by Maricq et al. [121] at increasing heights in a
premixed ethylene flame. They also report some remarkable comparisons of
experimental and modelling data [120], and show fractal dimension decreasing
with increasing height above the burner in a laminar premixed flame. A fractal
dimension of approximately
1.8 has been reported for randomly diffusing aggregates in a wide range of simu-
lation work, for example, the Brownian trajectory cluster-cluster case of Meakin
[127]. However, it is shown in [19] that small units of surface growth lead to
higher fractal dimensions than pure aggregate-aggregate coagulation. Meakin
also showed that aggregate restructuring can lead to more compact particles, that
is, cause an increase in fractal dimension [128] and it appears that this would even-
tually lead to a fractal dimension of 2.5 [39, 100]. Changes in fractal dimensions
are therefore to be expected as soot particles move from chemically rich environ-
ments where they become dense to regions dominated by coagulation. Very rich
flames will emit dense particles with high fractal dimensions [149]. In flames
with a significant coagulation-only post reaction zone, particles will develop a
very open structure with a fractal dimension around 1.8 before leaving the flame.
The influence of fractal dimension on coagulation rates is reported to be small
in the continuum regime (particles large compared to the mean free path) [99] but
to be significant in the free molecular regime [99, 245]. This means that, even in
high pressure flames, fractal structure should be considered in order to calculate
accurate collision rates, because the free molecular regime will apply to small soot
particles. For example, a 5 nm particle in gas at a pressure of 10 bar is in the free
molecular regime. Therefore, despite the successful application of imposed fractal
dimension models, something more general is also required.
This chapter introduces and reports tests of models for the aggregate structure
of soot particles with fully bivariate (that is, where each particle is described by
two internal co-ordinates, which are not functionally dependent on each other)
simulations. Development of these models is a critical first step in building simu-
lations with realistic models for soot surface chemistry. Without such simulations
it will be impossible to progress from descriptive fits [8, 95, 231] to physical mod-
els for surface growth rates. Simulation of particle structure will also enable more
direct comparisons with experimental data based on particle mobility [109, 240]
and scattering techniques such as those discussed above and, for example, in [208]
and [24]. Readers may also be interested in the work of Muhlenweg et al. [142],
which addresses the question of how to go beyond the spherical particle model for
inorganic particles using sectional methods.
92
5.2 Framework
5.2.1 Flame chemistry model
As stated in §1.4, the basic soot model, an extension to which is described below,
is that of Appel et al. [8]. It includes an assumption of spherical particles and is
based on the gas-phase chemistry from Wang and Frenklach [207]. Soot particles
are assumed to form exclusively through the coalescence of two pyrene molecules
to make a (spherical) particle of 32 C atoms. Surface growth is described by
surface deposition of further pyrene molecules and by the Hydrogen-Abstraction-
Carbon-Addition (HACA) mechanism. Surface oxidation by oxygen (O2) and
hydroxyl radicals (OH) is included. For all the flames examined it was found that
acetylene addition was the main route of mass growth.
5.2.2 Numerical method
The simulations reported in this chapter were performed using the LPDA accel-
eration of DSA with the ‘pv2’ treatment of deferred processes recommended in
§4.3. Further tests, alluded to in §4.2, showed that pyrene growth could be de-
ferred despite leading to much larger jumps in particle size than the other surface
processes. Accordingly, in order to achieve additional savings and to simplify the
software, pyrene condensation events were deferred in all the simulations reported
in this chapter, that is, (3.15) was replaced by

    U′ = ∅.    (5.1)
The stochastic particle simulation method developed in the preceding chapters
is particularly well suited to the model exploration work of this chapter, because
the complexity of the individual particle dynamics can be increased without in-
creasing the computational complexity of the calculations. Since the program
complexity was unchanged the amount of computer time required for the bivari-
ate simulations reported in this work was only slightly greater than for univariate
simulations reported earlier in this thesis, which only tracked particle masses; the
small increase was due to the doubling of the amount of data to process. The
programs used in this chapter were written by making small extensions to code
previously used for univariate particle simulations.
5.2.3 Particle shape variable
Soot modelling has generally concentrated on particle mass or volume. The equiv-
alence of mass and volume follows from the assumption that all soot material has
the same density. The work reported here uses the value supplied with the soot
mechanism [8] as implemented by its authors [166]: 1.8 g cm−3. The apparent or
effective density [121] of aggregate structures may take a lower value, which is
reflected in the shape modelling in the present work.
In this chapter mass and surface area are used as the independent variables
in the particle descriptions or types, hence the term “bivariate model” used exten-
sively hereafter. Surface area is used rather than the number of primary particles as
in [155] because it gives a description immediately equivalent to that of the shape
descriptor by Mitchell and Frenklach [133–135]. Surface area has also been used
in bivariate simulations of inorganic nanoparticles [172, 223].
5.2.4 Notation
For convenience the following dimensionless functions are defined; the quantities
are divided by their respective values for newly incepted particles (type x0) so they
all take the value 1 at x0. (x0 describes a spherical particle of 32 C atoms—the
assumed form of all newly incepted particles, see §1.4.)
• c(x) the collision diameter of the particle
• m(x) the mass of the particle
• s(x) the surface area of the particle
• v(x) the volume of the particle (= m(x))
A shape descriptor d is defined as [14, 135]

    d(x) = log(s(x)) / log(v(x))    (5.2)

and hence setting d(x0) = 2/3 (the value would otherwise be undefined at x0)
yields 2/3 ≤ d ≤ 1. Note that for a sphere, given x0, any two of d, v and s define
the value of the third, and that d depends on x0 via the normalisation of v and s.
A detailed discussion of the shape descriptor is given by Mitchell and Frenklach
[133]; a sphere has a descriptor of 2/3, a chain aggregate of particles (any length
> 1) of type x0 has a shape descriptor of 1, and, for example, a chain of 100
spheres of diameter 13 nm has a shape descriptor of 0.79.
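Definition (5.2) and the x0 convention can be checked numerically; the sketch below (illustrative only, not the thesis code) evaluates d for chains of n equal spheres in point contact, normalised to the 0.88 nm incepted particle of §1.4.

```python
import math

D0_NM = 0.88  # diameter of a newly incepted (32 C atom) particle

def shape_descriptor(n_spheres, sphere_diam_nm):
    """Shape descriptor (5.2) for a chain of n equal spheres in point contact,
    with s and v normalised so that a single x0 sphere has s = v = 1."""
    ratio = sphere_diam_nm / D0_NM
    s = n_spheres * ratio ** 2  # scaled surface area
    v = n_spheres * ratio ** 3  # scaled volume
    if s == 1.0 and v == 1.0:   # d(x0) is set to 2/3 by convention
        return 2.0 / 3.0
    return math.log(s) / math.log(v)
```

A single sphere of any diameter gives 2/3, a chain of x0 spheres gives 1, and a chain of 100 spheres of diameter 13 nm gives approximately 0.79, matching the values quoted above.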
5.3 Models for Lengths
5.3.1 Collision Diameter
As mentioned in the introduction, the coagulation rate is much more sensitive
to collision diameter in the free molecular regime than in the continuum regime.
This is not especially surprising given the form of the coagulation kernel between
two particles [99] in the continuum regime:
    Kcn(x, y) ∝ (c(x) + c(y)) (1/c(x) + 1/c(y)).    (5.3)
This only depends on the ratio of the collision diameters of the two particles, not
on their absolute values, and so some kind of cancellation is expected if both are increased.
However, the free molecular kernel [99]:
    Kfm(x, y) ∝ (1/m(x) + 1/m(y))^{1/2} (c(x) + c(y))^2    (5.4)
scales with the square of the absolute value of the collision diameters.
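The two scalings can be made concrete with a sketch of (5.3) and (5.4) up to their proportionality constants (illustrative only):

```python
def k_continuum(cx, cy):
    """Continuum kernel (5.3), omitting the constant of proportionality."""
    return (cx + cy) * (1.0 / cx + 1.0 / cy)

def k_free_molecular(mx, my, cx, cy):
    """Free molecular kernel (5.4), omitting the constant of proportionality."""
    return (1.0 / mx + 1.0 / my) ** 0.5 * (cx + cy) ** 2
```

Doubling both collision diameters leaves the continuum kernel unchanged but multiplies the free molecular kernel by four.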
In the spherical particle model the collision diameters of the particles are simply
taken to be the ordinary diameters of the spheres, and so with the scaling of
§5.2.4, c(x) = v(x)^{1/3}. In this section ways to proceed for non-spherical particles
are set out.
Mitchell & Frenklach Collision Diameter
In Mitchell and Frenklach [134, 135], the collision cross-section was expressed
in terms of the radius of gyration, which was calculated using a time consuming
Monte Carlo integration. Mitchell [135] also proposed the following model for
the collision diameter, cagg, of an aggregate x comprising spherical particles in
point contact, as a practical approximation for use in further work:
    cagg(x) = k v(x)^{1/3} + 2(1 − k) [ ((a v(x)^{1/3})^b + v(x)^{b/3}) / ((a 2^{1/3})^b + 2^b) ]^{1/b}    (5.5)

with

    a = 1.53311,   b = −1.35419,   k = 0.43074.
The above expression was extended to general particles by interpolation using the
shape descriptor d to give

    c(x) = 3 [ (d(x) − 2/3) cagg(x) + (1 − d(x)) v(x)^{1/3} ].    (5.6)
Balthasar & Frenklach Collision Diameter
Balthasar and Frenklach [14] went further in simplifying the work mentioned
above. They introduced a correlation derived from the more detailed work which
expressed collision cross-section as a function of aggregate volume and shape de-
scriptor:
    c(x) = (2.7375 d(x) − 0.825) v(x)^{1/3}.    (5.7)
By making several further approximations, in particular, that at any time the shape
descriptor was the same for all particles, they were able to use the method of
moments with interpolative closure (MoMIC) [57] to achieve notable computational
savings.
Arithmetic Mean Collision Diameter
To test the sensitivity of results to the exact collision cross-section model, the
arithmetic mean of the volume and surface equivalent diameters is introduced:
    c(x) = (1/2) (v(x)^{1/3} + s(x)^{1/2}).    (5.8)
It represents the simplest expression that uses the information contained in the
surface area and the volume.
Weighted Geometric Average Collision Diameters
A second alternative introduced was the geometric mean of the equivalent volume
and surface diameters
    c(x) = v(x)^{1/6} s(x)^{1/4}.    (5.9)
This is a member of the family of weighted geometric averages
    c(x) = v(x)^a s(x)^b,    (5.10)
which satisfy
    3a + 2b = 1.    (5.11)
These turn out to have an interesting connection to assumed fractal dimensions:
consider an idealised aggregate soot particle of volume V and surface area S con-
sisting of np spherical primary particles of diameter dp in point contact with each
other (the model used in [154]). Solving for np and dp gives
    dp = 6V / S    (5.12)

    np = S^3 / (36π V^2)    (5.13)
The standard fractal relationship, written here in terms of the collision diameter
C and the aggregate volume V, is

    np = k (C / dp)^D    (5.14)

where k is the fractal prefactor and D the fractal dimension. One then finds [196,
equation 4]

    C = 6 (36πk)^{−1/D} V^{1−2/D} S^{3/D−1}.    (5.15)
For a general particle x this becomes

    c(x) = k^{−1/D} v(x)^{1−2/D} s(x)^{3/D−1},    (5.16)

which, for k = 1, is in the form of (5.10) and satisfies (5.11).
Note that, under the k = 1 assumption (also made by Zucca et al. [244]),
using the equally weighted geometric mean collision diameter (5.9) is the same
as assuming a fractal dimension of 2.4, and a fractal dimension of 1.8 implies
a = −1/9 and b = 2/3. The negative value of a is somewhat counter-intuitive but
cannot easily be dismissed because of the extensive reports of soot particles with
a fractal dimension of 1.8, some of which were referred to in §5.1.
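The correspondence between an assumed fractal dimension and the exponents in (5.10) can be checked with exact arithmetic. A small sketch (illustrative only), using (5.16) with k = 1:

```python
from fractions import Fraction

def geometric_average_exponents(D):
    """Exponents a, b in c(x) = v(x)^a s(x)^b implied by fractal dimension D
    with prefactor k = 1, from (5.16); 3a + 2b = 1 then holds identically."""
    D = Fraction(D)
    a = 1 - Fraction(2) / D
    b = Fraction(3) / D - 1
    assert 3 * a + 2 * b == 1  # constraint (5.11)
    return a, b
```

D = 12/5 (2.4) returns (1/6, 1/4), the equally weighted geometric mean (5.9); D = 9/5 (1.8) returns (−1/9, 2/3).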
Fractal Prefactor
Setting the fractal prefactor equal to 1 makes (5.14) consistent for a single sphere,
but experimental results [195], simulated aggregates [148] and other theoretical
considerations [106] lead to larger values of k. Oh and Sorensen refer to a number
of different values for k in their introduction to [148]; their analysis then shows
that values of k will be higher for aggregates with overlap between the primary
particles.
The fractal relationship (5.14) is often expressed using twice the radius of
gyration, 2Rg, rather than the collision diameter C. For a sphere

    C = (5/3)^{1/2} × 2Rg    (5.17)

suggesting that reported values of k (which vary from near 1 [148] to at least 7
[118]) should be increased by 30% for use here. A simple test for the influence
of k on simulation results will be reported below using k = 1 and k^{1/D} = 2 with
a linear interpolation to preserve the correct value for spherical particles. The
interpolation is between k^{1/D} = 2 for np ≥ 10 and k^{1/D} = 1 for np = 1, giving

    k^{1/D} = (np + 8)/9  for 1 ≤ np ≤ 10,   k^{1/D} = 2  for np > 10.    (5.18)
More complex approaches have also been used [172], where the fractal dimension
was interpolated as well as the prefactor to reach the correct limit for a single
spherical particle.
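The interpolation (5.18) amounts to the following (an illustrative sketch; the function name is hypothetical):

```python
def k_pow_inv_D(n_p):
    """Interpolated k^(1/D) from (5.18): 1 for a single sphere, 2 for more
    than 10 primary particles, linear in the number of primaries between."""
    if n_p <= 10:
        return (n_p + 8) / 9.0
    return 2.0
```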
5.3.2 Radius of Curvature
Physical Considerations
The soot model [8] implies that particles form as hard spheres when two pyrene
molecules coalesce to form a ball of diameter 0.88 nm. Electron micrographs
[18, 38, 149, 194] show soot particles are composed of far fewer primary particles
than would be the case if all newly incepted particles remained distinct and have
a much lower surface to volume ratio than 0.88 nm diameter spheres. Therefore
any model for surface reactions must lead to a decrease in the surface to volume
ratio of particles and some kind of merging of primary particles [15]. A model of
uniform surface growth on the free surface has been proposed by Balthasar and
Frenklach [15], leading to aggregates of intersecting spheres. This model may be
varied by concentrating reactions around the points of contact between primary
particles [141], but both variations treat the interior of particles as homogeneous.
Analysis of high resolution TEMs suggests that the assumption of homogeneity
will have to be removed when more precise data and realistic models become
available.
Bivariate Modelling
In the context of the bivariate model for soot particles such surface reactions (and
any that remove mass) must be defined by the changes they cause in particle mass
and surface area. The particle mass change is obvious—it is the mass of the
deposited molecule—and the volume change is simply this value divided by the
density. For a sphere (initial radius R, surface area S viewed as functions of
volume V ) the change in surface area is
    dS/dV = 2/R.    (5.19)
The relationship between the volume and surface is controlled by the surface cur-
vature. For a general particle shape the value of R which makes (5.19) true at
a point on the particle surface is known as the radius of curvature (for a general
surface in a three-dimensional space there are two radii of curvature at each point,
for simplicity they are assumed to be equal in this work).
The correct expression for the radius of curvature of non-spherical particles
in the bivariate model considered here is not at all obvious. As discussed above,
the model for curvature should cause aggregate particles to become rounder. The
assumption of rounding was made more precise by dividing it into the following
assumptions:
1. spherical particles must remain spherical in the limit of small volume incre-
ments;
2. all particles must be geometrically possible, in particular the surface to vol-
ume ratio must be achievable;
3. non-spherical particles must become rounder during surface growth and ox-
idation.
Note that (3) can be viewed in a statistical sense; it is not necessary that every
instance of every surface reaction increases roundness.
One consequence of these conditions is that at least two radii of curvature
(these are not the principal radii of curvature from differential geometry) are
required for non-spherical particles, one to use when ∆V > 0 and the other when
∆V < 0. The following curvature radii were used in this work; the subscript init
denotes the state of the particle immediately before the mass change:

    Rgr = (Sinit / 4π)^{1/2}    (5.20)

for growth processes and

    Rox = 3 Vinit / Sinit    (5.21)

for oxidation processes.
In the scaled, dimensionless style of §5.2.4, by (5.19) one has

    rgr(x) = s(x)^{1/2}    (5.22)

and

    rox(x) = v(x) / s(x).    (5.23)

So for ∆v > 0

    ∆s = (2/3) ∆v s(xinit)^{−1/2}    (5.24)

and for ∆v < 0

    ∆s = (2/3) ∆v s(xinit) / v(xinit).    (5.25)
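The update rules (5.24) and (5.25) can be sketched as follows (illustrative only, not the thesis code). Euler-stepping many small growth increments on an initially spherical particle keeps s ≈ v^{2/3}, i.e. assumption (1) above holds in the small-increment limit.

```python
def surface_area_change(v_init, s_init, dv):
    """Scaled surface area change for a scaled volume change dv, using the
    growth radius (5.22) for dv > 0 and the oxidation radius (5.23) otherwise."""
    if dv > 0:
        return (2.0 / 3.0) * dv * s_init ** -0.5   # (5.24)
    return (2.0 / 3.0) * dv * s_init / v_init      # (5.25)

# a newly incepted x0 particle has v = s = 1 in scaled units
v, s = 1.0, 1.0
for _ in range(100000):
    s += surface_area_change(v, s, 1e-4)
    v += 1e-4
# for a sphere in these units s = v**(2/3), which the Euler path tracks closely
```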
Other definitions are possible and numerical tests of model 4, the simpler case
of all length scales equal, are reported below.
The general ordering of the length scales introduced is

    rox(x) ≤ v(x)^{1/3} ≤ c(x) ≤ rgr(x).    (5.26)
Values for a couple of typical particle sizes (a smaller particle from low in the
flame and a larger particle from high in the flame; see figures 5.5 & 5.7) are
given in table 5.1. As expected the smaller particle, of a type found in a region
with high surface growth rates, has a much smaller range of length scales than the
particle that has experienced prolonged coagulation with little surface growth in
the post-reaction zone of the flame.

Table 5.1: Length scales for typical JW10.673 particle sizes

volume       surface area   2Rox    vol equiv    collision diam / nm     2Rgr
/ cm^3       / cm^2         / nm    diam / nm    model 2    model 3      / nm
1 × 10^-17   3 × 10^-11     20.0    26.8         28.8       28.8         30.9
1 × 10^-15   1.3 × 10^-9    46      124          147        159          203
5.4 Model Comparison
5.4.1 Bulk Properties
To investigate the significance of the modelling assumptions, bulk properties of
the particle populations predicted using the different models were compared for
the 10 bar laminar premixed ethylene flame JW10.673 [93], which was also the
focus of [14]. Results are shown in figures 5.1 & 5.2 as functions of Height Above
the Burner (HAB) and a summary of the different models, along with the numbers
used to refer to them in the legends of the figures, is given in table 5.2.
Monte Carlo methods introduce some random noise into the results and in
figures 5.1–5.4 the values shown are averaged over 20 realisations of the Markov
processes. The estimated 95% confidence intervals for the volume fraction and
surface area concentration are less than ±1% of the plotted average values.
The results shown in figures 5.1 & 5.2 indicate that the differences between the various models for particle shape are generally smaller than the difference between the spherical particle model and the closest non-spherical model; this is confirmed by inspecting other moments of the mass and surface distributions (not plotted). Two pairs of models lead to particularly close results. The first pair were models 2 & 7, both of which are omitted from the plots for clarity as their curves were consistently 5% below those for model 3 in both plots. The second almost indistinguishable pair were models 3 & 6; only data for model 3 are plotted in this chapter. If these
[Figure 5.1: JW10.673 soot volume fraction (soot volume fraction f_v / 10^-6 against HAB / cm; curves for models 1, 3, 4, 5 and experimental data).]
[Figure 5.2: JW10.673 particle surface area concentration (surface area per unit volume / cm^2 cm^-3 against HAB / cm; curves for models 1, 3, 4, 5).]
Table 5.2: Summary of simulated models
model name c(x) = rgr (x) = rox (x) =
1 spherical v(x)13 v(x)
13 v(x)
13
2 Balthasar(2.7375× d (x)−
s(x)12
v(x)s(x)
0.825)× v(x)13
3 geometric mean v(x)16 s(x)
14 s(x)
12
v(x)s(x)
4 — v(x)16 s(x)
14 c(x) c(x)
5 Df = 1.8 v(x)−19 s(x)
23 s(x)
12
v(x)s(x)
6 arithmetic 12
(v(x)
13 + s(x)
12
)s(x)
12
v(x)s(x)
7 Mitchell equation (5.6) s(x)12
v(x)s(x)
8 —model 5 with
s(x)12
v(x)s(x)prefactor from (5.18)
were the only models available one would conclude that the exact choice of col-
lision diameter model was not significant provided one did not assume particles
were spherical. In [157] it was noted that C2H2 addition was the dominant process
in soot growth in two flames similar to the one studied here. According to the soot
chemistry model [8, 207], the rate of C2H2 addition is proportional to particle sur-
face area, hence all the non-spherical models predict larger soot volume fractions
than the spherical particle model.
However, the results for the assumed fractal dimension, model 5, show a con-
siderable deviation from all the other collision diameter models. Significant differ-
ences are also seen for model 4 and these will be discussed below. Model 5 does
not have a noticeable effect on number density compared with the other models
(results for number density are not plotted but, after the initial spike, they differ
from the results for the geometric mean collision diameter model by less than 5%),
however, it leads to a significant reduction in surface area and in the soot volume
fraction. The reduction in soot volume fraction predicted by model 5 compared
with the other non-spherical models appears to be due to enhanced OH oxidation rates. The soot chemistry model treats OH oxidation as a collision-controlled process with a rate proportional to the square of the particle collision diameter. Particle collision diameters are generally much larger under model 5 than under the other models, so the OH oxidation rate is also higher. This explanation was verified by performing simulations with model 5 modified to calculate the OH oxidation rate with the equivalent volume sphere diameter in place of the particle collision diameter. The soot volume fraction calculated for the modified model was within 1% of that calculated using model 3. This illustrates the importance
of good models for particle shape in order to predict correctly reactions between
particles and the surrounding gas phase.
To test the significance of the fractal prefactor discussed in §5.3.1, model 8
was simulated. For JW10.673 the values of soot volume fraction and surface area
were found to lie between those for models 3 and 4, an increase of one third over
model 5. However, the effect on the predicted number of particles was negligible.
Additional calculations were carried out for the 10 bar laminar premixed ethy-
lene flame JW10.60 [93] and soot volume fraction results are plotted in figure 5.3.
Coupled with the very small differences in the particle numbers predicted (not plotted) by models 5 and 8, these data confirm the observation made above that the main
effect of the collision diameter model is to control the OH oxidation rate, which is
particularly high in JW10.60. Since coagulation rates are more sensitive to colli-
sion diameter in the free molecular regime [99, 245] the 1 bar flame JW1.69 [93]
from the same data set as JW10.60 and JW10.673 was simulated. This flame has
significant nucleation at all heights above the burner and therefore a continuous
supply of small particles for coagulation. For this flame particle number, shown in
figure 5.4, was much more sensitive to the change from model 5 to model 8 than
the soot volume fraction. Therefore at lower pressures and when small particles
are present, the effect of collision diameter on coagulation becomes significant.
As mentioned in §5.3.2 it is necessary to investigate the importance of the
models for particle curvature and hence results for model 4 are included in fig-
ures 5.1, 5.2 & 5.3. One sees that as the radii of curvature used are brought closer
to the collision diameter (thus making particles less smooth) the surface reaction
rates increase. In this context it is significant that the soot model [8, 207] treats the
[Figure 5.3: JW10.60 soot volume fraction (soot volume fraction f_v / 10^-8 against HAB / cm; curves for models 1, 3, 4, 5, 8 and experimental data).]

[Figure 5.4: JW1.69 number density (particle number density / cm^-3, log scale, against HAB / cm; curves for models 1, 5, 8).]
rate of oxidation by OH radicals, which is the main oxidation process, as depen-
dent on the collision diameter of particles not their surface area. Since oxidation
by O2 is not a very significant process in the simulated systems [157] this means
that the main effect of increased particle surface area is an increase in the rate of
acetylene addition, which can be seen in the soot volume fractions in figures 5.1 & 5.3.
The parallel increase in surface area as the particles get larger can be noted from
figure 5.2. Predicted soot volume fractions were computed using models 1, 3 and
4 at the top of the flames JW10.673, JW10.60, JW1.69 introduced above and also
for the flames A1 and A3 from [240]. With the exception of JW10.60, the difference between the values predicted by models 1 and 3 was at least three times greater than the difference between the values predicted by models 3 and 4; the same trend was observed at all distances from the burners. JW10.60 is unique
among the flames considered in having a fall in soot volume due to oxidation after
the reaction zone, but it is not entirely clear why this should make the results so
sensitive to the surface curvature model. In general, while the uncertainty about
surface curvature is a concern, it does not present a fundamental obstacle to the ap-
plication of the models proposed here to flames with weak oxidation. More work
is clearly needed to understand what happens in strongly oxidising situations.
Observed soot volume fraction results are included in figures 5.1 & 5.3 to
give some idea of the accuracy of the results. One of the main purposes of the
current work was to investigate how much of the disagreement between simulated
and observed values might be due to inadequate models of soot particle structure.
The results in this section show that, where they exist, significant disagreements
between simulated and observed results will not be resolved by new models for
particle structure alone. Such issues will have to be addressed with improved
chemistry models [156, 217], which are an active topic of research [95, 183, 219],
along with better models for particle structure.
5.4.2 Particle Size Distributions
The particle distributions simulated with model 3 at 0.4 cm, 0.6 cm and 4.2 cm
above the burner surface are shown in figures 5.5, 5.6 & 5.7. The distributions are
shown in two ways—as a scatter plot in surface-volume space and as a density on
the volume axis. The densities are estimates for
d
dVN(V ) (5.27)
where N (V ) is the number of particles per cm3 with volume ≤ V . They were
calculated from the discrete simulation data using a Gaussian blurring technique;
specifically, with the ‘density’ function of the computer statistics package R [162].
More details can be found in the online documentation and in [199, §5.6]. The
distributions for the other models are qualitatively the same, but with quantitative
differences at large particle sizes.
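The density estimate (5.27) can also be reproduced outside R. The following is a minimal Gaussian-kernel sketch; unlike R's 'density' function, the bandwidth here must be supplied by the user rather than being chosen automatically:

```python
import math

def density_estimate(samples, grid, bandwidth):
    """Gaussian kernel estimate of a probability density on `grid` from a
    list of particle volumes; each sample is smeared by a normal kernel of
    standard deviation `bandwidth` and the result integrates to one."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((g - s) / bandwidth) ** 2)
                       for s in samples)
            for g in grid]
```

To obtain dN/dV in the units of figures 5.5–5.7, the normalised estimate is multiplied by the particle number concentration represented by the sample.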
[Figure 5.5: JW10.673 distribution simulated with model 3, 0.4 cm above burner (particle surface area / 10^-10 cm^2 against particle volume / 10^-17 cm^3, with volume distribution density / 10^27 cm^-6).]
The most important feature of figures 5.5, 5.6 & 5.7 is that, in all three cases, the distribution is concentrated on a line. From the slopes of these lines (the intercept terms were negligible) one can infer the mean primary particle diameter by assuming all soot particles are composed of primary particles in point contact. Diameters calculated in this way are given in table 5.3.
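Under the point-contact assumption, an aggregate of N primaries of diameter d_p has S = Nπd_p^2 and V = Nπd_p^3/6, so the slope of the surface–volume line is dS/dV = 6/d_p. The inference can be sketched as follows (synthetic data, not the thesis ensembles; a zero-intercept fit is used here, whereas the fits in table 5.3 also included the negligible intercept terms):

```python
import math

def primary_diameter(volumes, areas):
    """Mean primary particle diameter from a zero-intercept least-squares
    fit S = slope * V; point contact implies slope = 6 / d_p."""
    slope = sum(s * v for s, v in zip(areas, volumes)) / sum(v * v for v in volumes)
    return 6.0 / slope

# synthetic aggregates built from d_p = 24 nm primaries (2.4e-6 cm)
d_p = 2.4e-6  # cm
vols = [n * math.pi * d_p ** 3 / 6 for n in (2, 5, 11, 20)]
surfs = [n * math.pi * d_p ** 2 for n in (2, 5, 11, 20)]
print(primary_diameter(vols, surfs) * 1e7)  # ≈ 24 nm
```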
[Figure 5.6: JW10.673 distribution simulated with model 3, 0.6 cm above burner (particle surface area / 10^-10 cm^2 against particle volume / 10^-16 cm^3, with volume distribution density / 10^25 cm^-6).]
[Figure 5.7: JW10.673 distribution simulated with model 3, 4.2 cm above burner (particle surface area / 10^-9 cm^2 against particle volume / 10^-15 cm^3, with volume distribution density / 10^24 cm^-6).]
Table 5.3: Primary particle diameters calculated by linear regression for JW10.673

  Diameter / nm
  height above burner:  0.4 cm   0.6 cm   1.4 cm   3.4 cm
  model 2               24       31       38       39
  model 3               24       31       38       40
  model 4               22       30       37       38
  model 5               24       30       37       38
  model 8               23       30       36       37
For each height above the burner the primary particle diameters given in ta-
ble 5.3 are quite similar (within 5% of the value for model 2), even for model 4
which leads to a noticeably different soot volume fraction (see figure 5.1). The
closeness of the values would make testing the models by comparing primary
particle diameters to those observed by electron microscopy very difficult. This
supports the view that any reasonable approximation for the aggregate structure
of soot particles is sufficient given the current relative imprecision of the chemical
mechanisms involved in soot formation and growth.
5.4.3 Individual particle behaviour
To gain an understanding of why different models might lead to similar results,
histories of particles in a small volume of gas, moving through the laminar pre-
mixed ethylene flame JW10.673 ([93]) used previously, were extracted from the
simulations. In figure 5.8 the time evolution of the shape descriptor (5.2) is plotted
for two of these particles. Figure 5.10, tracks the size of the same two particles
in the simulation, which used model 2. Figure 5.9, gives a more detailed view
of the very active early life of the particles until just after the point at which they
coagulate with each other (this point is circled on the plot).
On the more detailed plot one sees abrupt upward jumps in the ratio when the
particle undergoes coagulation followed by periods of smooth decrease as surface
growth makes the particle more round according to §5.3.2. The features seen in
[Figure 5.8: Evolution of mean shape descriptor for JW10.673, simulated with model 2 (shape descriptor against height above burner / cm, log scale, for particles A and B; the early x-axis region is shown in more detail in figure 5.9).]
[Figure 5.9: Detail of mean shape descriptor for JW10.673, simulated with model 2 (shape descriptor against height above burner / cm, 0.16–0.22 cm, for particles A and B).]
[Figure 5.10: JW10.673, size history for particles from figure 5.8 (volume equivalent diameter / nm against height above burner / cm, log scale).]
figure 5.8 are also seen in similar plots for particles simulated with other models.
All three plots show an early period of rapid activity in which the shape de-
scriptor undergoes damped oscillations and size grows steadily. The final shape
and size of the particles however, are largely determined by a few coagulation
events after the initial activity has subsided. Figure 5.10 gives a very clear view of
how chemical reactions with the surface of the soot particles all occur relatively
close to the burner face as shown by the smooth growth in particle size. Figure 5.9
illustrates the way that these surface reactions largely negate the shape changing
effects of the coagulation that occurs up to 0.5 cm. Because of this cancelling out,
a detailed model of the rapid processes may not be necessary for many purposes.
For flames similar to JW10.673 it may be sufficient to predict the outcome of the
highly active early phase of the flame, even if the details within that zone are not
resolved correctly. A modest initial peak in the mean number of primary particles
per aggregate has been found in simulations of titania nanoparticles [197], which
is consistent with a situation where some particles have just formed by aggrega-
tive collisions and others have had time for rapid surface growth and restructuring
to take effect. Some parametric studies of the competition between particle restructuring and coagulation have been reported [100]; they differ from the present work in assuming particle restructuring is a constant-mass process. Unfortunately, these parametric studies suggest that predicting the location of the equilibrium between coagulation and restructuring may be difficult, because the results are very sensitive to the parameters.
That the fine structure of particles remains constant after the initial period of
activity can also be seen from the primary particle diameters in table 5.3. The
mean primary particle diameter changes noticeably over the first 1.5 cm of the
flame but then hardly changes at all for the remainder of the flame. This is in accordance with figures 5.8 & 5.10, which show that surface processes that round out particles are only significant early in the flame; further from the burner, all that happens is coagulation, in which both surface and volume are conserved.
Shape Descriptor
Figure 5.11 shows the shape descriptor of the first particle to be incepted and an
average taken over the first nineteen particles to be incepted in a single simula-
tion of JW10.673 using model 2. The population average calculated over twenty
simulations is also shown. Unsurprisingly, the first particles to form undergo more
coagulation and hence develop a less spherical structure than particles which form
later.
The difference between the shape descriptor averaged over the first nineteen
particles and that averaged over the complete particle ensemble (between one and
two thousand computational particles) explains some of the features of [14, fig-
ure 3]. In that paper Balthasar and Frenklach observe that for the flame JW10.673
the average shape descriptor of their “collector particle” (one of the first particles
to be incepted, see [135] for details) is much larger than the ensemble average
calculated with their MoMIC approximation which is about 0.7 after 0.1 s and
reaches about 0.72 after 0.2 s. These results tend to confirm the accuracy of their
MoMIC approximation even though it assumes all particles have the same shape
descriptor (which changes with time) and show that their Monte Carlo results are
only applicable for the first few particles to be incepted. It should, however, be
[Figure 5.11: Mean shape descriptors for JW10.673, simulated with model 2 (shape descriptor against gas residence time / s; first particle from 1 run, average of the first 19 particles from 1 run, and ensemble average over 10 runs).]
noted that the flame JW10.673, for which this comparison has been performed, has a unimodal particle size distribution, and that MoMIC tends to perform well in such cases but struggles with flames whose size distributions are bimodal.
5.4.4 Future Experimental Validation
The ultimate test of the models presented in this work is the foundational question of modern science—does the model offer a predictive description of reality?
Three ways in which the simple models advanced in the present work might even-
tually be tested are:
• by calculating the volume, area and primary particle diameters of soot ag-
gregates using TEM [194] or SEM [39] methods to compare with the dis-
tributions generated in the present work. This would provide a very direct
test of the models, and since it could be done at different points in flames
should be able to identify the onset of any divergence between the models
and the experiments.
• by using the particle collision diameter models introduced in the present
work to calculate particle mobility according to the parameterisation of
[108, 109] and comparing the results with SMPS [122, 238] measurements.
One should be aware that with such an approach it might prove rather diffi-
cult to identify the effects of inaccuracies in flame chemistry.
• by using the particle length scale models to estimate the distribution of radii
of gyration and thus to predict scattering results [185], whether of visible
band light, x-rays [63] or neutrons [132, 241].
5.5 Summary
Predictions of simple, bivariate models of aggregate soot formation, which reflect
the non-spherical nature of soot particles, have been found to be quantitatively
different from those of a single variable model, which assumes particle sphericity.
Numerical investigations indicate that several collision diameter models derived
in different ways lead to very similar bivariate distributions and, in particular, the
model presented in [14] is consistent with some simple models suggested here.
Therefore, the simple particle shape models considered offer a useful starting
point for more detailed modelling of soot surface chemistry. The importance of in-
cluding particle shape is clearly illustrated by the difference between the spherical
particle model and all the extensions which conserve surface area during coagula-
tion. More work on surface curvature, to understand how particles change shape
as they undergo surface reactions and whether either the detailed aggregate model
or a bivariate model can capture this, is needed.
Chapter 6
Explicit Statistical Weights
This chapter introduces an alternative to the treatment of coagulation that is at
the core of the DSA and its LPDA acceleration. In chapter 4 it was seen that
the performance of LPDA was limited by the need to simulate the non-deferred
processes. In chapter 5 it was noted that all surface reaction processes could be
deferred so that only particle inception and coagulation had to be treated accord-
ing to the basic DSA. Therefore, statistically weighted particle methods, which
have been successfully used to accelerate other simulations, are developed. Two
weighted particle algorithms are implemented and validated. They are compared
with each other and with DSA/LPDA simulations that use particle doubling as a
variance reduction technique. One weighted algorithm is found consistently to
outperform the other by a significant amount. The better weighted algorithm is
found to offer broadly similar performance and accuracy to direct simulation with
particle doubling.
6.1 Background
In the basic Direct Simulation Monte Carlo (DSMC) approach to the Smolu-
chowski and Boltzmann equations, every computational particle represents the
same number of physical particles. The accuracy with which a quantity is com-
puted depends on the number of computational particles used. In spatially re-
solved gas simulation, regions with few physical particles are not accurately sim-
ulated. To deal with this problem Rjasanow and Wagner [169] introduced a
Stochastic Weighted Particle Method (SWPM) in which computational particles
were tagged with a statistical weight—the number of physical particles they repre-
sent. Using this technique a larger number of computational particles with smaller
statistical weights could be used in regions of low density than was the case with
basic DSMC, without increasing the time spent simulating regions of higher den-
sity. As a result, computational accuracy could be controlled separately for each
spatial cell (for example kept approximately equal at all locations) and indepen-
dently of the values of physical properties being simulated.
A similar kind of difficulty can occur in spatially homogeneous particle coag-
ulation problems. For these problems it is found that the resolution of the high end
of the particle size distribution in a DSMC simulation can be inadequate because
there are very few computational particles and thus the statistical noise is great.
A particle weighting designed for such coagulation problems was proposed by
Eibeck and Wagner [48], in which the statistical weight of a computational parti-
cle was a function of the physical particle it described and so there was no need to
explicitly tag the computational particles with their statistical weights. The par-
ticular statistical weight used was the inverse of the physical particle mass, which
has the useful consequence that each computational particle represents the same
mass of physical particles per unit volume. In systems with constant total mass
one would therefore expect the number of computational particles to remain con-
stant in mean. By some judicious algebra the Mass Flow Algorithm (MFA) was
derived so that the number of computational particles was constant in an absolute
rather than just mean sense [48].
The MFA effectively redistributes computational particles over the size space,
placing more computational particles at larger particle sizes and fewer at smaller
sizes as compared with the constant weighting DSMC method. Consequently
the high end of the particle size distribution is calculated more accurately at the
expense of the low end calculations. A more general consideration of statisti-
cal weights as functions of the physical particle properties was undertaken by
Kolodko and Sabelfeld [97], whose comparisons included the special case of the
Mass Flow Algorithm (MFA) [48], but the greater generality came at the price of
coagulation events that sometimes increased and sometimes decreased the number
of computational particles.
Wells and Kraft [214] applied the MFA to a coagulation (and sintering) prob-
lem in nanoparticle dynamics. Consideration of systems of engineering interest
inevitably led to a desire to simulate systems which exchange mass with their
surroundings, and thus an MFA cannot be expected to maintain a constant number of computational particles, even in mean, without continual resampling of
the distribution. Nevertheless in [138, 139] MFA was successfully extended to
systems with particle inception and where individual particle masses changed by
interaction with the environment. The treatment of this last class of processes—
surface growth—is set out in [139]. In essence, surface growth was simulated as
two separate processes. In one, the mass of physical particles referred to by a
computational particle is changed and in the other, new computational particles
are introduced to the system to account for the increase in mass. Analogous pro-
cesses which remove mass from the system would be treated by having the second
process remove computational particles.
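A toy illustration of this two-process treatment may help. The sketch below is a schematic under the assumption that each computational particle carries weight 1/mass and that the mass increment is smaller than the current mass; it is not the algorithm of [139], which should be consulted for the actual construction. After a particle's mass grows, a copy is inserted with a probability chosen so that the represented number concentration is conserved in mean while the represented mass grows in proportion to the physical growth:

```python
import random

def grow_particle(mass, dm, ensemble, rng=random):
    """Toy surface-growth step under 1/mass weighting (assumes dm < mass).
    The particle's mass increases by dm; with probability
    (new_mass / mass - 1) a duplicate is appended to `ensemble`, so the
    expected total statistical weight 1/mass is unchanged and the extra
    physical mass is carried by the new computational particle."""
    new_mass = mass + dm
    if rng.random() < new_mass / mass - 1.0:
        ensemble.append(new_mass)
    return new_mass
```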
In [36], where the physical motivation was atmospheric aerosols, computa-
tional particles were assigned weights that were not simply functions of the phys-
ical particles they represented. Using operator splitting and deterministic integra-
tion, surface processes were treated by updating the statistical weight of a compu-
tational particle so that it represented the same number of physical particles both
before and after the surface events. Coupled with the MFA approach to coagu-
lation this meant that the only change in the number of computational particles
during simulations was due to processes of particle inception.
Maintaining a constant number of computational particles, or at least maintaining a lower bound, is essential for controlling the variance of the Monte Carlo
sample solutions [97]. This issue is discussed in §2.3.1, where the procedure of
ensemble doubling, used in the previous chapters of this thesis to avoid catas-
trophic losses of precision, is introduced. The work of Eibeck and Wagner [48],
referred to above, took the more indirect and mathematical approach of formu-
lating a new stochastic process that could solve the same coagulation problems
as before, but without the drop in computational particle numbers associated with
basic direct simulation. Variance control was also addressed by Haibo et al. [76],
who introduced statistical weights to arrive by a slightly more informal route at
the ‘w2’ method (independently) derived in this chapter. The same authors have
introduced a further, similar method for the same purpose [242].
6.2 General Approach
The work in this chapter is aimed, like the rest of this thesis, at solving soot for-
mation problems in laminar premixed flames according to the model in §1.4. Pre-
vious chapters have established the power of the LPDA for the surface reactions
in the soot model. This chapter therefore starts from the basic LPDA formulation
of the soot population problem (1.4), first given in (3.9) and repeated here—find
measures λt to solve
\[
\begin{split}
\frac{d}{dt}\int_{x\in E}\phi(x)\,\lambda_t(dx) ={} & \int_{x\in E}\phi(x)\,I_t(dx) \\
&+ \sum_{l\in U'}\int_{x,\xi\in E}\left[\phi\!\left(g^{(l)}(\xi)\right)-\phi(x)\right]\beta^{(l)}_t(\xi)\,P(R_t x = d\xi)\,\lambda_t(dx) \\
&+ \frac{1}{2}\int_{x,y,\xi,\zeta\in E}\left[\phi(\xi+\zeta)-\phi(x)-\phi(y)\right]K_t(\xi,\zeta) \\
&\qquad P(R_t x = d\xi)\,P(R_t y = d\zeta)\,\lambda_t(dx)\,\lambda_t(dy).
\end{split}
\tag{6.1}
\]
Progress is then made by exploiting weighted particle methods within this frame-
work. The use of the update operator from equation (3.6) in order to make mea-
sures such as λ physically interpretable is implicit in the following sections.
6.3 Weighting
Let W be a set and Ω : W → R+ a map that defines how many physical particles
an element of W indicates. In practice W = R+ and Ω = id would seem to be the
natural choices. Let 𝒲 be a σ-algebra on W which makes Ω measurable. Note
that it is not possible to talk about statistical weights at this stage since there are
no statistics—(3.9) is a deterministic equation. However, one can look for maps ν
analogous to λ but taking values that are measures on W × E and satisfying
\[ \int_{W\times E} \Omega(w)\,\phi(x)\,\nu_t(dw,dx) = \int_E \phi(x)\,\lambda_t(dx) \tag{6.2} \]
for all t ∈ [0, T ].
It is convenient to extend the R_t to act on W × E and to extend the I_t to be measures on the product σ-algebra on W × E. The extensions, R̄_t and Ī_t, must have the same physical interpretation as the original forms, so one requires
\[ \int_E \phi(x)\,I_t(dx) = \int_{W\times E} \phi(x)\,\Omega(u)\,\bar{I}_t(du,dx) \tag{6.3} \]
and
\[ \Omega(u)\int_E \phi(\xi)\,P(R_t x = d\xi) = \int_{W\times E} \Omega(w)\,\phi(\xi)\,P\!\left(\bar{R}_t(u,x) = (dw,d\xi)\right) \quad \forall\,(u,x)\in W\times E. \tag{6.4} \]
The simplest form for R̄_t is
\[ \bar{R}_t(u,x) = (u, R_t x) \tag{6.5} \]
and this form will be used throughout this work. It is clearly the most natural
extension—R represents the effects of processes that act on individual particles
independently and therefore would not be expected to alter the number of physical
particles being modelled.
Extensions of¹ β and K so that their final arguments can be measures on W × E rather than just E are also needed. If ν is a measure on W × E then let λ be the measure on E given by
\[ \lambda(A) = \int_{W\times A} \Omega(u)\,\nu(du,dx) \quad \forall\, A \in E \tag{6.6} \]
and
\[ \beta(x;\nu) := \beta(x;\lambda), \tag{6.7} \]
\[ K(x,y;\nu) := K(x,y;\lambda). \tag{6.8} \]
A simple extension of I_t is also possible:
\[ \bar{I}_t(du,dx) := \frac{\delta_w(du)}{\Omega(w)}\, I_t(dx). \tag{6.9} \]
This approach will be used in this chapter but it should be noted that w may be
chosen in arbitrary ways, for example varying with time and with x [170].
6.3.1 Dynamics of the New Measure
To proceed to a stochastic particle algorithm one requires the dynamics of ν from
(6.2). Consider
\[ \psi : W\times E \to \mathbb{R};\quad (w,x) \mapsto \Omega(w)\,\phi(x) \tag{6.10} \]
so
\[ \int_{W\times E} \psi(w,x)\,\nu_t(dw,dx) = \int_E \phi(x)\,\lambda_t(dx) \quad \forall\, t \tag{6.11} \]
and in particular
\[ \frac{d}{dt}\int_{W\times E} \psi(w,x)\,\nu_t(dw,dx) = \frac{d}{dt}\int_E \phi(x)\,\lambda_t(dx). \tag{6.12} \]
By expanding the right hand side of (6.12) according to (3.9), repeatedly substituting (6.11) and assuming ν_t({(u, x) : Ω(u) = 0}) = 0 for all t, one has

¹The labels (l) on β and g will be omitted from the following presentation to avoid the notation becoming too cluttered.
\[ \frac{d}{dt}\int_{W\times E} \psi(u,x)\,\nu_t(du,dx) = J_1 + J_2 + J_3 \tag{6.13} \]
where
\[ J_1 := \int_{(u,x)\in W\times E} \psi(u,x)\,\bar{I}_t(du,dx), \tag{6.14} \]
\[ J_2 := \int_{(W\times E)^2} \left[\frac{\psi(w,g(\xi))}{\Omega(w)} - \frac{\psi(u,x)}{\Omega(u)}\right] \beta_t(\xi;\nu_t)\,\Omega(u)\, P\!\left(\bar{R}_t(u,x) = (du',d\xi)\right) \nu_t(du,dx) \tag{6.15} \]
and
\[
\begin{split}
J_3 := \frac{1}{2}\int_{(W\times E)^4} & \left[\frac{\psi(w,\xi+\zeta)}{\Omega(w)} - \frac{\psi(u,x)}{\Omega(u)} - \frac{\psi(v,y)}{\Omega(v)}\right] K(\xi,\zeta,\nu_t)\,\Omega(u)\,\Omega(v) \\
& P\!\left(\bar{R}_t(u,x) = (du',d\xi)\right) P\!\left(\bar{R}_t(v,y) = (dv',d\zeta)\right) \nu_t(du,dx)\,\nu_t(dv,dy).
\end{split}
\tag{6.16}
\]
In (6.15) & (6.16) the variables w appear as free parameters, so (6.13) holds whatever values of w (provided Ω(w) > 0) are substituted into J_2 and J_3. Therefore one may choose the rules for calculating the w from the u′ and v′ to optimise the stochastic particle algorithm that is being derived. To derive such an algorithm one wants to express all the integrands in (6.14)–(6.16) in the form Σ_i ψ(u_i, x_i) − Σ_j ψ(v_j, y_j), multiplied by some rate expression, and to integrate over all the (v_j, y_j). The differences of sums become the definitions of the jumps of the particle algorithm and the rest of the integral becomes the jump rate. The variables u′ and v′ found in (6.14)–(6.16) are dummy variables, which represent the weight component of R̄_t(u, x) and R̄_t(v, y) respectively.
As stated in (6.5), u′ = u and v′ = v with probability 1, so (6.15) can be simplified by choosing w = u′ = u to give
\[ J_2 = \int_{E\times(W\times E)} \left[\psi(u,g(\xi)) - \psi(u,x)\right] \beta_t(\xi,\nu_t)\,P(R_t x = d\xi)\,\nu_t(du,dx). \tag{6.17} \]
There is more than one way to proceed from (6.16); two approaches that lead to different simulation procedures are given here. Both, by exploiting symmetry in (u, x) and (v, y), yield, for their respective definitions of w,
\[
\begin{split}
J_3 = \int_{E^2\times(W\times E)^2} & \left[\psi(w,\xi+\zeta) - \psi(u,x)\right] K(\xi,\zeta,\nu_t)\,\Omega(v) \\
& P(R_t x = d\xi)\,P(R_t y = d\zeta)\,\nu_t(du,dx)\,\nu_t(dv,dy).
\end{split}
\tag{6.18}
\]
The first approach is to choose w such that
\[ \frac{1}{\Omega(w)} = \frac{1}{\Omega(u)} + \frac{1}{\Omega(v)}. \tag{6.19} \]
(This is a possible choice for at least the simplest choices of W and Ω—consider W = R+ with Ω a positive multiple of the identity; then w = (u^{-1} + v^{-1})^{-1}.) The second approach is to choose w such that
\[ \Omega(w) = \frac{\Omega(u)}{2}, \tag{6.20} \]
which is also possible if W = R+ and Ω is a positive multiple of the identity. The idea of halving the statistical weight has previously been used by Haibo et al. [76], as mentioned in §6.1.
6.3.2 Simulation Algorithms
All the equations considered above can be thought of as deterministic, mean field
equations [4]. Stochastic algorithms are derived (see, for example, [164, §4.6])
by taking (6.13) as defining the generator of a Markov process [90, ch 19], after
introducing a scaling parameter to control the level of discretisation. The relation-
ship of pure jump Markov processes to the mean field equations has been exten-
sively studied in papers such as [35, 49–51, 74, 147] and their references. These
and other papers typically prove that the trajectories of a sequence of pure jump
Markov processes converge in some sense to a solution of the deterministic mean
field equation. The work reported in the present chapter was conceived as a more
practical attempt to improve a computer program already in use for solving soot
problems in chemistry and engineering. As such, investigation of convergence is
confined to numerical tests.
In this work the discretisation level or scaling may be controlled through I_t as follows: choose a sequence (w_N) in W such that Ω(w_N) > 0 and Ω(w_N) → 0; define
\[ \bar{I}^N_t(du,dx) := \frac{\delta_{w_N}(du)}{\Omega(w_N)}\, I_t(dx). \tag{6.21} \]
Then replacing I with I^N and substituting (6.17) and (6.18) for J2 and J3 in (6.13) leads to a sequence of equations

d/dt ∫_{W×E} ψ(u, x) ν_t(du, dx)
    = ∫_{W×E} ψ(u, x) I^N_t(du, dx)
    + ∫_{(W×E)²} [ψ(u, g(ξ)) − ψ(u, x)] β_t(ξ, ν_t) P(R_t(u, x) = (du′, dξ)) ν_t(du, dx)
    + ∫_{(W×E)⁴} [ψ(w, ξ + ζ) − ψ(u, x)] K(ξ, ζ, ν_t) Ω(v) P(R_t(u, x) = (du′, dξ)) P(R_t(v, y) = (dv′, dζ)) ν_t(du, dx) ν_t(dv, dy).   (6.22)
A sequence of generators for pure jump Markov processes can then be derived from (6.22) in the standard way. The rates of the events which make up the Markov processes are given in table 6.1. In table 6.1, and for the rest of the chapter, W = R+ and Ω is the identity mapping. Under reasonable conditions, the trajectories of these processes can be expected, as N → ∞, to converge to solutions of the mean field problem.

Table 6.1: Process summary

∅       −→ (w_N, x)      I^N_t((w_N, x))
(u, x)  −→ (u, g(ξ))     P(R_t(u, x) = ξ) β(ξ) ν((u, x))
(u, x)  −→ (w, ξ + ζ)    K(ξ, ζ, ν) P(R_t x = ξ) P(R_t y = ζ) ν((u, x)) ν((v, y))

Throughout the remainder of the chapter the two rules for calculating w for coagulation products will be denoted 'w1' and 'w2', as specified in table 6.2.

Table 6.2: Coagulation weight rules

w1:  w⁻¹ = u⁻¹ + v⁻¹
w2:  w = u/2
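The two coagulation weight rules of table 6.2 can be written down directly; the following is a minimal Python sketch (the function names are hypothetical and not from the thesis code):

```python
def coag_weight_w1(u: float, v: float) -> float:
    """Rule w1: harmonic combination of the statistical weights, 1/w = 1/u + 1/v."""
    return 1.0 / (1.0 / u + 1.0 / v)


def coag_weight_w2(u: float, v: float) -> float:
    """Rule w2: halve the weight of the first colliding particle; v is unused."""
    return u / 2.0
```

For equal weights u = v both rules give the same product weight u/2; they differ only for unequal weights.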
6.4 Numerical Tests
6.4.1 Initial Validation
Validation of the new weighted algorithms began with problems from chapter 2, for which the entire particle size distribution can be calculated using an ODE solver. Direct solution of the population balance equations is possible because all but the first few thousand size classes can be neglected. Initial tests simulated only inception and coagulation processes; specifically, the conditions were
Figure 6.1: PSD for coagulation and inception test problem. (Plot of particle number density / cm⁻³ against particle size / # C atoms; curves: ODE solution, w1, w1 modified inflow, w2.)
1. physical particle inception rate I_t = 2.63×10¹³ × ((0.05 − t)/0.05)² cm⁻³ s⁻¹,
2. constant temperature of 500 K,
3. constant pressure of 600 bar.
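The time-dependent inception rate in condition 1 is straightforward to evaluate; a small sketch (hypothetical function name, rate in cm⁻³ s⁻¹, with the rate reaching zero at t = 0.05 s):

```python
def inception_rate(t: float) -> float:
    """Physical particle inception rate for the test problem:
    I_t = 2.63e13 * ((0.05 - t) / 0.05)**2  [cm^-3 s^-1],
    a parabola in time that falls to zero at t = 0.05 s."""
    return 2.63e13 * ((0.05 - t) / 0.05) ** 2
```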
The particle size distributions are presented in figure 6.1 and show good agreement
between the ODE solver results and all the stochastic weighted particle meth-
ods. Both rules for the weight of the coagulation product (see §6.3.2, table 6.2)
were used with computational particle inflow proportional to the inception rate
for physical particles, that is, with w from (6.9) constant. These two data sets
are labelled ‘w1’ and ‘w2’ in the legend of figure 6.1. In the nucleation and co-
agulation only case considered here, w1 corresponds to the mass flow algorithm
of [49]. The w1 coagulation rule was also tested with a modified inflow rule; in
this case the rate of inflow of computational particles was constant throughout the
simulation, but the statistical weight of the new computational particles was au-
tomatically adjusted to simulate the physical particle inception rate. Adjustment
of the statistical weight of particles entering a system has previously been used in
simulations of the Boltzmann equation [170].
The statistical noise associated with the methods was also considered; figure 6.2 summarises the results. The 95% confidence interval sizes were calculated from a central limit theorem estimate based on 30 realisations of the Markov chain for each simulation method; each realisation used just under 2¹⁶ computational particles. The results in figure 6.2 indicate that w2 is generally noisier than w1 and that there is little difference between the two inflow methods used with w1. These data are an initial indication that w1 is to be preferred to w2, since fewer realisations of the w1 Markov chain than of the w2 Markov chain would be needed to obtain a confidence interval of the same size.
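The central limit theorem estimate can be sketched as follows (an illustration of this kind of calculation, not the code used for figure 6.2; the function name is hypothetical):

```python
import math


def ci_half_width_percent(samples):
    """95% confidence interval half width for the mean of independent
    realisations, expressed as a percentage of the mean (CLT estimate
    with the normal quantile 1.96)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)  # sample variance
    half_width = 1.96 * math.sqrt(var / n)                 # normal approximation
    return 100.0 * half_width / mean
```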
Further comparisons to the particle size distribution produced by the ODE
solver were performed for a test problem including a surface reaction (pyrene
condensation). Good agreement was found between the weighted particle meth-
ods and the deterministic solution of the size distribution for the limited case for
which this was possible.
6.4.2 LPDA and real flames
Having established that the algorithms and their implementations worked cor-
rectly, in limited cases for which the population balance equations could be solved
directly, testing moved on to premixed laminar flames. For these tests the LPDA
as described in chapter 3 was employed throughout, including for the DSA calcu-
lation used to provide a reference solution. The first flame based test compared
the accuracy of the moments of the soot particle size distribution calculated with
Figure 6.2: Statistical noise for coagulation and inception test problem. (Plot of 95% confidence interval half width / % against particle size / # C atoms; curves: w1, w1 modified inflow, w2.)
the w1 and w2 weightings for the flame JW10.68 [93]. The second moment of the mass distribution is shown in figure 6.3; 95% confidence intervals for the DSA and w1 data are within ±2% of the plotted values, so confidence intervals are only shown for w2. The calculations for the weighted algorithms were performed
Figure 6.3: Second moment of JW10.68 mass distribution. (Plot of second moment of mass distribution / 10⁻²² g² cm⁻³ against height above burner / cm; curves: DSA, w1, w2 mean, w2 confidence bounds.)
with 30 runs using around 2000 computational particles from the end of the inception peak until the end of each simulation. To ensure that no error was introduced by the deferral of processes, all computational particles were updated every time the simulation covered 5 × 10⁻⁴ s of real time. Note the large difference in the statistical variability generated by the two weighting methods: the w2 method leads to a variance for the second moment of the size distribution that is more than 10 times larger than that obtained with the w1 method. The situation with the zeroth and first moments is similar.
The real attraction of stochastic particle methods is that they provide an ex-
plicit estimate of the particle size distribution. As a test case for the distribution
the flame JW1.69 [93] was used. This flame is known to have a bimodal particle
size distribution [16] and therefore to present an interesting test case for the way
in which the w1 weighting method transfers computational effort to larger particle
sizes. The w2 method was not used for this comparison, because the results above
suggest that far more realisations of the w2 Markov chain would be required than
of the w1 Markov chain in order to achieve the same precision. Therefore, to meet
any particular error tolerance, less computer time would be required using the w1
method.
Densities were calculated using the statistical computation package R [162]
by performing Gaussian blurring of the observations with a bandwidth of 0.0245.
The densities presented here were calculated in logarithmic size space; that is, they are estimates of

dN(log₁₀ x)/d log₁₀ x,   (6.23)

where N(log₁₀ x) is the number of particles per cubic centimetre comprising no more than x carbon atoms. Data calculated from 50 repetitions of the w1 Markov chain with just under 2¹³ computational particles are compared to data from high-precision DSA (without LPDA) calculations, which used 30 repetitions with around 2¹⁶ computational particles. The results in figure 6.4 show a very high degree of agreement between the two algorithms; these particular data apply to the top of the flame, about 4.2 cm above the burner.
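The density estimation step can be sketched in Python (an illustration of the Gaussian kernel calculation performed with R; the function name and interface are hypothetical, not those of the code actually used):

```python
import math


def log_density(sizes, weights, grid, bandwidth=0.0245):
    """Gaussian kernel estimate of dN/dlog10(x) on a grid of log10 sizes.
    'sizes' are particle sizes in carbon atoms; 'weights' are the number
    densities (cm^-3) each computational particle represents."""
    logs = [math.log10(s) for s in sizes]
    norm = bandwidth * math.sqrt(2.0 * math.pi)
    density = []
    for g in grid:
        total = 0.0
        for l, w in zip(logs, weights):
            z = (g - l) / bandwidth
            total += w * math.exp(-0.5 * z * z) / norm  # Gaussian kernel
        density.append(total)
    return density
```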
In figure 6.4 large oscillations in the density generated from the w1 data can
be seen for particle sizes between 300 and 10,000 carbon atoms. These oscilla-
tions are a symptom of the way the weighted algorithm transfers computational
resolution to large particle sizes as discussed in the next paragraph and illustrated
in figure 6.5. The weighted method would clearly be rather unsuitable for this flame if the number of particles containing 300–10,000 carbon atoms were the main quantity of interest. In such a case a DSA method with particle doubling [110, 114] as a variance reduction technique should be used if possible. However, as will be seen for other measures of solution accuracy, the weighted method can
Figure 6.4: Physical particle size distribution for JW1.69. (Log-log plot of density of particle number distribution / cm⁻³ against particle size / # C atoms; curves: DSA, w1.)
offer performance as good as or better than that of the un-weighted alternative.
It is also interesting to look at the distribution of computational particles on the
size spectrum, since the number of particles is what controls the precision of the
calculation. In figure 6.5 the normalised densities of the computational particle
distribution for the calculations used for figure 6.4 are plotted. The normalisation
ensures that the area under both the w1 and the DSA curve is 1 (when integrated
against d (log10 x)) and so there are no effects due to the different numbers of
computational particles used with the two algorithms. Figure 6.5 gives a very
Figure 6.5: Relative computational particle distribution for JW1.69. (Plot of normalised density against particle size / # C atoms; curves: DSA, w1.)
clear view of the way in which w1 and DSA concentrate computational effort on different parts of the size range. This shows that the choice of algorithm will depend, to some extent, on the problem being solved: problems that mainly concern the largest particles are likely to be best addressed using a weighted algorithm, while problems concerning the smallest particles should be tackled with a DSA. The remainder of this chapter attempts to investigate this choice in a quantitative way.
6.4.3 Performance
Simulations of the flame HWA3 [240] were performed using DSA and the w1
weighting. The first set of tests reported ignored the acetylene, OH and O2 sur-
face reactions since simulation of these processes takes a significant amount of
time and is the same whether or not weighted particles are used. The results
obtained in this way give no information about the soot produced by the flame
but provide a comparison that should focus a little more on the properties of the
weighted algorithms. Simulation size is described by the maximum number of computational particles in the simulation; settings were chosen so that most of this capacity was used. The initial sample volume (DSA) and the weighting in I_t were chosen to use almost all the capacity of the binary tree, in which the computational particles were stored, at the point when the physical particle number peaked. For DSA, particle doubling [157] was used so that the tree was never less than 50% full. For the weighted methods doubling was not needed, as the number of computational particles did not decrease; this is one of the attractions of the weighted method, see (6.18).
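The doubling step can be sketched as follows (a minimal illustration of the idea behind the technique of [157], with hypothetical data structures; the actual implementation operates on the binary tree):

```python
import copy


def maybe_double(particles, capacity, sample_volume):
    """DSA particle doubling: when the ensemble falls below half capacity,
    copy every computational particle and double the sample volume, so
    that all estimated number densities are unchanged."""
    if len(particles) < capacity // 2:
        particles = particles + copy.deepcopy(particles)
        sample_volume *= 2.0
    return particles, sample_volume
```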
Table 6.3 summarises performance on the simplified flame; run times were measured on the same desktop PC, which has a 2 GHz Athlon XP (2400+) CPU. The memory requirements of the simulations are low: only a few MB are required, even for the largest simulations. In table 6.3 the standard deviations of the population of samples for certain functionals of the solution are given for 1.34 cm above the burner, which is approximately the end of the flame. The functionals used are the zeroth, first, second and third moments of the mass distribution (denoted m0, m1, m2, m3 respectively) and the number of particles containing 5000–6000 carbon atoms.
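These moment functionals can be estimated from a weighted particle ensemble as weighted sums; a minimal sketch (hypothetical function, assuming each computational particle carries a mass and a statistical weight; for DSA all weights are equal, for w1/w2 they vary):

```python
def mass_moments(masses, weights, k_max=3):
    """Estimate the moments m0..m_kmax of the mass distribution from a
    weighted computational particle ensemble: m_k = sum_i w_i * m_i**k,
    where w_i is the number density represented by particle i."""
    return [sum(w * m ** k for m, w in zip(masses, weights))
            for k in range(k_max + 1)]
```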
From table 6.3 one sees that, for a given tree size, DSA simulations take around 75% of the computational time of the w1 simulations. For the zeroth moment DSA yields an estimate that is only half as variable as the w1 approach. However, the advantage of DSA drops as one moves to higher moments, and by the third moment DSA
Table 6.3: Variability of algorithms for reduced HWA3

tree    method   time per     population std. dev. / %
size             run / s      m0    m1    m2    m3    5–6×10³
2¹²     w1          0.6       9.7   6.4  15.0  25.4   11.1
2¹⁴     w1          2.5       4.4   3.6   7.7  12.0    6.5
2¹⁶     w1         10.4       3.0   1.8   3.9   6.0    2.9
2¹²     DSA         0.5       5.4   4.2  14.6  34.9   24.4
2¹⁴     DSA         1.9       2.7   2.3   7.6  19.5   11.7
2¹⁶     DSA         7.6       1.5   1.1   4.0   9.6    7.4
is significantly inferior to w1. The value of the higher moments is primarily determined by the larger particles in the distribution, and it is not surprising that w1 offers an advantage in this case, since it increases the computational resolution for
this part of the distribution. The variance in the estimates of the number of par-
ticles containing 5000–6000 Carbon atoms is an even more extreme example of
the way in which w1 resolves the larger sizes better (at the expense of the smaller
ones) compared with DSA.
For all the functionals both algorithms appeared to have converged in mean to
the true value for the largest tree sizes reported in table 6.3. This was verified by
performing larger simulations on other hardware, which were not timed, and so
are not reported in detail. These results suggest that, for some functionals which heavily emphasise the distribution of the larger particles, w1 offers a faster way of obtaining good estimates than DSA.
6.4.4 Further Comparison
The same tests were carried out, with the same flame, HWA3, but including all re-
actions on the surfaces of soot particles by means of the LPDA. A couple of results
for w2 are included for interest but fit the pattern discussed above and will not re-
ceive any further attention. Stochastic simulation of this flame is of considerable
interest because measured particle size distributions have been published in [240].
Computation times are not comparable with those from table 6.3 because different
hardware was used. The simulations with the full flame model take much longer
because of the high rates of surface reactions and so Opteron 252 processors run-
ning at 2.6 GHz in 64 bit mode were used for the computations. The results are
summarised in table 6.4.
Table 6.4: Variability of algorithms for physical system

tree    method   time per     population std. dev. / %
size             run / s      m0    m1    m2    m3    9–10×10⁵
2¹²     w1          4.4       9.6   1.9   3.8   6.2    18.9
2¹⁴     w1         18.0       5.2   1.1   2.3   3.6     9.8
2¹⁶     w1         73.7       2.4   0.6   1.3   2.1     4.9
2¹⁸     w1        274         1.1   0.2   0.4   0.7     2.1
2¹⁴     w2         18.9       5.3   3.3   2.3   7.5    36.1
2¹⁶     w2         75.1       2.2   1.5   3.4   5.7    16.8
2¹²     DSA         3.0       4.6   1.1   2.9   7.1    39.1
2¹⁴     DSA        12.2       3.0   0.5   1.4   3.3    19.6
2¹⁶     DSA        49.5       1.3   0.3   0.6   1.8     9.3
2¹⁸     DSA       213         0.7   0.1   0.4   1.0     5.3
In common with the results for the simplified problem, DSA is seen to be around one third faster than the w1 approach for a given tree size. All the sets of
simulations produced reasonably accurate estimates of the quantities considered
in table 6.4: For the moments, the mean from 80 repetitions with each tree size
was within 1% of the values from extremely high precision calculations. For the
number of particles containing 9–10×10⁵ C atoms, the difference between the
mean and the high precision solution just reached 4% in some cases, which is
not statistically significant. As in the simplified case DSA generates less statis-
tically noisy estimates for the first few moments of the size distribution but w1
becomes more attractive for functionals that place a greater stress on the largest
sizes of particles. However, for the reduced case, w1 was significantly less noisy
for the third moment (m3) than DSA for a given tree size, but for the full flame
the crossover is only just beginning at m3. It can be seen that, for the number of particles in the size range 9–10×10⁵, the w1 algorithm produces an estimate with roughly the same variance as the DSA with 4 times the number of particles. Examination of the 'time per run' column of table 6.4 shows that w1 can therefore provide an estimate, of any given precision, of the number of particles in the size range 9–10×10⁵ in roughly one third of the time of DSA.
6.4.5 Potential Applications
While the weighted particle algorithm presents an alternative to the DSA with
particle doubling for simulations of Smoluchowski’s coagulation equation for spa-
tially homogeneous systems, it also has other potential applications where it might
clearly distinguish itself from the DSA. One application, which has already re-
ceived considerable attention for the purposes of gas dynamics simulation, is the
simulation of particle populations in a grid of cells. For such problems the ability to control the weighting explicitly is important in order to capture effects in regions with low particle densities [91], or when a small proportion of the population has very important effects [170]. Weighted particle methods would therefore seem attractive
for simulations of spatially resolved coagulating systems, a purpose for which the
MFA has already been used [73].
Explicit weights also simplify the implementation of particle transport be-
tween cells by accounting automatically for differences in the statistical weight
assigned to computational particles in different cells and facilitating conservative
resampling of particle populations [200]. It is also possible to exploit weighting
to adjust computational resolution independently of the main kinetic simulation
process by resampling the computational particle population. An application of
resampling would be to construct a different computational resolution profile from
the one shown in figure 6.5 in order to move the statistical noise seen at particle
sizes in the range 300–10,000 carbon atoms in figure 6.4 to a different size range.
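Resampling of a weighted population can be sketched as follows (an illustration of the general idea discussed above, using simple multinomial resampling that conserves total statistical weight; this is not the conservative scheme of [200], and the function name is hypothetical):

```python
import random


def resample(particles, weights, n_out):
    """Multinomial resampling of a weighted particle population: draw
    n_out particles with probability proportional to weight, then give
    each the same weight so that the total statistical weight (and hence
    the estimated number density) is conserved."""
    total = sum(weights)
    chosen = random.choices(particles, weights=weights, k=n_out)
    new_weight = total / n_out
    return chosen, [new_weight] * n_out
```

Drawing more particles in a size range of interest, with correspondingly smaller weights, is how the computational resolution profile could be reshaped.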
6.5 Summary
A general weighted form of the Smoluchowski coagulation equation with addi-
tional linear terms has been formulated. From this equation two new weighted
particle simulation algorithms have been derived. Implementations of these algorithms have been successfully tested on a range of problems and one of the algorithms has been found to be up to three times faster than direct simulation.
Chapter 7
Conclusion
Finally, a short review of the work in this thesis is presented and a suggestion
made regarding how to make further progress in understanding soot formation
and growth.
In chapter 2 a general Direct Simulation Monte Carlo (DSMC) algorithm for
the simulation of soot formation at all pressures was developed and shown to
produce accurate results by comparisons with results from a standard ODE solver.
Detailed investigations have been carried out into the computational demands of
this DSMC algorithm, which showed that most of the computation time was spent
using complicated techniques, designed for the non-linear coagulation process, to
simulate linear surface growth. To solve the problem thus revealed, a modified
stochastic algorithm (LPDA) was formulated, in which surface reaction processes
were deferred. The generator of the corresponding Markov chain was presented
for the first time in chapter 3. The LPDA was found to be superior to operator splitting methods, and tests show that it accelerates computations for flames of physical interest by factors of up to one thousand, while causing little loss of accuracy.
Attempts to achieve further accelerations by deterministic approximations to
the surface reaction processes in chapter 4 yielded little additional progress. As
shown in that chapter, the computational bottleneck in the LPDA is the un-accelerated, un-deferred processes. Therefore, further accelerations will only be possible with
a new approach to the processes which the LPDA cannot accelerate. An example of a possible approach using statistically weighted computational particles is
given in chapter 6. The weighted algorithm was seen to have some potentially
useful properties and found to offer similar performance to the un-weighted direct
simulation algorithm with the particle doubling variance reduction technique as
used in this thesis.
The LPDA is well suited to model development, because the addition of new
features to particle models requires no change to the basic programme structure
and has little effect on computational cost. These properties have been exploited
by colleagues [179] and a detailed demonstration was given in chapter 5. In that
chapter it was shown that simple models for soot particle shape are quantitatively
similar and offer qualitative improvements over older models which treat all soot
particles as spherical. The chapter also highlights the need for a better understand-
ing of the details of the chemical reactions on the surfaces of soot particles and
work on this topic was briefly reviewed in §1.2.1.
Progress in the future would be greatly aided by detailed and extensive ex-
perimental studies of a small number of systems. Quality rather than quantity of
experimental data is needed for thorough calibration and validation of the detailed
models that can now be tested with Monte Carlo simulations.
Acknowledgements
The author is grateful for the support of his supervisor, Dr M Kraft, and other
colleagues, who provided the foundations on which this thesis is built. He also
wishes to thank his parents for their dedication in correcting the drafts of the
thesis, a task in which Messrs M S Celnik and M H Sankey also kindly assisted.
Chapter 5 is based on an article which completed the peer review process as this thesis was being finalised. A considerable amount of valuable advice
was received from the anonymous reviewers. Discussions with Mr N M Morgan
were very helpful in formulating the connection between the weighted geometric
average collision diameters and assumed fractal dimension models in §5.3.1.
Discussions with Dr W Wagner were very helpful in developing the work
reported in chapter 6.
Finally, the author thanks God for making such an interesting world for him
to investigate.
List of Figures
2.1 Size distributions calculated with ODE solver and stochastically. . 41
2.2 Comparison of calculations and observations for flames. . . . . . 43
2.3 4 level binary tree . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.4 Simulation run time scaling with binary tree . . . . . . . . . . . . 52
2.5 % of execution time spent on different tasks—DSA . . . . . . . . 53
3.1 Number density and second mass moment for JW1.69 . . . . . . . 61
3.2 Particle size distribution at end of flame JW10.68 . . . . . . . . . 64
3.3 Percentage of execution time spent on different tasks with ‘pv’
method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.4 LPDA applied to JW1.69 . . . . . . . . . . . . . . . . . . . . . . 74
3.5 Particle distribution for JW10.68 . . . . . . . . . . . . . . . . . . 76
4.1 % of execution time spent on different tasks—LPDA with ‘pv’ . . 87
5.1 JW10.673 soot volume fraction . . . . . . . . . . . . . . . . . . . 103
5.2 JW10.673 particle surface area concentration . . . . . . . . . . . 103
5.3 JW10.60 soot volume fraction . . . . . . . . . . . . . . . . . . . 106
5.4 JW1.69 number density . . . . . . . . . . . . . . . . . . . . . . . 107
5.5 JW10.673 distribution simulated with model 3, 0.4 cm above burner . . 109
5.6 JW10.673 distribution simulated with model 3, 0.6 cm above burner . . 110
5.7 JW10.673 distribution simulated with model 3, 4.2 cm above burner . . 110
5.8 Evolution of mean shape descriptor for JW10.673, simulated with
model 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.9 Detail of mean shape descriptor for JW10.673, simulated with
model 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
5.10 JW10.673, size history for particles from figure 5.8. . . . . . . . . 113
5.11 Mean shape descriptors for JW10.673, simulated with model 2 . . 115
6.1 PSD for coagulation and inception test problem . . . . . . . . . . 128
6.2 Statistical noise for coagulation and inception test problem . . . . 130
6.3 Second moment of JW10.68 mass distribution . . . . . . . . . . . 131
6.4 Physical particle size distribution for JW1.69 . . . . . . . . . . . 133
6.5 Relative computational particle distribution for JW1.69 . . . . . . 134
List of Tables
2.1 Scaling of run times with tree depth . . . . . . . . . . . . . . . . 50
2.2 Illustrative results . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.3 Relative frequency of stochastic events in JW1.69 . . . . . . . . . 52
3.1 Detail of operator splitting runs for JW1.69 . . . . . . . . . . . . 60
3.2 Relative frequency of stochastic events in JW10.68 . . . . . . . . 62
3.3 Detail of operator splitting runs for JW10.68 . . . . . . . . . . . . 63
3.4 Detail of deferred surface growth runs for JW10.68 . . . . . . . . 75
3.5 Comparison of simulation methods . . . . . . . . . . . . . . . . . 77
3.6 Time to achieve tolerances for JW1.69 . . . . . . . . . . . . . . . 77
4.1 Numerical errors in JW10.68 distribution moments . . . . . . . . 83
4.2 Run times in seconds for different algorithms on the same hardware 83
4.3 Numerical errors in JW10.68 distribution moments . . . . . . . . 84
4.4 Numerical errors in JW10.68 distribution moments . . . . . . . . 88
4.5 Run times in seconds for different sub-algorithms on the same
hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.1 Length scales for typical JW10.673 particle sizes . . . . . . . . . 102
5.2 Summary of simulated models . . . . . . . . . . . . . . . . . . . 104
5.3 Primary particle diameters calculated by linear regression for JW10.673 . 111
6.1 Process summary . . . . . . . . . . . . . . . . . . . . . . . . . . 127
6.2 Coagulation weight rules . . . . . . . . . . . . . . . . . . . . . . 127
6.3 Variability of algorithms for reduced HWA3 . . . . . . . . . . . . 136
6.4 Variability of algorithms for physical system . . . . . . . . . . . . 137
Bibliography
[1] Computational Modelling Group. URL http://como.cheng.cam.ac.uk/index.php?Page=Resources&Section=SootDatabase. Cambridge Soot Database.
[2] G. L. Agafonov, M. Nullmeier, P. A. Vlasov, J. Warnatz, and I. S. Zaslonko.
Kinetic modeling of solid carbon particle formation and thermal decompo-
sition during carbon suboxide pyrolysis behind shock waves. Combust. Sci.
Tech., 174:185–213, 2002.
[3] K. Ahmad. Pollution cloud over south Asia is increasing ill health. Lancet,
360:549, 2002.
[4] D. J. Aldous. Deterministic and stochastic models for coalescence (aggre-
gation and coagulation) : a review of the mean-field theory for probabilists.
Bernoulli, 5, 1999.
[5] A. Alexiadis, M. Vanni, and P. Gardin. Extension of the method of moments
for population balances involving fractional moments and application to a
typical agglomeration problem. J. Colloid Interf. Sci., 276:106–112, 2004.
doi:10.1016/j.jcis.2004.03.052.
[6] V. Alopaeus, M. Laakkonen, and J. Aittamaa. Solution of popula-
tion balances with breakage and agglomeration by high-order moment-
conserving method of classes. Chem. Eng. Sci., 61:6732–6752, 2006.
doi:10.1016/j.ces.2006.07.010.
[7] B. Apicella, A. Carpentieri, M. Alfe, R. Barbella, A. Tregrossi, P. Pucci, and A. Ciajolo. Mass spectrometric analysis of large PAH in a fuel-rich ethylene flame. Proc. Combust. Inst., 31(1):547–553, 2007.
[8] J. Appel, H. Bockhorn, and M. Frenklach. Kinetic modeling of soot forma-
tion with detailed chemistry and physics: Laminar premixed flames of C2
hydrocarbons. Combust. Flame, 121:122–136, 2000.
[9] J. Appel, H. Bockhorn, and M. Wulkow. A detailed numerical study of
the evolution of soot particle size distributions in laminar premixed flames.
Chemosphere, 42, 2001. doi:10.1016/S0045-6535(00)00237-X.
[10] C. Arden Pope III and D. W. Dockery. Health effects of fine particulate
air pollution: Lines that connect. J. Air Waste Mgmt. Assoc., 56:709–742,
2006.
[11] C. Artelt, H. J. Schmid, and W. Peukert. On the relevance of accounting
for the evolution of the fractal dimension in aerosol process simulations. J.
Aerosol Sci., 34:511–534, 2003.
[12] G. A. Athanassoulis and P. N. Gavriliadis. The truncated Hausdorff mo-
ment problem solved by using kernel density functions. Probabilist. Eng.
Mech., 17(3):273–291, 2002. doi:10.1016/S0266-8920(02)00012-7.
[13] I. Ayrancı, R. Vaillon, N. Selçuk, F. André, and D. Escudié. Determination of soot temperature, volume fraction and refractive index from flame emission spectrometry. J. Quant. Spectrosc. Ra., 104(2):266–276, 2007.
[14] M. Balthasar and M. Frenklach. Detailed kinetic modeling of soot aggre-
gate formation in laminar premixed flames. Combust. Flame, 140:130–145,
2005.
[15] M. Balthasar and M. Frenklach. Monte-Carlo simulation of soot particle
coagulation and aggregation: the effect of a realistic size distribution. Proc.
Combust. Inst., (30):1467–1475, 2005.
[16] M. Balthasar and M. Kraft. A stochastic approach to solve the particle size
distribution function of soot particles in laminar premixed flames. Combust.
Flame, 133:289–298, 2003.
[17] M. Balthasar, M. Kraft, and M. Frenklach. Kinetic Monte-Carlo simula-
tions of soot particle aggregation. American Chemical Society Preprint
Papers, 50(1):135–136, 2005.
[18] A. C. Barone, A. D’Alessio, and A. D’Anna. Morphological characteri-
zation of the early process of soot formation by atomic force microscopy.
Combust. Flame, 132:181–187, 2003.
[19] F. Barra, B. Davidovitch, A. Levermann, and I. Procaccia.
Laplacian growth and diffusion limited aggregation: Differ-
ent universality classes. Phys. Rev. Lett., 87(13):134501, 2001.
doi:10.1103/PhysRevLett.87.134501.
[20] J. C. Barrett and N. A. Webb. A comparison of some approximate methods
for solving the aerosol general dynamic equation. J. Aerosol Sci., 29(1/2):
31–39, 1998. doi:10.1016/S0021-8502(97)00455-2.
[21] G. A. Bird. Molecular gas dynamics. Clarendon press, 1976.
[22] G. A. Bird. Approach to translational equilibrium in a rigid sphere gas.
Phys. Fluids, 6(10):1518–1519, 1963.
[23] Y. Bouvier, C. Mihesan, M. Ziskind, E. Therssen, C. Focsa, J. F. Pauwels,
and P. Desgroux. Molecular species adsorbed on soot particles issued from
low sooting methane and acetylene laminar flames: A laser-based experi-
ment. Proc. Combust. Inst., 31(1):841–849, 2007.
[24] A. Braun, N. Shah, F. E. Huggins, K. E. Kelly, A. Sarofim, C. Jacobsen,
S. Wirick, H. Francis, J. Ilavsky, G. E. Thomas, and G. P. Huffman. X-ray
scattering and spectroscopy studies on diesel soot from oxygenated fuel
under various engine load conditions. Carbon, 43:2588–2599, 2005.
[25] P. Brimblecombe and C. M. Grossi. Aesthetic thresholds and blackening of
stone buildings. Sci. Total Environ., 349:175–189, 2005.
[26] M. Celnik, R. Patterson, M. Kraft, and W. Wagner. Cou-
pling a stochastic soot population balance to gas-phase chemistry
using operator splitting. Combust. Flame, 148(3):158–176, 2007.
doi:10.1016/j.combustflame.2006.10.007.
[27] B. B. Chakraborty and R. Long. Gas chromatographic analysis of polycyclic aromatic hydrocarbons in soot samples. Environ. Sci. Tech., 1:828, 1967.
[28] D. R. Chen, D. Y. H. Pui, D. Hummes, H. Fissan, F. R. Quant, and G. J.
Sem. Design and evaluation of a nanometer aerosol differential mobility
analyzer (nano-DMA). J. Aerosol Sci., 29(5-6):497–509, 1998.
[29] Computational Modelling Group. sweep. URL http://como.cheng.cam.ac.uk/index.php?Page=Resources&Section=Software.
[30] A. D’Anna and A. Violi. Detailed modeling of the molecular growth pro-
cess in aromatic and aliphatic premixed flames. Energ. Fuel., 19:79–86,
2005. doi:10.1021/ef0499675.
[31] A. D’Anna, M. Commodo, S. Violi, C. Allouis, and J. Kent. Nano organic
carbon and soot in turbulent non-premixed ethylene flames. Proc. Combust.
Inst., 31(1):621–629, 2007.
[32] S. de Iuliis, F. Cignoli, and G. Zizak. Two-color laser-induced incandes-
cence (2C-LII) technique for absolute soot volume fraction measurements
in flames. Appl. Optics, 44(34):7414–7423, 2005.
[33] S. de Iuliis, F. Migliorini, F. Cignoli, and G. Zizak. 2d soot volume frac-
tion imaging in an ethylene diffusion flame by two-color laser-induced in-
candescence (2C-LII) technique and comparison with results from other
optical diagnostics. Proc. Combust. Inst., 31(1):869–876, 2007.
[34] M. Deaconu, N. Fournier, and E. Tanre. A pure jump Markov process
associated with Smoluchowski’s coagulation equation. Ann. Prob., 30(4):
1763–1796, 2002.
[35] M. Deaconu, N. Fournier, and E. Tanre. Rate of convergence of a stochastic
particle system for the Smoluchowski coagulation equation. Meth. Comput.
App. Prob., 5(2):131–158, 2003.
[36] E. Debry, B. Sportisse, and B. Jourdain. A stochastic approach for the
numerical simulation of the general dynamics equation for aerosols. J.
Comput. Phys., 184:649–669, 2003.
[37] S. di Stasio. Feasibility of an optical experimental method for the sizing of
primary spherules in sub-micron agglomerates by polarized light scattering.
Appl. Phys. B, 70:635–643, 2000.
[38] S. di Stasio. Electron microscopy evidence of aggregation under three dif-
ferent size scales for soot nanoparticles in flame. Carbon, 39:109–118,
2001.
[39] S. di Stasio, A. G. Konstandopoulos, and M. Kostoglou. Cluster–cluster
aggregation kinetics and primary particle growth of soot nanoparticles in
flame by light scattering and numerical simulations. J. Colloid Interf. Sci.,
247:33–46, 2002. doi:10.1006/jcis.2001.8095.
[40] S. di Stasio, J. B. A. Mitchell, J. L. LeGarrec, L. Biennier, and M. Wulff.
Synchrotron SAXS in situ identification of three different size modes for
soot nanoparticles in a diffusion flame. Carbon, 44:1267–1279, 2006.
doi:10.1016/j.carbon.2005.10.042.
[41] R. B. Diemer and J. H. Olson. A moment methodology for coagulation and
breakage problems: Part 2–moment models and distribution reconstruction.
Chem. Eng. Sci., 57:2211–2228, 2002.
[42] C. A. Dorao and H. A. Jakobsen. A least squares method for the solution
of population balance problems. Comput. Chem. Eng., 30:535–547, 2006.
doi:10.1016/j.compchemeng.2005.10.012.
[43] C. A. Dorao and H. A. Jakobsen. The quadrature method of moments and
its relationship with the method of weighted residuals. Chem. Eng. Sci., 61:
7795–7804, 2006. doi:10.1016/j.ces.2006.09.014.
[44] C. A. Dorao and H. A. Jakobsen. Time–space-property least squares spectral
method for population balance problems. Chem. Eng. Sci., 62:1323–1333,
2007. doi:10.1016/j.ces.2006.11.016.
[45] C. A. Dorao and H. A. Jakobsen. hp-adaptive least squares spectral ele-
ment method for population balance equations. Appl. Numer. Math., 2007.
doi:10.1016/j.apnum.2006.12.005. in press.
[46] K. Drumm, D. I. Attia, S. Kannt, P. Micke, R. Buhl, and K. Kienast.
Soot-exposed mononuclear cells increase inflammatory cytokine mRNA
expression and protein secretion in cocultured bronchial epithelial cells.
Respiration, 67:291–297, 2000.
[47] Y. Efendiev and M. R. Zachariah. Hierarchical hybrid Monte-Carlo method
for simulation of two-component aerosol nucleation, coagulation and phase
segregation. J. Aerosol Sci., 34:169–188, 2003.
[48] A. Eibeck and W. Wagner. Stochastic particle approximation for Smolu-
chowski’s coagulation equation. Ann. Appl. Probab., 11(4):1137–1165,
2001.
[49] A. Eibeck and W. Wagner. An efficient stochastic algorithm for studying
coagulation dynamics and gelation phenomena. SIAM J. Sci. Comput., 22:
802–821, 2000.
[50] A. Eibeck and W. Wagner. Approximative solution of the coagulation-
fragmentation equation by stochastic particle systems. Stoch. Anal. Appl.,
18(6):921–948, 2000.
[51] A. Eibeck and W. Wagner. Stochastic interacting particle systems and non-
linear kinetic equations. Ann. Appl. Probab., 13(3):845–889, 2003.
[52] A. M. El-Leathy, C. H. Kim, G. M. Faeth, and F. Xu. Soot surface reactions
in high-temperature laminar diffusion flames. AIAA Journal, 42:988–996,
2004.
[53] W. D. Erickson, G. C. Williams, and H. C. Hottel. Light scattering mea-
surements on soot in a benzene-air flame. Combust. Flame, 8(2):89–170,
1964.
[54] P. S. Fennel, J. S. Dennis, and A. N. Hayhurst. The size distributions of
nanoparticles of the oxides of Mg, Ba and Al in flames: Their measurement
and dependence on the concentrations of free radicals in the flame. Proc.
Combust. Inst., 31(2):1939–1945, 2007. doi:10.1016/j.proci.2006.07.137.
[55] K. A. Fichthorn and W. H. Weinberg. Theoretical foundations of dynamical
Monte Carlo simulations. J. Chem. Phys., 95(2):1090–1096, 1991.
[56] F. Findik, R. Yilmaz, and T. Koksal. Investigation of mechanical and phys-
ical properties of several industrial rubbers. Materials and Design, 25:
269–276, 2004.
[57] M. Frenklach. Method of moments with interpolative closure. Chem. Eng.
Sci., 57:2229–2239, 2002.
[58] M. Frenklach. Reaction mechanism of soot formation in flames. Phys.
Chem. Chem. Phys., 4:2028–2037, 2002. doi:10.1039/b110045a.
[59] M. Frenklach and S. J. Harris. Aerosol dynamics modeling using the
method of moments. J. Colloid Interf. Sci., 118(1):252–261, 1987.
[60] M. Frenklach and H. Wang. Soot Formation in Combustion: Mechanisms
and Models, pages 165–192. Springer Verlag, 1994.
[61] M. Frenklach, C. A. Schuetz, and J. Ping. Migration mechanism
of aromatic-edge growth. Proc. Combust. Inst., 30:1389–1396, 2005.
doi:10.1016/j.proci.2004.07.048.
[62] N. A. Fuchs. The Mechanics of Aerosols. Pergamon Press, Oxford, 1964.
Original edition in Russian published in 1955.
[63] C. Gardner, G. N. Greaves, G. K. Hargrave, S. Jarvis, P. Wildman, F. Me-
neau, W. Bras, and G. Thomas. In situ measurements of soot formation in
simple flames using small angle X-ray scattering. Nucl. Instrum. Meth. B,
238:334–339, 2005. doi:10.1016/j.nimb.2005.06.072.
[64] F. Gelbard and J. H. Seinfeld. Numerical solution of the dynamic equa-
tion for particulate systems. J. Comput. Phys., 28(3):357–375, 1978.
doi:10.1016/0021-9991(78)90058-X.
[65] F. Gelbard and J. H. Seinfeld. Simulation of multicomponent aerosol dy-
namics. J. Colloid Interf. Sci., 78(2):485–501, 1980.
[66] D. T. Gillespie. Approximate accelerated stochastic simulation of chemi-
cally reacting systems. J. Chem. Phys., 114(4):1716–1733, 2001.
[67] D. T. Gillespie. An exact method for numerically simulating the stochastic
coalescence process in a cloud. J. Atmos. Sci., 32:1977–1989, 1975.
[68] M. Goodson and M. Kraft. An efficient stochastic algorithm for simulating
nano-particle dynamics. J. Comput. Phys., 183:210–232, 2002.
[69] A. R. Gourlay. Splitting methods for time dependent partial differential
equations. In D. Jacobs, editor, The State of the Art in Numerical Analysis,
pages 757–796, London, 1977. Academic Press.
[70] S. C. Graham, J. B. Homer, and J. L. J. Rosenfeld. The formation and
coagulation of soot aerosols generated by the pyrolysis of aromatic hydro-
carbons. Proc. Royal Soc., 344:259–285, 1975.
[71] R. Grosch, H. Briesen, W. Marquardt, and M. Wulkow. Generalization
and numerical investigation of QMOM. AIChE J., 53(1):207–227, 2007.
doi:10.1002/aic.11041.
[72] H.-H. Grotheer, H. Pokorny, K.-L. Barth, M. Thierley, and M. Aigner. Mass
spectrometry up to 1 million mass units for the simultaneous detection of
primary soot and of soot precursors (nanoparticles) in flames. Chemo-
sphere, 57:1335–1342, 2004.
[73] F. Guias. A stochastic numerical method for diffusion equations and appli-
cations to spatially inhomogeneous coagulation processes. In H. Niederre-
iter and D. Talay, editors, Monte Carlo and Quasi-Monte Carlo Methods
2004, pages 147–161. Springer, 2006. doi:10.1007/3-540-31186-6 10.
[74] F. Guias. A Monte Carlo approach to the Smoluchowski equations. MCMA,
3(4):313–326, 1997.
[75] R. Gunawan, I. Fusman, and R. D. Braatz. High resolution algorithms for
multidimensional population balance equations. AIChE J., 50:2738–2749,
2004.
[76] H. Zhao, C. Zheng, and M. Xu. Multi-Monte Carlo ap-
proach for general dynamic equation considering simultaneous parti-
cle coagulation and breakage. Powder Tech., 154:164–178, 2005.
doi:10.1016/j.powtec.2005.04.042.
[77] R. A. Hamilton, J. S. Curtis, and D. Ramkrishna. Beyond log-normal dis-
tributions: Hermite spectra for solving population balances. AIChE J., 49:
2328–2343, 2003.
[78] W. N. Hartley and H. Ramage. The mineral constituents of dust and soot
from various sources. Proc. Royal Soc., 68:97–109, 1901.
[79] J. P. Hessler, S. Seifert, R. E. Winans, and T. H. Fletcher. Small-angle x-
ray studies of soot inception and growth. Faraday Discuss., 119:395–407,
2001. doi:10.1039/b102822g.
[80] M. J. Hounslow, R. L. Ryall, and V. R. Marshall. A discretized population
balance for nucleation, growth, and aggregation. AIChE Journal, 34(11):
1821–1832, 1988.
[81] K. Hoyermann, F. Mauß, and T. Zeuch. A detailed chemical reaction
mechanism for the oxidation of hydrocarbons and its application to the
analysis of benzene formation in fuel-rich premixed laminar acetylene
and propene flames. Phys. Chem. Chem. Phys., 6:3824–3835, 2004.
doi:10.1039/b404632c.
[82] Q. Hu, S. Rohani, and A. Jutan. New numerical method for solving the
dynamic population balance equations. AIChE J., 51(11):3000–3006, 2005.
[83] H. M. Hulburt and S. Katz. Some problems in particle technology: A
statistical mechanical formulation. Chem. Eng. Sci., 19:555–574, 1964.
[84] C. D. Immanuel and F. J. Doyle III. Computationally efficient solution of
population balance models incorporating nucleation, growth and coagula-
tion: application to emulsion polymerization. Chem. Eng. Sci., 58:3681–
3698, 2003. doi:10.1016/S0009-2509(03)00216-1.
[85] C. D. Immanuel and F. J. Doyle III. Solution technique for a multi-
dimensional population balance model describing granulation processes.
Powder Tech., 156:213–225, 2005. doi:10.1016/j.powtec.2005.04.013.
[86] S. Izvekov and A. Violi. A coarse-grained molecular dynamics study of
carbon nanoparticle aggregation. J. Chem. Theory Comput., 2:504–512,
2005. doi:10.1021/ct060030d.
[87] S. Izvekov, A. Violi, and G. A. Voth. Systematic coarse-graining of
nanoparticle interactions in molecular dynamics simulation. J. Phys. Chem.
B, 109:17019–17024, 2005. doi:10.1021/jp0530496.
[88] M. Z. Jacobson and J. H. Seinfeld. Evolution of nanoparticle size and
mixing state near the point of emission. Atmospheric Environment, 38:
1839–1850, 2004.
[89] J. I. Jeong and M. Choi. A sectional method for the analysis of growth
of polydisperse non-spherical particles undergoing coagulation and co-
alescence. J. Aerosol Sci., 32(5):565–582, 2001. doi:10.1016/S0021-
8502(00)00103-8.
[90] O. Kallenberg. Foundations of Modern Probability. Springer, New York,
2nd edition, 2001.
[91] K. C. Kannenberg and I. D. Boyd. Strategies for efficient particle resolution
in the direct simulation Monte Carlo method. J. Comput. Phys., 157:727–
745, 2000. doi:10.1006/jcph.1999.6397.
[92] A. Kazakov and M. Frenklach. Dynamic modeling of soot particle coagu-
lation and aggregation: Implementation with the method of moments and
application to high-pressure laminar premixed flames. Combust. Flame,
114:484–501, 1998.
[93] A. Kazakov, H. Wang, and M. Frenklach. Detailed modeling of soot forma-
tion in laminar premixed ethylene flames at a pressure of 10 bar. Combust.
Flame, 100:111–120, 1995.
[94] T. R. Kiehl, R. M. Mattheyses, and M. K. Simmons. Hybrid simulation of
cellular behaviour. Bioinformatics, 20(3):316–322, 2004.
[95] C. H. Kim, A. M. El-Leathy, F. Xu, and G. M. Faeth. Soot surface growth
and oxidation in laminar diffusion flames at pressures of 0.1-1.0 atm. Com-
bust. Flame, 136:191–207, 2004.
[96] A. A. Koelmans, M. T. O. Jonker, G. Cornelissen, T. D. Bucheli, P. C.
M. V. Noort, and O. Gustafsson. Black carbon: The reverse of its dark side.
Chemosphere, 63:365–377, 2006.
[97] A. Kolodko and K. Sabelfeld. Stochastic particle methods for Smolu-
chowski coagulation equation: variance reduction and error estimations.
MCMA, 9(4):315–339, 2003.
[98] B. Koo, T. M. Gaydos, and S. N. Pandis. Evaluation of the equilibrium,
dynamic, and hybrid aerosol modeling approaches. Aerosol Sci. Tech., 37
(1):53–64, 2003. doi:10.1080/02786820300893.
[99] M. Kostoglou and A. G. Konstandopoulos. Evolution of aggregate size and
fractal dimension during Brownian coagulation. J. Aerosol Sci., 32:1399–
1420, 2001.
[100] M. Kostoglou, A. G. Konstandopoulos, and S. K. Friedlander. Bi-
variate population dynamics simulation of fractal aerosol aggregate co-
agulation and restructuring. J. Aerosol Sci., 37:1102–1115, 2006.
doi:10.1016/j.jaerosci.2005.11.009.
[101] U. O. Koylu, G. M. Faeth, T. L. Farias, and M. G. Carvalho. Fractal and
projected structure properties of soot aggregates. Combust. Flame, 100:
621–633, 1995.
[102] U. O. Koylu, C. S. McEnally, D. E. Rosner, and L. D. Pfefferle. Simulta-
neous measurements of soot volume fraction and particle size / microstruc-
ture using a thermophoretic sampling technique. Combust. Flame, 109:
488–500, 1996.
[103] A. Kubota, C. J. Mundy, W. J. Pitz, C. Melius, C. K. Westbrook, and M.-J.
Caturla. Massively parallel combined Monte Carlo and molecular dynam-
ics methods to study the long-time-scale evolution of particulate matter and
molecular structures under reactive flow conditions. In Proceedings of the
Third Joint Meeting of the U.S. Sections of The Combustion Institute, 2004.
[104] S. Kumar and D. Ramkrishna. On the solution of population balance equa-
tions by discretization—I. A fixed pivot technique. Chem. Eng. Sci., 51(8):
1311–1332, 1996.
[105] S. Kumar and D. Ramkrishna. On the solution of population balance equa-
tions by discretization—III. Nucleation, growth and aggregation of parti-
cles. Chem. Eng. Sci., 52(24):4659–4679, 1997.
[106] M. Lapuerta, R. Ballesteros, and F. J. Martos. A method to determine the
fractal dimension of diesel soot agglomerates. J. Colloid Interf. Sci., 303:
149–158, 2006. doi:10.1016/j.jcis.2006.07.066.
[107] J. C. Lee, H. N. Najm, S. Lefantzi, J. Ray, M. Frenklach, M. Val-
orani, and D. A. Goussis. A CSP and tabulation-based adap-
tive chemistry model. Combust. Theor. Model., 11(1):73–102, 2007.
doi:10.1080/13647830600763595.
[108] Z. Li and H. Wang. Drag force, diffusion coefficient, and electric mobility
of small particles. I. Theory applicable to the free-molecule regime. Phys.
Rev. E, 68:061206, 2003.
[109] Z. Li and H. Wang. Drag force, diffusion coefficient, and electric mobility
of small particles. II. Application. Phys. Rev. E, 68:061207, 2003.
[110] K. Liffman. A direct simulation Monte-Carlo method for cluster coagula-
tion. J. Comput. Phys., 100(1):116–127, 1992.
[111] G. Ma, J. Z. Wen, M. F. Lightstone, and M. J. Thomson. Optimization of
soot modeling in turbulent nonpremixed ethylene/air jet flames. Combust.
Sci. Tech., 177:1567–1602, 2005. doi:10.1080/00102200590956786.
[112] U. Maas and J. Warnatz. Ignition processes in hydrogen-oxygen mixtures.
Combust. Flame, 74:53–69, 1988.
[113] A. W. Mahoney and D. Ramkrishna. Efficient solution of population bal-
ance equations with discontinuities by finite elements. Chem. Eng. Sci., 57:
1107–1119, 2002.
[114] A. Maisels, F. E. Kruis, and H. Fissan. Direct simulation Monte Carlo
for simultaneous nucleation, coagulation and surface growth in dispersed
systems. Chem. Eng. Sci., 59:2231–2239, 2004.
[115] U. K. Mandal and S. Aggarwal. Studies of rubber-filler interaction in
carboxylated nitrile rubber through microhardness measurement. Polymer
Testing, 20:305–311, 2001.
[116] N. V. Mantzaris, P. Daoutidis, and F. Srienc. Numerical solution of multi-
variable cell population balance models. II. Spectral methods. Comput.
Chem. Eng., 25:1441–1462, 2001. doi:10.1016/S0098-1354(01)00710-4.
[117] N. V. Mantzaris, P. Daoutidis, and F. Srienc. Numerical solution of
multi-variable cell population balance models. III. Finite element meth-
ods. Comput. Chem. Eng., 25:1463–1481, 2001. doi:10.1016/S0098-
1354(01)00711-6.
[118] S. L. Manzello and M. Y. Choi. Morphology of soot collected in micro-
gravity droplet flames. Int. J. Heat Mass Tran., 45(5):1109–1116, 2002.
doi:10.1016/S0017-9310(01)00164-8.
[119] D. L. Marchisio and R. O. Fox. Solution of population balance equations
using the direct quadrature method of moments. J. Aerosol Sci., 36:43–73,
2005. doi:10.1016/j.jaerosci.2004.07.009.
[120] M. M. Maricq. Coagulation dynamics of fractal-like soot aggregates. J.
Aerosol Sci., 38:141–156, 2007.
[121] M. M. Maricq and N. Xu. The effective density and fractal dimension of
soot particles from premixed flames and motor vehicle exhaust. J. Aerosol
Sci., 35:1251–1274, 2004.
[122] M. M. Maricq, S. J. Harris, and J. J. Szente. Soot size distributions in rich
premixed ethylene flames. Combust. Flame, 132:328–342, 2003.
[123] T. Matsoukas and Y. Lin. Fokker–Planck equation for particle
growth by monomer attachment. Phys. Rev. E, 74:031122, 2006.
doi:10.1103/PhysRevE.74.031122.
[124] J. L. Mauderly. Diesel emissions: Is more health research still needed?
Toxicol. Sci., 62(1):6–9, 2001.
[125] C. S. McEnally, L. D. Pfefferle, B. Atakan, and K. Kohse-Hoinghaus. Stud-
ies of aromatic hydrocarbon formation mechanisms in flames: Progress to-
wards closing the fuel gap. Prog. Energ. Combust., 32:247–294, 2006.
[126] R. McGraw and D. L. Wright. Chemically resolved aerosol dynamics for
internal mixtures by the quadrature method of moments. J. Aerosol Sci.,
34:189–209, 2003.
[127] P. Meakin. Effects of cluster trajectories on cluster—cluster aggrega-
tion: A comparison of linear and Brownian trajectories in two- and three-
dimensional simulations. Phys. Rev. A, 29(2):997–999, 1984.
[128] P. Meakin and R. Jullien. The effects of restructuring on the geome-
try of clusters formed by diffusion-limited, ballistic, and reaction-limited
cluster—cluster aggregation. J. Chem. Phys., 89(1):246–250, 1988.
[129] L. A. Melton. Soot diagnostics based on laser heating. Appl. Optics, 23
(13):2201–2208, 1984.
[130] J. A. Miller, M. J. Pilling, and J. Troe. Unravelling combustion mecha-
nisms through a quantitative understanding of elementary reactions. Proc.
Combust. Inst., 30(1):43–88, 2005. doi:10.1016/j.proci.2004.08.281.
[131] S. E. Miller and S. C. Garrick. Nanoparticle coagulation in a planar jet.
Aerosol Sci. Tech., 38:79–89, 2004. doi:10.1080/02786820490247669.
[132] J. B. A. Mitchell, J. L. L. Garrec, A. I. Florescu-Mitchell, and
S. di Stasio. Small-angle neutron scattering study of soot particles in
an ethylene–air diffusion flame. Combust. Flame, 145:80–87, 2006.
doi:10.1016/j.combustflame.2005.12.003.
[133] P. Mitchell and M. Frenklach. Particle aggregation with simultaneous sur-
face growth. Phys. Rev. E, 67:061407, 2003.
[134] P. Mitchell and M. Frenklach. Monte Carlo simulation of soot aggregation
with simultaneous surface growth - why primary particles appear spheri-
cal. In 27th Symposium (International) on Combustion, pages 1507–1514.
Combustion Institute, 1998.
[135] P. A. Mitchell. Monte Carlo Simulation of Soot Aggregation with Simul-
taneous Surface Growth. PhD thesis, University of California, Berkeley,
2001.
[136] E. G. Moody and L. R. Collins. Effect of mixing on the nucleation
and growth of titania particles. Aerosol Sci. Tech., 37:403–424, 2003.
doi:10.1080/02786820390125179.
[137] G. E. Moore. Cramming more components onto integrated circuits. Elec-
tronics, 38, 1965. URL ftp://download.intel.com/museum/
Moores_Law/Articles-Press_Releases/Gordon_Moore_
1965_Article.pdf.
[138] N. Morgan, C. Wells, M. Kraft, and W. Wagner. Modelling nanoparticle
dynamics: coagulation, sintering, particle inception and surface growth.
Combust. Theor. Model., 9(3):449–461, 2005.
[139] N. Morgan, C. Wells, M. Goodson, M. Kraft, and W. Wagner. A new
numerical approach for the simulation of the growth of inorganic nanopar-
ticles. J. Comput. Phys., 211(2):638–658, 2006.
[140] N. Morgan, M. Kraft, M. Balthasar, D. Wong, M. Frenklach, and
P. Mitchell. Numerical simulations of soot aggregation in premixed lami-
nar flames. Proc. Combust. Inst., 31(1):693–700, 2007.
[141] N. M. Morgan, R. I. A. Patterson, A. Raj, and M. Kraft. Modes of neck
growth in nanoparticle aggregates. Technical Report 45, c4e Preprint-
Series, Cambridge, 2007.
[142] H. Muhlenweg, A. Gutsch, A. Schild, and S. E. Pratsinis. Process simu-
lation of gas-to-particle-synthesis via population balances: Investigation of
three models. Chem. Eng. Sci., 57:2305–2322, 2002.
[143] H. Muhr, R. David, J. Villermaux, and P. H. Jezequel. Crystallization and
precipitation engineering-VI. Solving population balance in the case of the
precipitation of silver bromide crystals with high primary nucleation rates
by using the first order upwind differentiation. Chem. Eng. Sci., 51(2):
309–319, 1996. doi:10.1016/0009-2509(95)00257-X.
[144] I. Naydenova, M. Nullmeier, J. Warnatz, and P. A. Vlasov. Detailed kinetic
modelling of soot formation during shock-tube pyrolysis of C6H6: Direct
comparison with the results of time-resolved laser-induced incandescence
(LII) and cw-laser extinction measurements. Combust. Sci. Tech., 176(10):
1667–1703, 2004. doi:10.1080/00102200490487544.
[145] M. Nicmanis and M. J. Hounslow. A finite element analysis of the steady
state population balance equation for particulate systems: Aggregation and
growth. Comput. Chem. Eng., 29(Suppl.):S261–S266, 1996.
[146] M. Nicmanis and M. J. Hounslow. Finite-element methods for steady-state
population balance equations. AIChE J., 44:2258–2272, 1998.
[147] J. R. Norris. Smoluchowski's coagulation equation: uniqueness, nonunique-
ness and a hydrodynamic limit for the stochastic coalescent. Ann. Appl.
Probab., 9(1):78–109, 1999.
[148] C. Oh and C. M. Sorensen. The effect of overlap between monomers on
the determination of fractal cluster morphology. J. Colloid Interf. Sci., 193:
17–25, 1997.
[149] B. Oktem, M. P. Tolocka, B. Zhao, H. Wang, and M. V. Johnston. Chemical
species associated with the early stage of soot growth in a laminar premixed
ethylene-oxygen-argon flame. Combust. Flame, 142:364–373, 2005.
[150] E. S. Oran, C. K. Oh, and B. Z. Cybyk. Direct simulation Monte Carlo:
Recent advances and applications. Annu. Rev. Fluid Mech., 30:403–441,
1998.
[151] F. Ossler and J. Larsson. Exploring the formation of carbon-
based molecules, clusters and particles by in situ detection of scat-
tered x-ray radiation. Chem. Phys. Lett., 387:367–371, 2004.
doi:10.1016/j.cplett.2004.02.046.
[152] F. Ossler and J. Larsson. Measurements of the structures of nanoparticles
in flames by in situ detection of scattered x-ray radiation. J. Appl. Phys.,
98:114317, 2005. doi:10.1063/1.2140080.
[153] E. Otto, H. Fissan, S. H. Park, and K. W. Lee. The log-normal size distri-
bution theory of Brownian aerosol coagulation for the entire particle size
range: Part II Analytical solution using Dahneke’s coagulation kernel. J.
Aerosol Sci., 30(1):17–34, 1999.
[154] S. H. Park and S. N. Rogak. A one-dimensional model for coagulation,
sintering, and surface growth of aerosol agglomerates. Aerosol Sci. Tech.,
37(12):947–960, 2003.
[155] S. H. Park and S. N. Rogak. A novel fixed-sectional model for the formation
and growth of aerosol agglomerates. J. Aerosol Sci., 35(11):1385–1404,
2004.
[156] S. H. Park, S. N. Rogak, W. K. Bushe, J. Z. Wen, and M. J. Thomson.
An aerosol model to predict size and structure of soot particles. Combust.
Theory Model., 9(3):499–513, 2005.
[157] R. I. A. Patterson, J. Singh, M. Balthasar, M. Kraft, and J. R. Norris. The
linear process deferment algorithm: A new technique for solving popula-
tion balance equations. SIAM J. Sci. Comput., 28(1):303–320, 2006.
[158] C. Pilinis. Derivation and numerical solution of the species mass distribu-
tion equations for multicomponent particulate systems. Atmos. Environ.,
24(7):1923–1928, 1990.
[159] S. E. Pratsinis. Simultaneous nucleation, condensation, and coagulation in
aerosol reactors. J. Colloid Interf. Sci., 124(2):416–427, 1988.
[160] J. Pyykonen and J. Jokiniemi. Computational fluid dynamics based sec-
tional aerosol modelling schemes. J. Aerosol Sci., 31(5):531–550, 2000.
[161] B. Quay, T. W. Lee, T. Ni, and R. J. Santoro. Spatially resolved measure-
ments of soot volume fraction using laser-induced incandescence. Com-
bust. Flame, 97(3-4):384–392, 1994.
[162] R Development Core Team. R: A language and environment for statistical
computing. R Foundation for Statistical Computing, Vienna, Austria, 2005.
URL http://www.R-project.org. ISBN 3-900051-07-0.
[163] K. Radhakrishnan and A. C. Hindmarsh. Description and use of LSODE,
the Livermore solver for ordinary differential equations. Technical Report
UCRL-ID-113855, LLNL Report, 1994.
[164] D. Ramkrishna. Population Balances. Academic Press, 2000.
[165] Z. Ren and S. B. Pope. The geometry of reaction trajectories and attracting
manifolds in composition space. Combust. Theor. Model., 10(3):361–388,
2006.
[166] K. L. Revzan, N. J. Brown, and M. Frenklach. URL http://www.me.
berkeley.edu/soot/.
[167] H. Richter, S. Granata, W. H. Green, and J. B. Howard. Detailed
modeling of PAH and soot formation in a laminar premixed ben-
zene/oxygen/argon low-pressure flame. Proc. Combust. Inst., 30:1397–
1405, 2005. doi:10.1016/j.proci.2004.08.088.
[168] N. Riefler, S. di Stasio, and T. Wriedt. Structural analysis of clusters using
configurational and orientational averaging in light scattering analysis. J.
Quant. Spectrosc. Ra., 89:323–342, 2004.
[169] S. Rjasanow and W. Wagner. A stochastic weighted particle method for the
Boltzmann equation. J. Comput. Phys., 124:243–253, 1996.
[170] S. Rjasanow and W. Wagner. Simulation of rare events by the stochastic
weighted particle method for the Boltzmann equation. Math. and Comput.
Modelling, 33:907–926, 2001.
[171] D. E. Rosner and S. Yu. MC simulation of aerosol aggregation and simul-
taneous spheroidization. AIChE J., 47(3):545–561, 2001.
[172] D. E. Rosner and J. J. Pyykonen. Bivariate moment simulation of coag-
ulating and sintering nanoparticles in flames. AIChE J., 48(3):476–491,
2002.
[173] A. I. Roussos, A. H. Alexopoulos, and C. Kiparissides. Part III: Dynamic
evolution of the particle size distribution in batch and continuous particulate
processes: A Galerkin on finite elements approach. Chem. Eng. Sci., 60:
6998–7010, 2005.
[174] A. Sandu and C. Borden. A framework for the numerical treat-
ment of aerosol dynamics. Appl. Numer. Math., 45:475–497, 2003.
doi:10.1016/S0168-9274(02)00251-9.
[175] D. A. Schwer, P. Lu, and W. H. Green Jr. An adaptive chemistry approach to
modeling complex kinetics in reacting flows. Combust. Flame, 133:451–
465, 2003. doi:10.1016/S0010-2180(03)00045-2.
[176] K. Siegmann and K. Sattler. Formation mechanism for polycyclic aromatic
hydrocarbons in methane flames. J. Chem. Phys., 112(2):698–709, 2000.
[177] J. Singh, R. Patterson, M. Balthasar, M. Kraft, and W. Wagner. Modelling
soot particle size distribution: Dynamics of pressure regimes. Technical
Report 25, c4e Preprint-Series, Cambridge, 2004.
[178] J. Singh, M. Balthasar, M. Kraft, and W. Wagner. Stochastic modelling of
soot particle size and age distribution in laminar premixed flames. Proc.
Combust. Inst., 30:1457–1465, 2005.
[179] J. Singh, R. I. A. Patterson, M. Kraft, and H. Wang. Numerical simulation
and sensitivity analysis of detailed soot particle size distribution in laminar
premixed ethylene flames. Combust. Flame, 145:117–127, 2006.
[180] J. Slowik, K. Stainken, P. Davidovits, L. Williams, J. Jayne, C. Kolb,
D. Worsnop, Y. Rudich, P. DeCarlo, and J. Jimenez. Particle morphol-
ogy and density characterization by combined mobility and aerodynamic
diameter measurements. part 2: Application to combustion-generated soot
aerosols as a function of fuel equivalence ratio. Aerosol Sci. Tech., 38(12):
1206–1222, 2004. doi:10.1080/027868290903916.
[181] M. Smith and T. Matsoukas. Constant-number Monte Carlo simulation of
population balances. Chem. Eng. Sci., 53(9):1777–1786, 1998.
[182] M. D. Smooke, C. S. McEnally, L. Pfefferle, R. J. Hall, and M. B. Colket.
Computational and experimental study of soot formation in a coflow, lami-
nar diffusion flame. Combust. Flame, 117:117–139, 1999.
[183] M. D. Smooke, R. J. Hall, M. B. Colket, J. Fielding, M. B. Long, C. S.
McEnally, and L. D. Pfefferle. Investigation of the transition from lightly soot-
ing towards heavily sooting co-flow ethylene diffusion flames. Combust.
Theor. Model., 8(3):593–606, 2004. doi:10.1088/1364-7830/8/3/009.
[184] M. D. Smooke, M. B. Long, B. C. Connelly, M. B. Colket, and R. J. Hall. Soot
formation in laminar diffusion flames. Combust. Flame, 143(4):613–628,
2005. doi:10.1016/j.combustflame.2005.08.028.
[185] C. M. Sorensen. Light scattering by fractal aggregates: A review. Aerosol
Sci. Tech., 35:648–687, 2001.
[186] C. M. Sorensen and G. D. Feke. Post flame soot. In D. P. Lund and E. A.
Angell, editors, Proceedings of the International Conference on Fire Re-
search and Engineering, pages 280–285, Boston, MA, 1995. Society of
Fire Protection Engineers.
[187] C. M. Sorensen, J. Cai, and N. Lu. Light-scattering measurements of
monomer size, monomers per aggregate, and fractal dimension for soot
aggregates in flames. Applied Optics, 31(30):6547–6557, 1992.
[188] L. A. Spielman and O. Levenspiel. A Monte Carlo treatment for reacting
and coalescing dispersed phase systems. Chem. Eng. Sci., 20:247–254,
1965.
[189] C. B. Stipe, B. S. Higgins, D. Lucas, C. P. Koshland, and R. F. Sawyer.
Inverted co-flow diffusion flame for producing soot. Rev. Sci. Instrum., 76:
023908, 2005. doi:10.1063/1.1851492.
[190] Z. Sun, R. L. Axelbaum, and R. W. Davis. A sectional model for inves-
tigating microcontamination in a rotating disk CVD reactor. Aerosol Sci.
Tech., 38:1161–1170, 2004. doi:10.1080/027868290896799.
[191] Z. Sun, R. L. Axelbaum, and J. I. Huertas. Monte Carlo sim-
ulation of multicomponent aerosols undergoing simultaneous coagu-
lation and condensation. Aerosol Sci. Tech., 38:963–971, 2004.
doi:10.1080/027868290513847.
[192] J. P. R. Symonds, K. S. J. Reavell, J. S. Olfert, B. W. Campbell,
and S. J. Swift. Diesel soot mass calculation in real-time with a dif-
ferential mobility spectrometer. J. Aerosol Sci., 38(1):52–68, 2007.
doi:10.1016/j.jaerosci.2006.10.001.
[193] D. A. Terry, R. McGraw, and R. H. Rangel. Method of moments solutions
for a laminar flow aerosol reactor model. Aerosol Sci. Tech., 34:353–362,
2001.
[194] K. Tian, K. A. Thomson, F. Liu, D. R. Snelling, G. J. Smallwood, and
D. Wang. Determination of the morphology of soot aggregates using the
relative optical density method for the analysis of TEM images. Combust.
Flame, 144:782–791, 2006. doi:10.1016/j.combustflame.2005.06.017.
[195] K. Tian, F. Liu, M. Yang, K. A. Thomson, D. R. Snelling, and G. J. Small-
wood. Numerical simulation aided relative optical density analysis of TEM
images for soot morphology determination. Proc. Combust. Inst., 31(1):
861–868, 2007.
[196] S. Tsantilis and S. E. Pratsinis. Soft- and hard-agglomerate aerosols made
at high temperatures. Langmuir, 20(14):5933–5939, 2004.
[197] S. Tsantilis, H. K. Kammler, and S. E. Pratsinis. Population balance mod-
eling of flame synthesis of titania nanoparticles. Chem. Eng. Sci., 57:2139–
2156, 2002.
[198] M. Vanni. Approximate population balance equations for aggregation-
breakage processes. J. Colloid Interf. Sci., 221(2):143–160, 2000.
doi:10.1006/jcis.1999.6571.
[199] W. N. Venables and B. D. Ripley. Modern Applied Statistics with S.
Springer, 4th edition, 2002. ISBN 0-387-95457-0.
[200] A. Vikhansky and M. Kraft. Conservative method for the re-
duction of the number of particles in the Monte Carlo simulation
method for kinetic equations. J. Comput. Phys., 203:371–378, 2005.
doi:10.1016/j.jcp.2004.09.007.
[201] A. Violi. Modeling of soot particle inception in aromatic and
aliphatic premixed flames. Combust. Flame, 139:279–287, 2004.
doi:10.1016/j.combustflame.2004.08.013.
[202] A. Violi and A. Venkatnathan. Combustion-generated nanoparticles pro-
duced in a benzene flame: A multiscale approach. J. Chem. Phys., 125:
054302, 2006. doi:10.1063/1.2234481.
[203] A. Violi, A. F. Sarofim, and G. A. Voth. Kinetic Monte-Carlo—molecular
dynamics approach to model soot inception. Combust. Sci. Tech., 176:991–
1005, 2004. doi:10.1080/00102200490428594.
[204] A. Violi, G. A. Voth, and A. F. Sarofim. The relative roles of acetylene and
aromatic precursors during soot particle inception. Proc. Combust. Inst.,
30(1):1343–1351, 2005. doi:10.1016/j.proci.2004.08.226.
[205] L. Wallcave. Gas chromatographic analysis of polycyclic aromatic hydro-
carbons in soot samples. Environ. Sci. Tech., 3(10):948, 1969.
[206] D. Wang, A. Violi, D. H. Kim, and J. A. Mulholland. Formation of Naph-
thalene, Indene, and Benzene from Cyclopentadiene pyrolysis: A DFT
study. J. Phys. Chem. A, 110:4719–4725, 2006. doi:10.1021/jp053628a.
[207] H. Wang and M. Frenklach. A detailed kinetic modeling study of aromatics
formation in laminar premixed acetylene and ethylene flames. Combust.
Flame, 110:173–221, 1997.
[208] H. Wang, B. Shao, B. Wyslouzil, and K. Streletzky. Small-angle neutron
scattering of soot formed in laminar premixed ethylene flames. Proc. Com-
bust. Inst., 29:2749–2757, 2002.
[209] L. Wang, M. F. Modest, D. C. Haworth, and S. R. Turns. Mod-
elling nongrey gas-phase and soot radiation in luminous turbulent non-
premixed jet flames. Combust. Theor. Model., 9(3):479–498, 2005.
doi:10.1080/13647830500194834.
[210] Y. H. Wei, L. X. Zhang, and W. Ke. Evaluation of corrosion protection
of carbon black filled fusion-bonded epoxy coatings on mild steel during
exposure to a quiescent 3% NaCl solution. Corros. Sci., 49:287–302,
2007.
[211] C. Wells, N. Morgan, M. Kraft, and W. Wagner. A new method for calculating the diameters of partially-sintered nanoparticles and its effect on simulated particle properties. Chem. Eng. Sci., 61(1):158–166, 2006.
[212] C. Wells, N. Morgan, M. Kraft, and W. Wagner. A new method for cal-
culating the diameters of partially-sintered nanoparticles and its effect on
simulated particle properties. Chem. Eng. Sci., 61(1):158–166, 2006.
[213] C. G. Wells and M. Kraft. Direct simulations and mass flow algorithms to
solve a sintering-coagulation equation. Technical Report 10, c4e Preprint-
Series, Cambridge, 2003. Material from this preprint has been published as
[214].
[214] C. G. Wells and M. Kraft. Direct simulation and mass flow stochastic algorithms to solve a sintering-coagulation equation. Monte Carlo Methods Appl., 11:175–197, 2005. doi:10.1163/156939605777585980.
[215] J. Z. Wen, M. J. Thomson, S. H. Park, S. N. Rogak, and M. F. Lightstone.
Study of soot growth in a plug flow reactor using a moving sectional model.
Proc. Combust. Inst., 30:1477–1484, 2005.
[216] J. Z. Wen, M. J. Thomson, and M. F. Lightstone. Numerical study of carbonaceous nanoparticle formation behind shock waves. Combust. Theor. Model., 10(2):257–272, 2006.
[217] J. Z. Wen, M. J. Thomson, M. F. Lightstone, S. H. Park, and S. N. Rogak.
An improved moving sectional aerosol model of soot formation in a plug
flow reactor. Combust. Sci. Tech., 178(5):921–951, 2006.
[218] J. Z. Wen, M. J. Thomson, M. F. Lightstone, and S. N. Rogak. Detailed ki-
netic modeling of carbonaceous nanoparticle inception and surface growth
during the pyrolysis of C6H6 behind shock waves. Energ. Fuel., 20:547–559,
2006. doi:10.1021/ef050081q.
[219] R. Whitesides, A. C. Kollias, D. Domin, W. A. Lester, Jr., and M. Frenklach.
Graphene layer growth: Collision of migrating five-member rings. Proc.
Combust. Inst., 31(1):539–546, 2007.
[220] M. Wilck and F. Stratmann. A 2-D multicomponent modal aerosol model
and its application to laminar flow reactors. J. Aerosol Sci., 28(6):959–972,
1997.
[221] J. A. Wojcik and A. G. Jones. Dynamics and stability of continuous MSMPR agglomerative precipitation: numerical analysis of the dual particle coordinate model. Comput. Chem. Eng., 22(4/5):535–545, 1998.
[222] X. Y. Woo, R. B. H. Tan, P. S. Chow, and R. D. Braatz. Simulation of mixing effects in antisolvent crystallization using a coupled CFD-PDF-PBE approach. Cryst. Growth Des., 6:1291–1303, 2006. doi:10.1021/cg0503090.
[223] D. L. Wright, R. McGraw, and D. E. Rosner. Bivariate extension of the
quadrature method of moments for modeling simultaneous coagulation and
sintering of particle populations. J. Colloid Interf. Sci., 236:242–251, 2001.
doi:10.1006/jcis.2000.7409.
[224] M. K. Wu and S. K. Friedlander. Enhanced power law agglomerate growth
in the free molecular regime. J. Aerosol Sci., 24(3):273–282, 1993.
[225] M. Wulkow, A. Gerstlauer, and U. Nieken. Modeling and simulation of crystallization processes using Parsival. Chem. Eng. Sci., 56:2575–2588, 2001. doi:10.1016/S0009-2509(00)00432-2.
[226] E. J. W. Wynn. Simulating aggregation and reaction: New Hounslow DPB
and four-parameter summary. AIChE J., 50(3):578–588, 2004.
[227] Y. Xiong and S. E. Pratsinis. Formation of irregular particles by coagulation and sintering: A two-dimensional solution of the population balance equation. J. Aerosol Sci., 22(S1):S199–S202, 1991.
[228] Y. Xiong and S. E. Pratsinis. Formation of agglomerate particles by coagulation and sintering—part I: A two-dimensional solution of the population balance equation. J. Aerosol Sci., 24(3):283–300, 1993.
[229] Y. Xiong, M. K. Akhtar, and S. E. Pratsinis. Formation of agglomerate particles by coagulation and sintering—part II: The evolution of the morphology of aerosol-made titania, silica and silica-doped titania powders. J. Aerosol Sci., 24(3):301–313, 1993.
[230] F. Xu, P. B. Sunderland, and G. M. Faeth. Soot formation in laminar pre-
mixed ethylene/air flames at atmospheric pressure. Combust. Flame, 108:
471–493, 1997.
[231] F. Xu, P. B. Sunderland, and G. M. Faeth. Soot formation in laminar pre-
mixed ethylene/air flames at atmospheric pressure. Combust. Flame, 108:
471–493, 1997.
[232] F. Xu, K. C. Lin, and G. M. Faeth. Soot formation in laminar premixed
methane/oxygen flames at atmospheric pressure. Combust. Flame, 115:
195–209, 1998.
[233] S. Yan, Y.-J. Jiang, N. D. Marsh, E. G. Eddings, A. F. Sarofim, and R. J.
Pugmire. Study of the evolution of soot from various fuels. Energ. Fuel.,
19:1804–1811, 2005.
[234] B. Yang and U. O. Koylu. Detailed soot field in a turbulent non-premixed
ethylene/air flame from laser scattering and extinction experiments. Com-
bust. Flame, 141(1-2):55–65, 2005.
[235] C. Yoon and R. McGraw. Representation of generally mixed multivariate
aerosols by the quadrature method of moments: I. Statistical foundation. J.
Aerosol Sci., 35:561–576, 2004. doi:10.1016/j.jaerosci.2003.11.003.
[236] C. Yoon and R. McGraw. Representation of generally mixed multivariate
aerosols by the quadrature method of moments: II. Aerosol dynamics. J.
Aerosol Sci., 35:577–598, 2004. doi:10.1016/j.jaerosci.2003.11.012.
[237] S. Yu, Y. Yoon, M. Muller-Roosen, and I. M. Kennedy. A two-dimensional
discrete-sectional model for metal aerosol dynamics in a flame. Aerosol
Sci. Tech., 28:185–196, 1998.
[238] B. Zhao, Z. Yang, M. V. Johnston, H. Wang, A. S. Wexler, M. Balthasar,
and M. Kraft. Measurement and numerical simulation of soot particle size
distribution functions in a laminar premixed ethylene-oxygen-argon flame.
Combust. Flame, 133:173–188, 2003.
[239] B. Zhao, Z. Yang, J. Wang, M. V. Johnston, and H. Wang. Analy-
sis of soot nanoparticles in a laminar premixed ethylene flame by scan-
ning mobility particle sizer. Aerosol Sci. Tech., 37(8):611–620, 2003.
doi:10.1080/02786820300908.
[240] B. Zhao, Z. Yang, Z. Li, M. V. Johnston, and H. Wang. Particle size dis-
tribution function of incipient soot in laminar premixed ethylene flames:
effect of flame temperature. Proc. Combust. Inst., 30:1441–1448, 2005.
[241] B. Zhao, K. Uchikawa, and H. Wang. A comparative study of nanoparticles
in premixed flames by scanning mobility particle sizer, small angle neutron
scattering, and transmission electron microscopy. Proc. Combust. Inst., 31
(1):851–860, 2007.
[242] H. Zhao, C. Zheng, and M. Xu. Multi-Monte Carlo approach for particle
coagulation: description and validation. Appl. Math. Comput., 167:1383–
1399, 2005. doi:10.1016/j.amc.2004.08.014. This paper is by the same authors as [76]; the two published manuscripts simply reverse the order of the authors' given and family names.
[243] H. Zhao, A. Maisels, T. Matsoukas, and C. Zheng. Analysis of four Monte Carlo methods for the solution of population balances in dispersed systems. Powder Tech., 173:38–50, 2007. doi:10.1016/j.powtec.2006.12.010.
[244] A. Zucca, D. L. Marchisio, A. A. Barresi, and R. O. Fox. Implemen-
tation of the population balance equation in CFD codes for modelling
soot formation in turbulent flames. Chem. Eng. Sci., 61:87–95, 2006.
doi:10.1016/j.ces.2004.11.061.
[245] M. Zurita-Gotor and D. E. Rosner. Effective diameters for collisions of
fractal-like aggregates: Recommendations for improved aerosol coagula-
tion frequency predictions. J. Colloid Interf. Sci., 255:10–26, 2002.
[246] M. Zurita-Gotor and D. E. Rosner. Aggregate size distribution evolution for
Brownian coagulation—sensitivity to an improved rate constant. J. Colloid
Interf. Sci., 274:502–514, 2004.