A W Jayawardena
Environmental and Hydrological Systems Modelling
- Paperback
Providing the tools students and professionals need, this book details different approaches to modelling the water environment over a range of spatial and temporal scales. Applications are illustrated with a series of case studies drawn mainly from the Asia-Pacific region. Topics include linear systems, conceptual models, data-driven models, process-based models, risk-management models, and model parameter estimation, as well as model calibration, validation, and testing.
Product details
- Publisher: CRC Press
- Pages: 536
- Publication date: 21 January 2014
- Language: English
- Dimensions: 249 mm x 174 mm x 32 mm
- Weight: 933 g
- ISBN-13: 9780415465328
- ISBN-10: 041546532X
- Item no.: 27269721
A.W. Jayawardena obtained his undergraduate degree, BSc (Eng) Hons, from the University of Ceylon (now Sri Lanka) and his postgraduate degrees, MEng from the University of Tokyo, MS from the University of California at Berkeley, and PhD from the University of London. He is a Chartered Engineer, a Fellow of the UK Institution of Civil Engineers, a Fellow of the Hong Kong Institution of Engineers, and a Life Member of the American Society of Civil Engineers. His academic career includes many years of teaching in the Department of Civil Engineering of the University of Hong Kong. He has also served as Research and Training Advisor to the International Centre for Water Hazard and Risk Management (ICHARM), established under the auspices of UNESCO and hosted by the Public Works Research Institute of Japan; as a Professor at the National Graduate Institute for Policy Studies, Japan; and as an Honorary Professor in the Department of Statistics and Actuarial Sciences of the University of Hong Kong. He is currently an Adjunct Professor in the Department of Civil Engineering of the University of Hong Kong, Technical Advisor to the Research and Development Centre of Nippon Koei Co. Ltd. (Consulting Engineers), Japan, and a Guest Professor at Beijing Normal University, China. He has been a specialist consultant for UNESCO and for several engineering consulting companies in Hong Kong, and has provided expert opinion as an expert witness in several legal cases in Hong Kong.
Preface
Author
1 Introduction
1.1 Some definitions
1.1.1 System
1.1.2 State of a system
1.2 General systems theory (GST)
1.3 Ecological systems (Ecosystems)
1.4 Equi-finality
1.5 Scope and layout
References
2 Historical development of hydrological modelling
2.1 Basic concepts and governing equation of linear systems
2.1.1 Time domain analysis
2.1.1.1 Types of input functions
2.1.1.2 System response function - convolution integral
2.1.2 Frequency domain analysis
2.1.2.1 Fourier transform - frequency response function (FRF)
2.1.2.2 Laplace transform
2.1.2.3 z-Transform
2.2 Linear systems in hydrological modelling
2.2.1 Hydrological systems
2.2.2 Unit hydrograph
2.2.2.1 Unit hydrograph for a complex storm
2.2.2.2 Instantaneous unit hydrograph (IUH)
2.2.2.3 Empirical unit hydrograph
2.2.2.4 Unit pulse response function
2.2.3 Linear reservoir
2.2.4 Linear cascade
2.2.5 Linear channel
2.2.6 Time-area diagram
2.3 Random processes and linear systems
2.4 Non-linear systems
2.4.1 Determination of the kernel functions
2.5 Multilinear or parallel systems
2.6 Flood routing
2.6.1 Inventory method
2.6.2 Muskingum method
2.6.2.1 Estimation of the routing parameters K and c
2.6.2.2 Limitations of the Muskingum method
2.6.3 Modified Puls method
2.6.4 Muskingum-Cunge method
2.6.5 Hydraulic approach
2.6.5.1 Solution of the St. Venant equations
2.6.5.2 Diffusion wave approximation
2.6.5.3 Kinematic wave approximation
2.7 Reservoir routing
2.8 Rainfall-runoff modelling
2.8.1 Conceptual-type hydrologic models
2.8.1.1 Stanford watershed model (SWM)
2.8.1.2 Tank model
2.8.1.3 HEC series
2.8.1.4 Xinanjiang model
2.8.1.5 Variable infiltration capacity (VIC) model
2.8.2 Physics-based hydrologic models
2.8.2.1 Système Hydrologique Européen (SHE) model
2.8.3 Data-driven models
2.8.3.1 Why data-driven models?
2.8.3.2 Types of data-driven models
2.9 Guiding principles and criteria for choosing a model
2.10 Challenges in hydrological modelling
2.11 Concluding remarks
References
3 Population dynamics
3.1 Introduction
3.2 Malthusian growth model
3.3 Verhulst growth model
3.4 Predator-prey (Lotka-Volterra) model
3.5 Gompertz curve
3.6 Logistic map
3.6.1 Specific points in the logistic map
3.7 Cell growth
3.7.1 Cell division
3.7.2 Exponential growth
3.7.3 Cell growth models in a batch (closed system) bioreactor
3.8 Bacterial growth
3.8.1 Binary fission
3.8.2 Monod kinetics
3.9 Radioactive decay and carbon dating
3.10 Concluding remarks
References
4 Reaction kinetics
4.1 Introduction
4.2 Michaelis-Menten equation
4.3 Monod equation
4.4 Concluding remarks
References
5 Water quality systems
5.1 Dissolved oxygen systems
5.1.1 Biochemical oxygen demand (BOD)
5.1.2 Nitrification
5.1.3 Denitrification
5.1.4 Oxygen depletion equation in a river due to a single point source of BOD
5.1.5 Reoxygenation coefficient
5.1.6 Deoxygenation coefficient
5.2 Water quality in a completely mixed water body
5.2.1 Governing equations for a completely mixed system
5.2.2 Step function input
5.2.3 Periodic input function
5.2.4 Fourier series input
5.2.5 General harmonic response
5.2.6 Impulse input
5.2.7 Arbitrary input
5.3 Water quality in rivers and streams
5.3.1 Point sources
5.3.2 Distributed sources
5.3.3 Effect of spatial flow variation
5.3.3.1 Exponential spatial flow variation
5.3.4 Unsteady state
5.3.4.1 Non-dispersive systems
5.3.4.2 Dispersive systems
5.3.5 Tidal reaches
5.3.5.1 Special case of no decay
5.3.5.2 Special case of no dispersion
5.4 Concluding remarks
References
6 Longitudinal dispersion
6.1 Introduction
6.2 Governing equations
6.2.1 Some characteristics of turbulent diffusion
6.2.2 Shear flow dispersion
6.2.3 Taylor's approximation
6.2.4 Turbulent mixing coefficients
6.3 Dispersion coefficient
6.3.1 Routing method
6.3.2 Time scale - dimensionless time
6.4 Numerical solution
6.4.1 Finite difference method
6.4.2 Finite element methods
6.4.3 Moving finite elements
6.5 Dispersion through porous media
6.6 General-purpose water quality models
6.6.1 Enhanced Stream Water Quality Model (QUAL2E)
6.6.2 Water Quality Analysis Simulation Programme (WASP)
6.6.3 One Dimensional Riverine Hydrodynamic and Water Quality Model (EPD-RIV1)
6.7 Concluding remarks
References
7 Time series analysis and forecasting
7.1 Introduction
7.2 Basic properties of a time series
7.2.1 Stationarity
7.2.2 Ergodicity
7.2.3 Homogeneity
7.3 Statistical parameters of a time series
7.3.1 Sample moments
7.3.2 Moving averages - low-pass filtering
7.3.3 Differencing - high-pass filtering
7.3.4 Recursive means and variances
7.4 Tests for stationarity
7.5 Tests for homogeneity
7.5.1 von Neumann ratio
7.5.2 Cumulative deviations
7.5.3 Bayesian statistics
7.5.4 Ratio test
7.5.5 Pettitt test
7.6 Components of a time series
7.7 Trend analysis
7.7.1 Tests for randomness and trend
7.7.1.1 Turning point test for randomness
7.7.1.2 Kendall's rank correlation test (τ test)
7.7.1.3 Regression test for linear trend
7.7.1.4 Mann-Kendall test
7.7.2 Trend removal
7.7.2.1 Splines
7.8 Periodicity
7.8.1 Harmonic analysis - cumulative periodogram
7.8.2 Autocorrelation analysis
7.8.3 Spectral analysis
7.8.3.1 Hanning method (after J. von Hann)
7.8.3.2 Hamming method (after R.W. Hamming, 1983)
7.8.3.3 Lag window method (after Tukey, 1965)
7.8.4 Cross correlation
7.8.5 Cross-spectral density function
7.9 Stochastic component
7.9.1 Autoregressive (AR) models
7.9.1.1 Properties of autoregressive models
7.9.1.2 Estimation of parameters
7.9.1.3 First-order model (lag-one Markov model)
7.9.1.4 Second-order model (lag-two model)
7.9.1.5 Partial autocorrelation function (PAF)
7.9.2 Moving average (MA) models
7.9.2.1 Properties of MA models
7.9.2.2 Parameters of MA models
7.9.2.3 MA(1) model
7.9.2.4 MA(2) model
7.9.3 Autoregressive moving average (ARMA) models
7.9.3.1 Properties of ARMA(p, q) models
7.9.3.2 ARMA(1, 1) model
7.9.4 Backshift operator
7.9.5 Difference operator
7.9.6 Autoregressive integrated moving average (ARIMA) models
7.10 Residual series
7.10.1 Test of independence
7.10.2 Test of normality
7.10.3 Other distributions
7.10.4 Test for parsimony
7.10.4.1 Akaike information criterion (AIC) and Bayesian information criterion (BIC)
7.10.4.2 Schwarz Bayesian criterion (SBC)
7.11 Forecasting
7.11.1 Minimum mean square error type difference equation
7.11.2 Confidence limits
7.11.3 Forecast errors
7.11.4 Numerical examples of forecasting
7.12 Synthetic data generation
7.13 ARMAX modelling
7.14 Kalman filtering
7.15 Parameter estimation
7.16 Applications
7.17 Concluding remarks
Appendix 7.1: Fourier series representation of a periodic function
References
8 Artificial neural networks
8.1 Introduction
8.2 Origin of artificial neural networks
8.2.1 Biological neuron
8.2.2 Artificial neuron
8.2.2.1 Bias/threshold
8.3 Unconstrained optimization techniques
8.3.1 Method of steepest descent
8.3.2 Newton's method (quadratic approximation)
8.3.3 Gauss-Newton method
8.3.4 LMS algorithm
8.4 Perceptron
8.4.1 Linear separability
8.4.2 'AND', 'OR', and 'XOR' operations
8.4.3 Multilayer perceptron (MLP)
8.4.4 Optimal structure of an MLP
8.5 Types of activation function
8.5.1 Linear activation function (unbounded)
8.5.2 Saturating activation function (bounded)
8.5.3 Symmetric saturating activation function (bounded)
8.5.4 Positive linear activation function
8.5.5 Hardlimiter (Heaviside function; McCulloch-Pitts model) activation function
8.5.6 Symmetric hardlimiter activation function
8.5.7 Signum function
8.5.8 Triangular activation function
8.5.9 Sigmoid logistic activation function
8.5.10 Sigmoid hyperbolic tangent function
8.5.11 Radial basis functions
8.5.11.1 Multiquadratic
8.5.11.2 Inverse multiquadratic
8.5.11.3 Gaussian
8.5.11.4 Polyharmonic spline function
8.5.11.5 Thin plate spline function
8.5.12 Softmax activation function
8.6 Types of artificial neural networks
8.6.1 Feed-forward neural networks
8.6.2 Recurrent neural networks
8.6.2.1 Back-propagation through time (BPTT)
8.6.3 Self-organizing maps (Kohonen networks)
8.6.4 Product unit-based neural networks (PUNN)
8.6.4.1 Generation of the initial population
8.6.4.2 Fitness function
8.6.4.3 Parametric mutation
8.6.4.4 Structural mutation
8.6.5 Wavelet neural networks
8.7 Learning modes and learning
8.7.1 Learning modes
8.7.2 Types of learning
8.7.2.1 Error correction learning (optimum filtering)
8.7.2.2 Memory-based learning
8.7.2.3 Hebbian learning (Hebb, 1949) (unsupervised)
8.7.2.4 Competitive learning (unsupervised)
8.7.2.5 Boltzmann learning
8.7.2.6 Reinforced learning (unsupervised)
8.7.2.7 Hybrid learning
8.7.3 Learning rate (η) and momentum term (α)
8.8 BP algorithm
8.8.1 Generalized delta rule
8.9 ANN implementation details
8.9.1 Data preprocessing: Principal Component Analysis (PCA)
8.9.1.1 Eigenvalue decomposition
8.9.1.2 Deriving the new data set
8.9.2 Data normalization
8.9.3 Choice of input variables
8.9.4 Heuristics for implementation of BP
8.9.5 Stopping criteria
8.9.6 Performance criteria
8.10 Feedback Systems
8.11 Problems and limitations
8.12 Application areas
8.12.1 Hydrological applications
8.12.1.1 River discharge prediction
8.12.2 Environmental applications
8.12.2.1 Algal bloom prediction, Hong Kong
8.13 Concluding remarks
References
9 Radial basis function (RBF) neural networks
9.1 Introduction
9.2 Interpolation
9.3 Regularization
9.4 Generalized RBFs
9.5 Normalized radial basis functions (NRBFs) and kernel regression
9.6 Learning of RBFs
9.6.1 Fixed centres selection (random)
9.6.2 Forward selection
9.6.3 Orthogonal least squares (OLS) algorithm
9.6.3.1 Regularized orthogonal least squares (ROLS) algorithm
9.6.4 Self-organized selection of centres
9.6.5 Supervised selection of centres
9.6.6 Selection of centres using the concept of generalized degrees of freedom
9.6.6.1 Training of RBF networks
9.6.6.2 Computational procedure
9.6.7 Other methods of learning
9.7 Curse of dimensionality
9.8 Performance criteria
9.9 Comparison of MLP versus RBF networks
9.10 Applications
9.11 Concluding remarks
References
10 Fractals and chaos
10.1 Introduction
10.2 Fractal dimensions
10.2.1 Topological dimension
10.2.2 Fractal dimension
10.2.3 Hausdorff dimension
10.2.4 Box-counting dimension
10.2.5 Similarity dimension
10.2.6 Packing dimension
10.2.7 Information dimension
10.2.8 Capacity dimension
10.2.9 Rényi dimension
10.2.10 Correlation dimension
10.3 Examples of some well-known fractals
10.3.1 Cantor set
10.3.2 Sierpinski (gasket) triangle
10.3.3 Koch curve
10.3.4 Koch snowflake (or Koch star)
10.3.5 Mandelbrot set
10.3.6 Julia set
10.4 Perimeter-area relationship of fractals
10.5 Chaos
10.5.1 Butterfly effect
10.5.2 The n-body problem
10.6 Some definitions
10.6.1 Metric space
10.6.2 Manifold
10.6.3 Map
10.6.4 Attractor
10.6.4.1 Strange attractor
10.6.5 Dynamical system
10.6.6 Phase (or state) space
10.7 Invariants of chaotic systems
10.7.1 Lyapunov exponent
10.7.2 Entropy of a dynamical system
10.7.2.1 Kolmogorov-Sinai (K-S) entropy
10.7.2.2 Modified correlation entropy
10.7.2.3 K-S entropy and the Lyapunov spectrum
10.8 Examples of known chaotic attractors
10.8.1 Logistic map
10.8.1.1 Bifurcation
10.8.2 Hénon map
10.8.3 Lorenz map
10.8.4 Duffing equation
10.8.5 Rössler equations
10.8.6 Chua's equation
10.9 Application areas of chaos
10.10 Concluding remarks
References
11 Dynamical systems approach of modelling
11.1 Introduction
11.2 Random versus chaotic deterministic systems
11.3 Time series as a dynamical system
11.3.1 Dynamical system
11.3.2 Sensitivity to initial conditions
11.4 Embedding
11.4.1 Embedding theorem
11.4.2 Embedding dimension
11.4.2.1 False nearest neighbour (FNN) method
11.4.2.2 Singular value decomposition (SVD)
11.4.3 Delay time
11.4.3.1 Average mutual information
11.4.4 Irregular embeddings
11.5 Phase (or state) space reconstruction
11.6 Phase space prediction
11.7 Inverse problem
11.7.1 Prediction error
11.8 Non-linearity and determinism
11.8.1 Test for non-linearity
11.8.1.1 Significance
11.8.1.2 Test statistics
11.8.1.3 Method of surrogate data
11.8.1.4 Null hypotheses
11.8.2 Test for determinism
11.9 Noise and noise reduction
11.9.1 Noise in data
11.9.2 Noise reduction
11.9.3 Noise level
11.10 Application areas
11.11 Concluding remarks
Appendices
Appendix 11.1: Derivation of Equation 11.81
Appendix 11.2: Proof of Equation 11.82b
Appendix 11.3: Proof of Equation A1-4
References
12 Support vector machines
12.1 Introduction
12.2 Linearly separable binary classification
12.3 Soft-margin binary classification
12.3.1 Linear soft margin
12.3.2 Non-linear classification
12.4 Support vector regression
12.4.1 Linear support vector regression
12.4.2 Non-linear support vector regression
12.5 Parameter selection
12.6 Kernel tricks
12.7 Quadratic programming
12.8 Limitations and problems
12.9 Application areas
12.10 Concluding remarks
Appendix 12.1: Statistical learning
Empirical risk minimization (ERM)
Structural risk minimization (SRM)
Appendix 12.2: Karush-Kuhn-Tucker (KKT) conditions
References
13 Fuzzy logic systems
13.1 Introduction
13.2 Fuzzy sets and fuzzy operations
13.2.1 Fuzzy sets
13.2.2 Logical operators AND, OR, and NOT
13.2.2.1 Intersection
13.2.2.2 Union
13.2.2.3 Other useful definitions
13.2.3 Linguistic variables
13.3 Membership functions
13.3.1 Triangular
13.3.2 Trapezoidal
13.3.3 Gaussian
13.3.4 Asymmetric Gaussian
13.3.5 Generalized bell-shaped Gaussian
13.3.6 Sigmoidal
13.3.7 Singleton
13.4 Fuzzy rules
13.5 Fuzzy inference
13.5.1 Fuzzy or approximate reasoning
13.5.2 Mamdani fuzzy inference system
13.5.2.1 Fuzzification of input
13.5.2.2 Application of fuzzy operators 'AND' or 'OR'
13.5.2.3 Implication from antecedent to consequent
13.5.2.4 Aggregation of consequents across the rules
13.5.2.5 Defuzzification
13.5.3 Takagi-Sugeno-Kang (TSK) fuzzy inference system
13.5.3.1 Clustering
13.5.4 Tsukamoto inference system
13.5.5 Larsen inference system
13.6 Neuro-fuzzy system
13.6.1 Types of neuro-fuzzy systems
13.6.1.1 Umano and Ezawa (1991) fuzzy-neural model
13.7 Adaptive neuro-fuzzy inference systems (ANFIS)
13.7.1 Hybrid learning
13.8 Application areas
13.9 Concluding remarks
References
14 Genetic algorithms (GAs) and genetic programming (GP)
14.1 Introduction
14.2 Coding
14.3 Genetic operators
14.4 Parameters of GA
14.5 Genetic programming (GP)
14.6 Application areas
14.7 Concluding remarks
References
Index