Numerical Bayesian Methods Applied to Signal Processing

From the series Statistics and Computing
Save 12%

187.99 € (RRP 213.99 €)

Incl. VAT, free shipping

Home delivery

Description

Details

Binding: Hardcover
Publication date: 23 February 1996
Illustrations: XIV, with 118 illustrations; 24.5 cm
Publisher: Springer US
Number of pages: 244
Dimensions (L/W/H): 23.5/15.5/2 cm
Weight: 565 g
Edition: 1996
Language: English
ISBN: 978-0-387-94629-0

Manufacturer address

Springer-Verlag GmbH
Tiergartenstr. 17
69121 Heidelberg
DE

Email: ProductSafety@springernature.com

More volumes in the series Statistics and Computing

Table of contents

1 Introduction

2 Probabilistic Inference in Signal Processing
2.1 Introduction
2.2 The likelihood function
2.2.1 Maximum likelihood
2.3 Bayesian data analysis
2.4 Prior probabilities
2.4.1 Flat priors
2.4.2 Smoothness priors
2.4.3 Convenience priors
2.5 The removal of nuisance parameters
2.6 Model selection using Bayesian evidence
2.6.1 Ockham's razor
2.7 The general linear model
2.8 Interpretations of the general linear model
2.8.1 Features
2.8.2 Orthogonalization
2.9 Example of marginalization
2.9.1 Results
2.10 Example of model selection
2.10.1 Closed form expression for evidence
2.10.2 Determining the order of a polynomial
2.10.3 Determining the order of an AR process
2.11 Concluding remarks

3 Numerical Bayesian Inference
3.1 The normal approximation
3.1.1 Effect of number of data on the likelihood function
3.1.2 Taylor approximation
3.1.3 Reparameterization
3.1.4 Jacobian of transformation
3.1.5 Normal approximation to evidence
3.1.6 Normal approximation to the marginal density
3.1.7 The delta method
3.2 Optimization
3.2.1 Local algorithms
3.2.2 Global algorithms
3.2.3 Concluding remarks
3.3 Integration
3.4 Numerical quadrature
3.4.1 Multiple integrals
3.5 Asymptotic approximations
3.5.1 The saddlepoint approximation and Edgeworth series
3.5.2 The Laplace approximation
3.5.3 Moments and expectations
3.5.4 Marginalization
3.6 The Monte Carlo method
3.7 The generation of random variates
3.7.1 Uniform variates
3.7.2 Non-uniform variates
3.7.3 Transformation of variables
3.7.4 The rejection method
3.7.5 Other methods
3.8 Evidence using importance sampling
3.8.1 Choice of sampling density
3.8.2 Orthogonalization using noise colouring
3.9 Marginal densities
3.9.1 Histograms
3.9.2 Jointly distributed variates
3.9.3 The dummy variable method
3.9.4 Marginalization using jointly distributed variates
3.10 Opportunities for variance reduction
3.10.1 Quasi-random sequences
3.10.2 Antithetic variates
3.10.3 Control variates
3.10.4 Stratified sampling
3.11 Summary

4 Markov Chain Monte Carlo Methods
4.1 Introduction
4.2 Background on Markov chains
4.3 The canonical distribution
4.3.1 Energy, temperature and probability
4.3.2 Random walks
4.3.3 Free energy and model selection
4.4 The Gibbs sampler
4.4.1 Description
4.4.2 Discussion
4.4.3 Convergence
4.5 The Metropolis-Hastings algorithm
4.5.1 The general algorithm
4.5.2 Convergence
4.5.3 Choosing the proposal density
4.5.4 Relationship between Gibbs and Metropolis
4.6 Dynamical sampling methods
4.6.1 Derivation
4.6.2 Hamiltonian dynamics
4.6.3 Stochastic transitions
4.6.4 Simulating the dynamics
4.6.5 Hybrid Monte Carlo
4.6.6 Convergence to canonical distribution
4.7 Implementation of simulated annealing
4.7.1 Annealing schedules
4.7.2 Annealing with Markov chains
4.8 Other issues
4.8.1 Assessing convergence of Markov chains
4.8.2 Determining the variance of estimates
4.9 Free energy estimation
4.9.1 Thermodynamic integration
4.9.2 Other methods
4.10 Summary

5 Retrospective Changepoint Detection
5.1 Introduction
5.2 The simple Bayesian step detector
5.2.1 Derivation of the step detector
5.2.2 Application of the step detector
5.3 The detection of changepoints using the general linear model
5.3.1 The general piecewise linear model
5.3.2 Simple step detector in generalized matrix form
5.3.3 Changepoint detection in AR models
5.3.4 Application of AR changepoint detector
5.4 Recursive Bayesian estimation
5.4.1 Update of position
5.4.2 Update given more data
5.5 Detection of multiple changepoints
5.6 Implementation details
5.6.1 Sampling changepoint space
5.6.2 Sampling linear parameter space
5.6.3 Sampling noise parameter space
5.7 Multiple changepoint results
5.7.1 Synthetic step data
5.7.2 Well log data
5.8 Concluding remarks

6 Restoration of Missing Samples in Digital Audio Signals
6.1 Introduction
6.2 Model formulation
6.2.1 The likelihood and the excitation energy
6.2.2 Maximum likelihood
6.3 The EM algorithm
6.3.1 Expectation
6.3.2 Maximization
6.4 Gibbs sampling
6.4.1 Description
6.4.2 Derivation of conditional densities
6.4.3 Conditional density for the missing data
6.4.4 Conditional density for the autoregressive parameters
6.4.5 Conditional density for the standard deviation
6.5 Implementation issues
6.5.1 Estimating AR parameters
6.5.2 Implementing the ML algorithm
6.5.3 Implementing the EM algorithm
6.5.4 Implementation of Gibbs sampler
6.6 Relationship between the three restoration methods
6.6.1 ML vs Gibbs
6.6.2 Gibbs vs EM
6.6.3 EM vs ML
6.7 Simulations
6.7.1 Autoregressive model with poles near unit circle
6.7.2 Autoregressive model with poles near origin
6.7.3 Sine wave
6.7.4 Evolution of sample interpolants
6.7.5 Hairy sine wave
6.7.6 Real data: Tuba
6.7.7 Real data: Sinéad O'Connor
6.8 Discussion
6.8.1 The temperature of an interpolant
6.8.2 Data augmentation
6.9 Concluding remarks
6.9.1 Typical interpolants
6.9.2 Computation
6.9.3 Modelling issues

7 Integration in Bayesian Data Analysis
7.1 Polynomial data
7.1.1 Polynomial data
7.1.2 Sampling the joint density
7.1.3 Approximate evidence
7.1.4 Approximate marginal densities
7.1.5 Conclusion
7.2 Decay problem
7.2.1 The Lanczos problem
7.2.2 Biomedical data
7.2.3 Concluding remarks
7.3 General model selection
7.3.1 Model selection in an impulsive noise environment
7.3.2 Model selection in a Gaussian noise environment
7.4 Summary

8 Conclusion
8.1 A review of the work
8.2 Further work

A The General Linear Model
A.1 Integrating out model amplitudes
A.1.1 Least squares
A.1.2 Orthogonalization
A.2 Integrating out the standard deviation
A.3 Marginal density for a linear coefficient
A.4 Marginal density for standard deviation
A.5 Conditional density for a linear coefficient
A.6 Conditional density for standard deviation

B Sampling from a Multivariate Gaussian Density

C Hybrid Monte Carlo Derivations
C.1 Full Gaussian likelihood
C.2 Student-t distribution
C.3 Remark

D EM Algorithm Derivations
D.1 Expectation
D.2 Maximization

E Issues in Sampling Based Approaches to Integration
E.1 Marginalizing using the conditional density
E.2 Approximating the conditional density
E.3 Gibbs sampling from the joint density
E.4 Reverse importance sampling

F Detailed Balance
F.1 Detailed balance in the Gibbs sampler
F.2 Detailed balance in the Metropolis-Hastings algorithm
F.3 Detailed balance in the Hybrid Monte Carlo algorithm
F.4 Remarks

References
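
The contents above list the Metropolis-Hastings algorithm among the Markov chain Monte Carlo methods of Chapter 4. As a rough illustration of what such a sampler looks like in practice, here is a minimal random-walk Metropolis-Hastings sketch in Python. It is not code from the book: the NumPy implementation, the standard-normal target, the step size, and the function name metropolis_hastings are illustrative assumptions.

    import numpy as np

    def metropolis_hastings(log_target, x0, n_samples, step=0.5, seed=0):
        """Random-walk Metropolis-Hastings sampler for a one-dimensional target density."""
        rng = np.random.default_rng(seed)
        samples = np.empty(n_samples)
        x = x0
        log_p = log_target(x)
        for i in range(n_samples):
            # Symmetric Gaussian proposal centred on the current state.
            x_prop = x + step * rng.standard_normal()
            log_p_prop = log_target(x_prop)
            # Accept with probability min(1, target(x_prop) / target(x)).
            if np.log(rng.uniform()) < log_p_prop - log_p:
                x, log_p = x_prop, log_p_prop
            samples[i] = x
        return samples

    # Illustrative use: draw from a standard normal target density.
    draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=5000)
    print(draws.mean(), draws.std())

The sample mean and standard deviation should come out close to 0 and 1 for this toy target; tuning the proposal step size, assessing convergence, and choosing the proposal density are exactly the issues treated in Sections 4.5 and 4.8 of the book.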