
Optimal Bayesian Classification


¥19,470 (tax included)


Title: Optimal Bayesian Classification (最適ベイズ分類)
Author/Editor: Dalton, L.A. & Dougherty, E.R.
Publisher: SPIE
Published: March 2020
Binding: Softcover
Pages: 362
ISBN: 978-1-5106-3069-7
Shipping: Ordered from an overseas warehouse; ships within 3-5 weeks.

Description

The most basic problem of engineering is the design of optimal operators. Design takes different forms depending on the random process constituting the scientific model and the operator class of interest. For classification, the random process is a feature-label distribution, and a Bayes classifier minimizes classification error. Rarely do we know the feature-label distribution or have sufficient data to estimate it. To best use available knowledge and data, this book takes a Bayesian approach to modeling the feature-label distribution and designs an optimal classifier relative to a posterior distribution governing an uncertainty class of feature-label distributions. The origins of this approach lie in estimating classifier error when there are insufficient data to hold out test data, in which case an optimal error estimate can be obtained relative to the uncertainty class. A natural next step is to forgo classical ad hoc classifier design and find an optimal classifier relative to the posterior distribution over the uncertainty class, this being an optimal Bayesian classifier.
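
To make the central idea concrete, the following is a minimal sketch (not taken from the book) of an optimal Bayesian classifier over a finite uncertainty class. Each candidate feature-label distribution is a pair of one-dimensional Gaussian class-conditional densities, the posterior over candidates is a plain weight vector, and the classifier applies the Bayes decision rule to the posterior-weighted "effective" class-conditional densities. All names and numbers below are illustrative assumptions, not values from the book.

    import numpy as np
    from scipy.stats import norm

    # Finite uncertainty class: candidate (class-0 mean, class-1 mean) pairs,
    # each with unit variance. The posterior assigns a weight to each candidate.
    candidates = [(-1.0, 1.0), (-0.5, 1.5), (-1.5, 0.5)]  # illustrative values
    posterior = np.array([0.5, 0.3, 0.2])                 # weights sum to 1
    prior_c0 = 0.5                                        # class-0 prior probability

    def effective_density(x, label):
        # Effective class-conditional density: posterior-weighted mixture
        # of the candidate class-conditional densities.
        return sum(w * norm.pdf(x, loc=means[label], scale=1.0)
                   for w, means in zip(posterior, candidates))

    def obc_label(x):
        # OBC decision: Bayes rule applied to the effective densities.
        score0 = prior_c0 * effective_density(x, 0)
        score1 = (1.0 - prior_c0) * effective_density(x, 1)
        return 0 if score0 >= score1 else 1

    for x in (-2.0, 0.0, 0.4, 2.0):
        print(f"x = {x:+.1f} -> class {obc_label(x)}")

The structure, not the numbers, is the point of the sketch: the decision is made with effective densities averaged over the whole uncertainty class rather than with any single estimated feature-label distribution.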

Contents:

1 Classification and Error Estimation
1.1 Classifiers
1.2 Constrained Classifiers
1.3 Error Estimation
1.4 Random Versus Separate Sampling
1.5 Epistemology and Validity
1.5.1 RMS bounds
1.5.2 Error RMS in the Gaussian model

2 Optimal Bayesian Error Estimation
2.1 The Bayesian MMSE Error Estimator
2.2 Evaluation of the Bayesian MMSE Error Estimator
2.3 Performance Evaluation at a Fixed Point
2.4 Discrete Model
2.4.1 Representation of the Bayesian MMSE error estimator
2.4.2 Performance and robustness in the discrete model
2.5 Gaussian Model
2.5.1 Independent covariance model
2.5.2 Homoscedastic covariance model
2.5.3 Effective class-conditional densities
2.5.4 Bayesian MMSE error estimator for linear classification
2.6 Performance in the Gaussian Model with LDA
2.6.1 Fixed circular Gaussian distributions
2.6.2 Robustness to falsely assuming identity covariances
2.6.3 Robustness to falsely assuming Gaussianity
2.6.4 Average performance under proper priors
2.7 Consistency of Bayesian Error Estimation
2.7.1 Convergence of posteriors
2.7.2 Sufficient conditions for consistency
2.7.3 Discrete and Gaussian models
2.8 Calibration
2.8.1 MMSE calibration function
2.8.2 Performance with LDA
2.9 Optimal Bayesian ROC-based Analysis
2.9.1 Bayesian MMSE FPR and TPR estimation
2.9.2 Bayesian MMSE ROC and AUC estimation
2.9.3 Performance study

3 Sample-Conditioned MSE of Error Estimation
3.1 Conditional MSE of Error Estimators
3.2 Evaluation of the Conditional MSE
3.3 Discrete Model
3.4 Gaussian Model
3.4.1 Effective joint class-conditional densities
3.4.2 Sample-conditioned MSE for linear classification
3.4.3 Closed-form expressions for functions I and R
3.5 Average Performance in the Gaussian Model
3.6 Convergence of the Sample-Conditioned MSE
3.7 A Performance Bound for the Discrete Model
3.8 Censored Sampling
3.8.1 Gaussian model
3.9 Asymptotic Approximation of the RMS
3.9.1 Bayesian-Kolmogorov asymptotic conditions
3.9.2 Conditional expectation
3.9.3 Unconditional expectation
3.9.4 Conditional second moments
3.9.5 Unconditional second moments
3.9.6 Unconditional MSE

4 Optimal Bayesian Classification
4.1 Optimal Operator Design Under Uncertainty
4.2 Optimal Bayesian Classifier
4.3 Discrete Model
4.4 Gaussian Model
4.4.1 Both covariances known
4.4.2 Both covariances diagonal
4.4.3 Both covariances scaled identity or general
4.4.4 Mixed covariance models
4.4.5 Average performance in the Gaussian model
4.5 Transformations of the Feature Space
4.6 Convergence of the Optimal Bayesian Classifier
4.7 Robustness in the Gaussian Model
4.7.1 Falsely assuming homoscedastic covariances
4.7.2 Falsely assuming the variance of the features
4.7.3 Falsely assuming the mean of a class
4.7.4 Falsely assuming Gaussianity under Johnson distributions
4.8 Intrinsically Bayesian Robust Classifiers
4.9 Missing Values
4.9.1 Computation for application
4.10 Optimal Sampling
4.10.1 MOCU-based optimal experimental design
4.10.2 MOCU-based optimal sampling
4.11 OBC for Autoregressive Dependent Sampling
4.11.1 Prior and posterior distributions for VAR processes
4.11.2 OBC for VAR processes

5 Optimal Bayesian Risk-based Multi-class Classification
5.1 Bayes Decision Theory
5.2 Bayesian Risk Estimation
5.3 Optimal Bayesian Risk Classification
5.5 Efficient Computation
5.6 Evaluation of Posterior Mixed Moments: Discrete Model
5.7 Evaluation of Posterior Mixed Moments: Gaussian Models
5.7.1 Known covariance
5.7.2 Homoscedastic general covariance
5.7.3 Independent general covariance
5.8 Simulations

6 Optimal Bayesian Transfer Learning
6.1 Joint Prior Distribution
6.2 Posterior Distribution in the Target Domain
6.3 Optimal Bayesian Transfer Learning Classifier
6.3.1 OBC in the target domain
6.4 OBTLC with Negative Binomial Distribution

7 Construction of Prior Distributions
7.1 Prior Construction Using Data from Discarded Features
7.2 Prior Knowledge from Stochastic Differential Equations
7.2.1 Binary classification of Gaussian processes
7.2.2 SDE prior knowledge in the BCGP model
7.3 Maximal Knowledge-Driven Information Prior
7.3.1 Conditional probabilistic constraints
7.3.2 Dirichlet prior distribution
7.4 REMLP for a Normal-Wishart Prior
7.4.1 Pathway knowledge
7.4.2 REMLP optimization
7.4.3 Application of a normal-Wishart prior
7.4.4 Incorporating regulation types
7.4.5 A synthetic example