![Generalized additive modelling with implicit variable selection by likelihood based boosting](https://cdn.podcastcms.de/images/shows/315/2444607/s/623846263/generalized-additive-modelling-with-implicit-variable-selection-by-likelihood-based-boosting.png)
Generalized additive modelling with implicit variable selection by likelihood based boosting
Description
21 years ago
The use of generalized additive models in statistical data analysis suffers from the restriction to a small number of explanatory variables and from the problem of selecting smoothing parameters. Generalized additive model boosting circumvents these problems by means of stagewise fitting of weak learners. A fitting procedure is derived which works for all simple exponential family distributions, including binomial, Poisson and normal response variables. The procedure combines the selection of variables with the determination of the appropriate amount of smoothing. As weak learners, penalized regression splines and the newly introduced penalized stumps are considered. Estimates of standard deviations and stopping criteria, which are notorious problems in iterative procedures, are based on an approximate hat matrix. The method is shown to outperform common procedures for the fitting of generalized additive models. In particular, in high-dimensional settings it is the only method that works properly.
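To make the idea of stagewise fitting with implicit variable selection concrete, here is a minimal sketch in Python of componentwise likelihood-based boosting for a binomial response. It is an illustration under simplifying assumptions, not the authors' procedure: the weak learner is a "penalized stump" reduced to two indicator basis functions split at each covariate's median (the paper's penalized stumps and penalized regression splines are richer), the penalty `lam`, the step count `n_steps`, and the function name are hypothetical tuning choices, and the approximate hat matrix used in the paper for standard deviations and stopping is omitted.

```python
import numpy as np

def gam_boost_binomial(X, y, n_steps=100, lam=50.0):
    """Componentwise likelihood-based boosting for a binomial additive model.

    Weak learner per covariate: a simplified penalized stump (two indicator
    basis functions split at the covariate's median), updated by a single
    penalized Fisher-scoring step.  In each boosting iteration only the
    covariate that most increases the log-likelihood is updated, which
    yields the implicit variable selection described in the abstract.
    """
    n, p = X.shape
    splits = np.median(X, axis=0)
    bases = [np.column_stack([(X[:, j] <= splits[j]).astype(float),
                              (X[:, j] >  splits[j]).astype(float)])
             for j in range(p)]
    pen = lam * np.eye(2)                      # ridge penalty on the stump levels

    eta = np.zeros(n)                          # additive predictor
    coefs = [np.zeros(2) for _ in range(p)]    # accumulated stump coefficients
    selected = []

    for _ in range(n_steps):
        mu = 1.0 / (1.0 + np.exp(-eta))        # current fitted probabilities
        w = mu * (1.0 - mu)                    # binomial Fisher weights
        best = None
        for j in range(p):
            B = bases[j]
            # one penalized Fisher-scoring step for component j, with eta as offset
            beta = np.linalg.solve(B.T @ (w[:, None] * B) + pen, B.T @ (y - mu))
            mu_j = 1.0 / (1.0 + np.exp(-(eta + B @ beta)))
            loglik = np.sum(y * np.log(mu_j + 1e-12) + (1 - y) * np.log(1 - mu_j + 1e-12))
            if best is None or loglik > best[0]:
                best = (loglik, j, beta)
        _, j, beta = best
        eta = eta + bases[j] @ beta            # update only the winning component
        coefs[j] = coefs[j] + beta
        selected.append(j)

    return coefs, selected

# Toy usage: only the first two covariates carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
p_true = 1.0 / (1.0 + np.exp(-(np.sin(X[:, 0]) + 0.5 * X[:, 1])))
y = rng.binomial(1, p_true)
coefs, selected = gam_boost_binomial(X, y, n_steps=50)
print(sorted(set(selected)))   # indices of the covariates the procedure actually updated
```

The design choice the sketch tries to convey is that variable selection and smoothing are not separate tuning steps: because only one component is refitted per iteration and each refit is heavily penalized, the amount of smoothing is controlled jointly by the penalty and the number of boosting steps, and covariates that never win a step are implicitly excluded.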