9 Less interpretable methods

Neural networks and ensemble methods such as bagging, random forests, and boosting can greatly increase predictive accuracy, but at the cost of interpretability.

Joshua Loftus
10-03-2021

Materials

Link       Type      Description
html, pdf  Slides    Tree-based methods
Rmd        Notebook  Basics of tree algorithms

To be updated

Trees and forests
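
As a rough illustration of the accuracy/interpretability trade-off described above, the sketch below (not part of the course materials; it assumes the rpart and randomForest packages are installed) fits a single regression tree and a random forest to a built-in dataset and compares their held-out error. The single tree can be printed and read as a handful of rules; the forest averages 500 trees and cannot.

```r
library(rpart)          # single decision trees
library(randomForest)   # ensembles of trees (bagging + random feature subsets)

set.seed(1)
n <- nrow(mtcars)
train <- sample(n, round(0.7 * n))

# A single, easily interpretable tree
tree_fit <- rpart(mpg ~ ., data = mtcars[train, ])

# A random forest: many trees averaged together, much harder to inspect
rf_fit <- randomForest(mpg ~ ., data = mtcars[train, ], ntree = 500)

# Compare held-out prediction error (RMSE)
rmse <- function(pred, obs) sqrt(mean((pred - obs)^2))
rmse(predict(tree_fit, mtcars[-train, ]), mtcars$mpg[-train])
rmse(predict(rf_fit,   mtcars[-train, ]), mtcars$mpg[-train])
```

Typically the forest's held-out RMSE is lower: better predictions from a model that is much harder to read off as a set of rules.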

Compositional nonlinearity
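
"Compositional nonlinearity" refers to the way deep networks build flexible functions by composing simple nonlinear ones. A minimal sketch in base R (an illustration, not taken from the course notebooks) of a two-layer network as the composition f(x) = W2 relu(W1 x + b1) + b2:

```r
relu <- function(z) pmax(z, 0)          # elementwise nonlinearity

set.seed(1)
x  <- matrix(rnorm(3), ncol = 1)        # a single 3-dimensional input
W1 <- matrix(rnorm(4 * 3), nrow = 4)    # first-layer weights (4 hidden units)
b1 <- rnorm(4)
W2 <- matrix(rnorm(1 * 4), nrow = 1)    # second-layer weights (1 output)
b2 <- rnorm(1)

hidden <- relu(W1 %*% x + b1)           # nonlinear transformation of the input
output <- W2 %*% hidden + b2            # linear readout of the hidden features
output
```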

(not active yet) Slides, notebooks, exercises

Slides for (tree) ensembles (PDF)

Slides for deep learning (PDF)

Notebook for tree splitting

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Loftus (2021, Oct. 3). machine learning 4 data science: 9 Less interpretable methods. Retrieved from http://ml4ds.com/weeks/09-trees/

BibTeX citation

@misc{loftus20219,
  author = {Loftus, Joshua},
  title = {machine learning 4 data science: 9 Less interpretable methods},
  url = {http://ml4ds.com/weeks/09-trees/},
  year = {2021}
}