Module 5: Machine Learning Basics

Topic 3: Evaluating ML

Performance Metrics

How can you train an ML model to be “good”? What does “good” even mean? This turns out to be a surprisingly complicated question, especially for real-world models! In this module, we will jump into a discussion of performance metrics: ways to objectively evaluate your model so that you can say whether it is “good” or not.

  • Copy of the slides
  • Please read more about the MANY different ways to evaluate models at this webpage (the one referenced in the video) 
  • AI2ES recently hosted an excellent discussion of ML verification. If you go to this link, the second talk (30 minutes) focuses on verification of models. The slides and full links to all of the talks are here.
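
To make the idea of a performance metric concrete, here is a minimal sketch of four common classification metrics (accuracy, precision, recall, and F1) computed by hand for a binary classifier. The labels below are made-up example data, not from any real model:

```python
# Hypothetical example labels (1 = event observed/predicted, 0 = not).
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]

# Tally the four cells of the confusion matrix.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # hits
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # correct rejections
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false alarms
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # misses

accuracy = (tp + tn) / len(y_true)          # fraction correct overall
precision = tp / (tp + fp)                  # of predicted events, how many were real?
recall = tp / (tp + fn)                     # of real events, how many were caught?
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} f1={f1:.2f}")
# → accuracy=0.75 precision=0.75 recall=0.75 f1=0.75
```

Note that accuracy alone can be misleading for rare events (a model that always predicts “no tornado” is highly accurate but useless), which is exactly why so many different metrics exist.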

Learning curves

Once you have a training/validation/testing split and a performance metric, you can plot your performance as your model trains so that you can see whether things are working (or, sometimes, not working!). Watch the video to learn more!
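
As a minimal sketch of what a learning curve records, the toy example below fits a one-parameter linear model with gradient descent and logs training and validation error every epoch. The data, learning rate, and epoch count are all made-up illustration values:

```python
import random

# Synthetic data drawn from y = 2x plus a little noise (illustration only).
random.seed(0)
train = [(0.1 * i, 2.0 * (0.1 * i) + random.gauss(0, 0.1)) for i in range(20)]
val = [(0.05 + 0.1 * i, 2.0 * (0.05 + 0.1 * i) + random.gauss(0, 0.1)) for i in range(10)]

def mse(w, data):
    """Mean squared error of the model y = w*x on a dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, lr = 0.0, 0.05
history = []  # (epoch, train_mse, val_mse) -- this IS the learning curve
for epoch in range(30):
    # Gradient of training MSE with respect to w, then one descent step.
    grad = sum(2 * (w * x - y) * x for x, y in train) / len(train)
    w -= lr * grad
    history.append((epoch, mse(w, train), mse(w, val)))

# In a healthy run, both curves fall together as training proceeds;
# validation error climbing while training error keeps falling would
# signal overfitting.
assert history[-1][1] < history[0][1]
print(f"final w={w:.2f}, train MSE={history[-1][1]:.4f}, val MSE={history[-1][2]:.4f}")
```

In practice you would plot `history` (e.g. with matplotlib) rather than print it, with epoch on the x-axis and metric value on the y-axis for both curves.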