Module 7: Reinforcement Learning


Topic 5: RL Ethics

Ethics of acting in the real-world

One of the most useful aspects of RL is that it can learn from interaction with the real world, in real time.  However, one of the downsides is that it can require many attempts before a decent solution is found.  Even after a solution is found, if the world is non-stationary (as the real world is), the agent must continue to explore occasionally or it may drift away from the optimal policy.  The question is:  do you want your car or your airplane taking exploratory actions?  Most likely not!
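To make the exploration issue concrete, here is a minimal sketch of epsilon-greedy action selection on a simple multi-armed bandit, using a constant step size (the standard trick for tracking non-stationary reward means). The function name, arm means, and parameter values are all illustrative, not from any particular system.

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, alpha=0.1, seed=0):
    """Epsilon-greedy on a k-armed bandit with a constant step size.

    A constant step size alpha (rather than a 1/n sample average) keeps the
    value estimates responsive if the true means drift over time, which is
    why continued exploration matters in non-stationary problems.
    """
    rng = random.Random(seed)
    q = [0.0] * len(true_means)  # value estimate for each arm
    for _ in range(steps):
        if rng.random() < epsilon:                 # explore occasionally
            a = rng.randrange(len(q))
        else:                                      # otherwise act greedily
            a = max(range(len(q)), key=q.__getitem__)
        reward = rng.gauss(true_means[a], 0.1)     # noisy reward sample
        q[a] += alpha * (reward - q[a])            # constant-step-size update
    return q

# After enough steps, the estimate for the best arm tracks its true mean.
estimates = epsilon_greedy_bandit([0.2, 0.8, 0.5])
```

Note that the agent keeps taking random actions a fraction epsilon of the time forever; for a bandit simulation that is harmless, but for a car or an airplane those occasional random actions are exactly the safety problem described above.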

There are people working on safety-aware AI/ML.  Such research focuses on mathematically constraining methods to safe regions of the state space, something that is critical for many real-world systems.  For this topic, I’m going down a slightly different path: we will explore the National Institute of Standards and Technology (NIST)’s Risk Management Framework for AI.

We will start by exploring the NIST Trustworthy & Responsible AI Resource Center.  Please click on that URL and explore the resources, including downloading the RMF itself and reading it.

Next, I would like you to watch one of the talks about the RMF.  Because there is a live talk coming up just a week after this module goes live, I’m going to extend the due date so that you may attend the live talk or watch one of the recorded talks below.

Since the live talk is a week later than I would normally make this due, the online grading declaration will be due right after that talk.  If you watch one of the recorded talks instead, you may simply turn in your grading declaration earlier.