Bridging Domains: Infusing Financial, Privacy, and Software Best Practices into ML Risk Management
Responsible AI
"Aviation laws were written in blood. Let's not reproduce that methodology with AI" – Siméon Campos
In 2021, Bloomberg's story "Zillow's Algorithm-Fueled Buying Spree Doomed Its Home-Flipping Experiment" made headlines. It chronicled Zillow's daring entry into the iBuying business, betting on its ML-powered Zestimate algorithm to turn home flipping into profit. Despite a carefully structured start that relied on local real estate experts to validate the algorithm's pricing, Zillow shifted to a fully algorithmic approach in the quest for faster offers. The move did not pay off.
The Zestimate struggled to adapt to the rapid price inflation of the 2021 real estate market, prompting Zillow to sweeten its offers to keep them competitive. The company embarked on an ambitious buying spree, reportedly acquiring as many as 10,000 homes per quarter. Its human workforce, however, could not keep up with the scale and speed of these acquisitions, a challenge exacerbated by the ongoing pandemic. Facing mounting difficulties, including a backlog of unsold properties, Zillow halted its offers in October 2021. In the months that followed, homes were resold at a loss, leading to an inventory write-down exceeding $500 million.
On top of the monetary loss from the failed venture, Zillow announced that it would lay off about 2,000 employees – roughly a quarter of its workforce.
We open with this unfortunate incident because the collapse of Zillow's iBuying venture stems from a complex web of causes. Although it cannot be separated from the pandemic that disrupted the housing market in 2020, it offers rich material for analysis. In this article, we will use it as a running example to show how the governance and risk management principles discussed in this series could help avert such debacles in the future.
Before you read further
Before proceeding, note that this is the third article in our AI Risk Management series. We recommend reading the first two articles for full context.
• The first article introduces the Cultural Competencies for Machine Learning Risk Management, exploring the human dimensions required to navigate this intricate domain.
Cultural Competencies for Machine Learning Risk Management
• The second article pivots to another vital element of ML systems: Organizational Processes. Embark on this journey with us for a robust grasp of managing the intertwined realms of AI and risk management.
Organizational Processes for Machine Learning Risk Management
Going beyond Model Risk Management
In the previous article, we discussed in detail how Machine Learning Risk Management (MRM) constitutes a comprehensive framework and set of procedures for identifying, assessing, mitigating, and monitoring the risks that arise across the development, deployment, and operation of machine learning systems. In this part, we will explore strategies and practices beyond traditional Model Risk Management that are especially beneficial for ML safety. We will commence by discussing AI incident response.
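Before we do, here is a minimal, illustrative sketch of what the identify-assess-mitigate-monitor loop might look like as a simple risk register. The class names, fields, and the likelihood-times-impact scoring scale are our own assumptions for illustration, not a prescribed MRM implementation; real programs use richer, domain-specific measures and governance workflows.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class MLRisk:
    """One entry in a simple ML risk register (illustrative only)."""
    name: str            # e.g., "Valuation model drifts under rapid price inflation"
    likelihood: int      # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int          # 1 (negligible) .. 5 (severe)   -- assumed scale
    mitigation: str      # planned control, e.g., human review of offers
    owner: str           # accountable person or team
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        # Classic likelihood x impact scoring.
        return self.likelihood * self.impact

def triage(register: List[MLRisk], threshold: int = 15) -> List[MLRisk]:
    """Return risks that exceed the escalation threshold, highest score first."""
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)

if __name__ == "__main__":
    register = [
        MLRisk("Model drift during rapid market inflation", 4, 5,
               "Monitor error against recent sales; pause offers on breach",
               "Valuation team"),
        MLRisk("Purchase volume exceeds renovation capacity", 3, 4,
               "Cap acquisitions to downstream operational throughput",
               "Operations"),
    ]
    for risk in triage(register):
        print(f"[score {risk.score}] {risk.name} -> {risk.mitigation}")
```

Even a toy register like this makes two points that the Zillow story illustrates: risks need named owners and concrete mitigations, and the register only helps if it is reviewed and acted on as conditions change.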