Virtual Hearing - Equitable Algorithms: How Human-Centered AI Can Address Systemic Racism and...

3,876 views · 5/7/2021, 5:35 PM

Video Description

Connect with the House Financial Services Committee
Get the latest news: https://financialservices.house.gov/
Follow us on Facebook: https://www.facebook.com/HouseFinancialCmte
Follow us on Twitter: https://twitter.com/FSCDems

On Friday, May 7, 2021, at 12:00 p.m. (ET), Task Force on Artificial Intelligence Chairman Foster (IL-11) and Ranking Member Gonzalez (OH-16) will host a virtual hearing entitled, “Equitable Algorithms: How Human-Centered AI Can Address Systemic Racism and Racial Justice.”

Witnesses for this one-panel hearing will be:
• Stephen Hayes, Partner, Relman Colfax PLLC
• Melissa Koide, CEO, FinRegLab
• Lisa Rice, President and CEO, National Fair Housing Alliance
• Kareem Saleh, Founder, FairPlay AI
• Dave Girouard, Founder & CEO, Upstart

Overview

The expanded use of Artificial Intelligence (AI), Machine Learning (ML), and related emerging technologies in the financial services and housing sectors in recent years has raised concerns about racial biases embedded in these technologies. Although AI lending models have the potential to expand opportunities and reduce discrimination, companies using AI and ML algorithms and datasets increasingly have to consider and test for underlying historical bias, lest the automation of financial and housing decisions produce the unintended consequence of exacerbating systemic racism. Prior hearings held by this Task Force in the 116th Congress identified algorithmic biases as considerable threats requiring detailed policy and technical solutions. This hearing will discuss regulatory solutions and best practices for AI/ML lending models that protect against bias while fostering responsible innovation.
Artificial Intelligence and Machine Learning in Financial Services and Housing

Financial institutions and housing companies have long used algorithms (pre-coded sets of instructions and calculations executed automatically) to enable computers to make decisions, notably in the lending and investment management industries. Faster computing power, cheaper data storage, and internet-based products have increased the prevalence of algorithms across all sectors of the economy, including financial services and housing. ML, a subfield of AI in which algorithms automatically improve their performance through experience with little or no human input, has also grown in usage and sophistication. In particular, financial institutions have used ML models and other AI technologies to: (1) flag unusual transactions for fraud detection and financial crime monitoring; (2) personalize consumer services; (3) make credit decisions; (4) inform risk management forecasting and auditing; and (5) identify potential cybersecurity threats.

Taken together, AI and ML models can potentially improve efficiency and performance and reduce costs for financial institutions, but they can also introduce risks. In particular, the use of AI can be problematic due to a lack of explainability: it may be difficult to fully understand or properly explain why a program made a certain decision. ML models can also exhibit training data bias, in which a model inherits biases from the limited or incorrect dataset on which it was developed.

Relevant Laws and Recent Rulemaking on AI and ML

As AI and ML become more prevalent, regulators have struggled to adapt their oversight processes, especially since many existing laws did not contemplate the use of these emerging technologies when they were enacted.
Recently, the five members of the Federal Financial Institutions Examination Council (the Federal Reserve, Federal Deposit Insurance Corporation, National Credit Union Administration, Office of the Comptroller of the Currency, and Consumer Financial Protection Bureau) issued a request for information to financial institutions and stakeholders on the use of AI, including ML, in financial services, and on how laws and regulations related to housing, credit, and consumer lending are implicated. Additionally, the Federal Trade Commission recently provided guidance to companies on how to use artificial intelligence with an aim for “truth, fairness and equity.” Financial institutions using algorithms are increasingly being asked to explain to regulators why something happened or didn’t happen, how failure and success were defined, and how errors were corrected. These expectations translate into regular audits of algorithms for bias and discrimination by regulators or independent third parties. In the European Union, for instance, the General Data Protection Regulation (GDPR) requires organizations to be able to explain their algorithmic decisions. Some observers have argued that the lack of explainability and transparency of AI and ML models poses a significant challenge for...

Hearing page: https://financialservices.house.gov/calendar/eventsingle.aspx?EventID=407749
