02/12/2020 - "Task Force on Artificial Intelligence: Equitable Algorithms..." - (EventID=110499)
Video Description
Connect with the House Financial Services Committee
Get the latest news: https://financialservices.house.gov/
Follow us on Facebook: https://www.facebook.com/FinancialDems/
Follow us on Twitter: https://twitter.com/FSCDems

Wednesday, February 12, 2020 (2:00 PM) -- Task Force on Artificial Intelligence: Equitable Algorithms: Examining Ways to Reduce AI Bias in Financial Services
________
This single-panel hearing will have the following witnesses:
• Dr. Philip Thomas, Assistant Professor and co-director of the Autonomous Learning Lab, College of Information and Computer Sciences, University of Massachusetts Amherst
• Dr. Makada Henry-Nickie, David M. Rubenstein Fellow, Governance Studies, Race, Prosperity, and Inclusion Initiative, Brookings Institution
• Dr. Michael Kearns, Professor and National Center Chair, Department of Computer and Information Science, University of Pennsylvania
• Ms. Bärí A. Williams, Attorney and Emerging Tech AI & Privacy Advisor
• Mr. Rayid Ghani, Distinguished Career Professor, Machine Learning Department and Heinz College of Information Systems and Public Policy, Carnegie Mellon University

Overview

There is no single, commonly agreed-upon definition of Artificial Intelligence ("AI"), either broadly or within financial services. The term AI is often conflated with the technological capabilities and desired outcomes AI systems pursue. As Figure 1 below illustrates, AI is a broad field involving systems that exhibit intelligent behavior across a range of cerebral tasks (e.g., alphabetically organizing all people with income under $50,000). One of the key technologies in AI is machine learning ("ML"). ML is best defined as a process that may rely on pre-set rules (also known as algorithms) to solve problems with limited or no human intervention (e.g., identifying the features that fraudulent transactions tend to share, or determining how to price an illiquid security).
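The memo's distinction between hand-written rules and rules derived from data can be made concrete with a minimal sketch. The following toy Python example, in the fraud-detection spirit of the memo, learns a flagging threshold from labeled historical transactions instead of having a programmer hard-code one; all amounts and labels are fabricated for illustration.

```python
# Toy illustration: rather than a hand-written rule ("flag amounts over X"),
# a learning procedure derives the rule from labeled examples.
# All transaction data below is invented for illustration.

def learn_threshold(amounts, labels):
    """Pick the amount threshold that best separates fraud (1) from legit (0)."""
    best_t, best_errors = None, float("inf")
    for t in sorted(set(amounts)):
        # Candidate rule: flag as fraud if amount >= t; count its mistakes.
        errors = sum((a >= t) != bool(y) for a, y in zip(amounts, labels))
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Historical transactions: amounts and whether each was fraudulent.
amounts = [20, 35, 50, 900, 1200, 1500]
labels  = [0,  0,  0,  1,   1,    1]

threshold = learn_threshold(amounts, labels)  # the "learned" rule
```

The point of the sketch is only that the decision boundary comes from the data: change the historical examples and the learned rule changes with them, with no programmer intervention.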
The aforementioned techniques can also find patterns in large amounts of data (e.g., concluding that consumers who search for payday loan providers once a month, rather than twice a month, are likely to be paid monthly). Many observers assert that both industry and regulators would benefit from setting standards and developing tools to ensure that AI technology is reliable, accurate, and fair. The National Institute of Standards and Technology ("NIST") has published a document entitled "U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools," and the Office of Management and Budget ("OMB") and the Office of Science and Technology Policy ("OSTP") have jointly released "Principles for the Stewardship of AI Applications." NIST's plan identifies nine focus areas for standards, including data and knowledge, performance testing and reporting methodology, risk management, and trustworthiness, and recommends tools to support the advance of reliable and trustworthy AI programs. OMB's and OSTP's guidance includes ten principles, focused on using pilot programs, deferring to independent standards organizations, and directing that new AI regulations should limit regulatory oversight.

Ethical and Big Data Concerns in AI

The concerns and risks of algorithmic decision-making and AI technologies are well documented. Generally, the complexity of the decision-making processes these technologies employ makes it difficult for human programmers to predict what a program will do and to explain why it did what it did. The complexity problem stems from opaque and ever-evolving algorithms, especially those dealing with large data sets. Regardless of how financial institutions underwrite a loan or extend credit, relying on an unexplainable algorithm raises significant concerns in light of the need to comply with existing laws and regulations.
Further, human programmers may unknowingly write historical biases into their programs. The potential for programmers to perpetuate historical bias unintentionally may be exacerbated if the companies and teams developing the programs lack racial and ethnic diversity, as well as diversity of experiences and points of view. In addition to the technical concerns of algorithmic decision-making and AI technologies, it is equally important to focus on the data sets being used by AI. These data sets can contain errors, be incomplete, and/or contain data that reflect societal or historical inequities. Substandard or "dirty" data is problematic for AI programs because all subsequent inferences are susceptible to inaccurate, incomplete, or non-representative information. A financial services illustration of "dirty data" is the under-representation of protected groups (e.g., sex, race, color, national origin, etc.) in historical loan data. Because members of these groups have been less likely to be approved for loans,...

Hearing page: https://financialservices.house.gov/calendar/eventsingle.aspx?EventID=406120
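The dirty-data mechanism described above can be sketched in a few lines. In this toy Python example, both groups of applicants are assumed to be equally creditworthy, but the historical record under-approved one group; a naive scorer that simply learns each group's historical approval rate then inherits and reproduces the disparity. All group names and numbers are invented for illustration.

```python
# Toy illustration of "dirty data": historical loan decisions under-approved
# group "B", so a model fit to those decisions reproduces the disparity even
# if the groups' true creditworthiness is identical. All data is invented.

def group_approval_rate(records, group):
    """Fraction of historical applicants in `group` that were approved."""
    decisions = [approved for g, approved in records if g == group]
    return sum(decisions) / len(decisions)

# Historical records: (group, was_approved). By assumption the groups are
# equally creditworthy, yet group "B" was approved far less often.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7

# A naive scorer that rates applicants by their group's historical approval
# rate simply inherits the bias baked into the training data.
rate_a = group_approval_rate(history, "A")
rate_b = group_approval_rate(history, "B")
```

Nothing in the fitting step is "wrong" in a narrow statistical sense; the skew enters entirely through the non-representative historical record, which is the memo's point about all subsequent inferences being tainted.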