A new sponsored episode of the HousingWire Daily podcast — hosted live from the 2024 HousingWire AI Summit — features Editor in Chief Sarah Wheeler in conversation with CoreLogic data experts Amy Gromowski and Anand Singh.
Gromowski, the company’s head of data science, and Singh, the vice president of GenAI property insights, discussed the importance of data in the development of artificial intelligence and offered advice for companies that wish to integrate AI into their business processes.
The conversation begins with the role that data plays in AI development and how companies can best package their datasets. Singh explains that modern AI tools require far more data than earlier systems did. But the size of the dataset is only half the battle, he adds: the data’s accuracy matters just as much, especially when companies use AI for key decision-making processes like mortgage underwriting.
Gromowski adds that datasets must include deep historical information to inform future AI-driven decisions. Data diversity, meaning data drawn from multiple sources, also plays a major role. Singh underscores the importance of accurate data for real estate decision-making, saying that CoreLogic uses its data to create sources of truth that clients can rely on to make sound decisions.
Wheeler follows up with a question about infusing accurate, proprietary data into the massive general datasets that models from companies like OpenAI are trained on. Singh explains that general-purpose AI models can only fall back on generic resources when asked detailed real estate questions, which is why grounding them in accurate, diverse industry data is vital.
“Your AI needs to be fueled by data, but your data needs to be unlocked by AI, so it goes both ways,” Singh said.
Following that point, the conversation shifts toward responsible use of AI. Gromowski segments responsible AI into three considerations:
- Data use rights. CoreLogic ensures that its data is used under the proper legal contracts and obligations.
- Ethics. The company guards against disparate impact in its AI models to ensure that AI-generated solutions are fair and equitable for consumers.
- Risk factors. CoreLogic identifies its ethical obligations and reputational risks before using or building AI solutions.
CoreLogic relies on an internal governance structure to guide its use of AI solutions in step with the broader regulatory landscape. Singh points out that the data the real estate industry generates carries the potential for bias, and he says CoreLogic’s internal AI governance team bears a heightened responsibility to evaluate data on the front end.
From there, Wheeler segues into AI cybersecurity. On the technology side, Singh shares that CoreLogic’s AI models are closed to the public and controlled by the company, which also monitors them for security breaches and malware. On the data side, the company screens its datasets for anything that could harm the public.
Next, the conversation shifts to how CoreLogic’s clients can use AI solutions in their business processes. Singh says the firm has used AI for decades to assist clients with risk assessment, valuation, image extraction and more. Today, AI can also support sales conversations, client communications, software engineering and other areas, while real estate agents can use it for day-to-day work such as creating listings and marketing artwork.
The conversation concludes with the diversity of CoreLogic’s data across multiple industries. Wheeler asks whether that diversity helps CoreLogic better inform its clients. Singh says that diverse datasets give CoreLogic’s AI tools a unique understanding of business processes across several areas of the industry, and of how those processes intersect to create value. That breadth helps ensure AI-generated solutions serve the best interests of all parties involved in a real estate transaction.