Introduction
The new IFRS 17 accounting standard has upended insurance financial reporting, bringing unprecedented data challenges and opportunities. Effective from 2023, IFRS 17 requires insurers to capture and report far more granular data across actuarial, finance, claims, and legacy systems. Many insurers underestimated the effort – implementing IFRS 17 often meant heavy investments in data integration and IT systems to meet tight deadlines. This isn’t just a finance exercise; it’s an enterprise-wide data transformation. Insurance CIOs, CTOs, CDOs and other technology leaders must tackle massive data volumes, siloed legacy platforms, and rigorous compliance demands – all while accelerating insight delivery and controlling costs.
The Real-World Data Challenges of IFRS 17
IFRS 17’s complexity has surfaced several real-world data challenges for insurers. Understanding these pain points is the first step toward crafting a solution:
- Integrating Siloed Legacy Systems: Insurers often run multiple policy admin, claims, actuarial (e.g. Prophet), and finance systems that were never designed to work together. IFRS 17 mandates consistent, consolidated reporting, so ensuring data accuracy and integration from multiple sources is a significant challenge. Different systems output data in various formats (Excel files, text extracts, database tables), making end-to-end integration a tedious task. As one industry leader put it, “the biggest challenge has been putting the various systems together and making sure the data flows appropriately through [the architecture]”. Without an integration strategy, firms face manual data wrangling and reconciliation headaches.
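To make the integration problem concrete, here is a minimal Python sketch of normalizing extracts from three source systems (a CSV policy-admin export, a pipe-delimited actuarial extract, and a finance database table) into one common schema. All system names, field names, and column mappings here are hypothetical, purely for illustration:

```python
import csv
import io
import sqlite3

# Hypothetical common schema; a real IFRS 17 data model has far more fields.
COMMON_FIELDS = ["policy_id", "cohort", "cash_flow"]

def from_policy_admin_csv(text):
    # Policy admin system exports comma-separated files with its own headers.
    return [{"policy_id": r["POL_NO"], "cohort": r["COHORT"],
             "cash_flow": float(r["CF"])}
            for r in csv.DictReader(io.StringIO(text))]

def from_actuarial_extract(text):
    # Actuarial model emits pipe-delimited text (header line first).
    rows = []
    for line in text.strip().splitlines()[1:]:
        pol, cohort, cf = line.split("|")
        rows.append({"policy_id": pol, "cohort": cohort, "cash_flow": float(cf)})
    return rows

def from_finance_db(conn):
    # Finance data already sits in a relational table.
    cur = conn.execute("SELECT policy_id, cohort, cash_flow FROM ledger")
    return [dict(zip(COMMON_FIELDS, row)) for row in cur]

# Toy data standing in for real extracts.
csv_text = "POL_NO,COHORT,CF\nP1,2023-Q1,100.0"
act_text = "POL|COHORT|CF\nP2|2023-Q1|250.5"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (policy_id TEXT, cohort TEXT, cash_flow REAL)")
conn.execute("INSERT INTO ledger VALUES ('P3', '2023-Q1', 75.25)")

unified = (from_policy_admin_csv(csv_text)
           + from_actuarial_extract(act_text)
           + from_finance_db(conn))
```

The point is less the parsing than the target: every source lands in one agreed schema, so downstream IFRS 17 calculations never need to know which system a record came from.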
- High Data Volume and Granularity: IFRS 17 dramatically increases the volume and granularity of data to be managed. Firms must handle detailed cash-flow and contract-level data for millions of policies – far more than under the previous IFRS 4 standard. Larger volumes of data at greater granularity drive tremendous complexity in data architecture. Many insurers resorted to thousands of spreadsheets and ad-hoc databases to bridge gaps, which is neither scalable nor sustainable. KPMG observes that sourcing, integrating, and cleansing high-quality data is still an unresolved hurdle for many insurers. The data intensity of IFRS 17 also strains infrastructure for storage and processing, especially during peak reporting cycles.
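The granularity shift is easiest to see in the rollup itself: contract-level records must aggregate to IFRS 17 groups of contracts, typically keyed by portfolio, annual cohort, and profitability bucket. A minimal sketch, with hypothetical field names and toy values:

```python
from collections import defaultdict

# Hypothetical contract-level records. In production this is millions of
# rows per reporting run, not three.
contracts = [
    {"portfolio": "term_life", "cohort": 2023, "bucket": "profitable",
     "fulfilment_cf": -120.0},
    {"portfolio": "term_life", "cohort": 2023, "bucket": "onerous",
     "fulfilment_cf": 45.0},
    {"portfolio": "term_life", "cohort": 2023, "bucket": "profitable",
     "fulfilment_cf": -80.0},
]

# Roll contract-level cash flows up to the IFRS 17 group-of-contracts level.
groups = defaultdict(float)
for c in contracts:
    key = (c["portfolio"], c["cohort"], c["bucket"])
    groups[key] += c["fulfilment_cf"]
```

The aggregation logic is trivial; the architectural challenge is that the contract-level inputs must be retained, versioned, and re-runnable at this grain, which is what drives the storage and processing strain described above.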
- Data Quality and Compliance Pressure: Regulatory compliance under IFRS 17 leaves no room for error. Financial results must be transparent, auditable, and trusted by regulators and investors. This puts a premium on data quality and governance. Insurers need to reconcile data across actuarial models, general ledgers, and reporting systems with absolute accuracy. As data platform company Atlan notes, IFRS 17 compliance “requires insurers to analyze data from actuarial systems, trading systems, claims, and accounting – and ensuring high-quality data across all these is essential”. Inaccurate or inconsistent data (e.g. misclassified contracts, missing fields) can distort profit calculations, leading to regulatory scrutiny or restatements. Yet initial implementations saw significant manual workarounds and data cleansing efforts due to tight timelines. The challenge for IT leaders is to deliver clean, validated data by design, not by heroic manual fixes after the fact.
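"Validated by design" in practice means automated quality gates that run before results are posted: field-level checks for the errors named above (missing fields, misclassified contracts) plus a reconciliation of source totals against the ledger. A minimal sketch, with hypothetical field names and thresholds:

```python
def validate(records, ledger_total, tol=0.01):
    """Run illustrative IFRS 17 data-quality gates; return a list of issues."""
    issues = []
    for i, r in enumerate(records):
        # Completeness: every record must belong to a contract group.
        if not r.get("contract_group"):
            issues.append(f"row {i}: missing contract_group")
        # Classification: measurement model must be one of the three IFRS 17
        # approaches (GMM, PAA, VFA).
        if r.get("measurement_model") not in {"GMM", "PAA", "VFA"}:
            issues.append(f"row {i}: invalid measurement_model "
                          f"{r.get('measurement_model')!r}")
    # Reconciliation: source cash flows must tie back to the general ledger.
    total = sum(r.get("cash_flow", 0.0) for r in records)
    if abs(total - ledger_total) > tol:
        issues.append(f"reconciliation break: source {total} vs ledger {ledger_total}")
    return issues

# Toy run: one clean record, one with two defects; totals tie to the ledger.
records = [
    {"contract_group": "TL-2023-P", "measurement_model": "GMM", "cash_flow": 100.0},
    {"contract_group": "", "measurement_model": "XXX", "cash_flow": 50.0},
]
problems = validate(records, ledger_total=150.0)
```

Gates like these turn data quality from an after-the-fact cleansing exercise into a pass/fail control embedded in the pipeline, which is what auditors and regulators ultimately want to see evidenced.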
- Slow, Manual Reporting Cycles: Legacy processes have made IFRS 17 reporting painfully slow at many insurers. Prior to automation, it was common for monthly closes to take three or more weeks and quarterly reporting to take over a month. Such timelines are untenable under IFRS 17’s demand for timely disclosure. Spreadsheet-based workflows and manual reconciliations not only delay reporting but also introduce errors. Deloitte warns that IFRS 17’s added complexity and granularity put additional pressure on “fast close” processes, requiring new levels of efficiency. In short, insurers must modernize their data pipelines or risk missed deadlines and control issues.