Insurance companies that do business in Europe are now working to comply with a new set of regulatory and risk management requirements known as Solvency II. The new rules are expected to take effect in late 2012, although the date is not yet final.
Insurance risk often depends on location, so those responsible for GIS and location intelligence will play a key role in developing and implementing recommendations. Given the dollars at stake, it is not surprising that many technology firms are now promoting point solutions for Solvency II. The truth, however, is that insurance companies may be better served by building up and enhancing their core GIS capabilities, especially in the areas of data quality, geocoding, data enrichment and predictive analytics. In this way, they will not only comply with Solvency II requirements in the most cost-efficient manner - they will actually improve their overall business operations.
Solving for the three pillars of Solvency II
In simplest terms, "solvency" involves an organization's ability to meet its long-term expenses. In the case of Solvency II, members of the European Union have developed standards to facilitate the development of a single market for insurance services across Europe. By setting minimum capital requirements, the EU wants to provide consumers with an adequate level of protection.
For large insurance companies, these capital requirements could total hundreds of millions of dollars or more. Therefore, the ability to accurately measure assets and calculate risks can have a significant impact on cash flow and investment capital.
If a company overestimates its capital requirements, it could be at a competitive disadvantage – with money tied up that could otherwise be used for investments or expansion. Many insurers fear that this could happen if they use the standard model provided by regulators. The EU directive, however, allows insurers to develop and certify their own internal model to calculate the solvency capital requirements. While adopting an internal modeling approach can offer a significant capital reduction, this can only be achieved against a backdrop of "accurate, complete and appropriate" data, as stated in the EU Solvency II Directive. In other words, the effectiveness of an internal model cannot be guaranteed without easy access to high-quality historical and predictive data.
Organizations familiar with Basel II from the banking industry will recognize the three pillars of Solvency II compliance:
Pillar 1: quantitative requirements
- Measure assets, liabilities and capital
- Calculate minimum capital requirements
- Understand risk dependencies and interactions
Pillar 2: overall governance
- Internal controls and risk management
- Enterprise-wide visibility to key information
- Consistent risk management
Pillar 3: disclosure
- Transparent market disclosure
- Frequent, forward-looking and relevant
- Providing consistent information on a timely basis
Data integration, data quality, geocoding, data enrichment and predictive analytics play an important role in each of these three pillars, especially for the quantitative requirements. Insurance companies that already have tools and technologies to support these disciplines may benefit by upgrading and enhancing their capabilities in order to meet challenges associated with Solvency II, including the need to:
- Implement a consistent approach to risk management across the organization
- Aggregate risk across lines of business and geographic areas
- Increase confidence in internal models to satisfy auditors and regulators
- Streamline and automate overall risk management
- Comply with all regulatory requirements
- Reduce solvency capital requirements
Meeting these demands requires a comprehensive, integrated approach to risk analysis; an automated approach to data management with clear ownership and high quality data; and a strategic, business-focused design that is easy to understand and manage. While each insurer will face unique challenges, there are five characteristics of an effective Solvency II model.
One: start with high quality data
While initial attention has focused on getting the capital calculations correct, this investment will not pay off unless these calculations are driven by accurate, complete and appropriate data. Poor data quality can impact the modeling process in a number of ways, such as calculation failures, punitive default values, increased manual intervention and delays in model updating. This can ultimately lead to an increase in the level of capital held, as the regulators place a capital charge on top of the firm's own assessment.
Unfortunately, many organizations are not satisfied with their data quality, citing incorrect information, missing or misfiled data, duplicate records and inconsistent standards that lead to significant costs, delays and an incomplete understanding of the truth. Considering the need to aggregate and account for market risk, operational risk, credit risk and insurance risk across geographies and multiple lines of business, it is easy to see why data quality is so important.
Actuaries and compliance groups responsible for doing the necessary calculations will need accurate data, which can be delivered through:
- Data auditing: to understand the quality of data it is necessary to profile and monitor data quality across the enterprise
- Data cleansing and validation: automated ways to standardize, normalize, parse and validate data such as addresses and names
- Data matching and consolidation: comparing and consolidating data records obtained from a variety of sources through de-duplication and data synchronization
- Data integration: most insurance companies maintain multiple databases, but the right tools make it easy to access, extract and analyze records and create a single view of customers and insured assets
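The matching and consolidation step described above can be illustrated with a minimal sketch in Python, using only the standard library. The normalization rules, abbreviation table and similarity threshold here are illustrative assumptions, not a production matching engine:

```python
from difflib import SequenceMatcher

def normalize_address(addr: str) -> str:
    """Standardize case, strip punctuation and expand common abbreviations."""
    cleaned = "".join(ch for ch in addr.lower() if ch.isalnum() or ch.isspace())
    # Illustrative abbreviation table - real systems use far larger dictionaries
    expansions = {"st": "street", "rd": "road", "ave": "avenue"}
    return " ".join(expansions.get(token, token) for token in cleaned.split())

def is_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Flag two records as duplicates when their normalized similarity exceeds the threshold."""
    ratio = SequenceMatcher(None, normalize_address(a), normalize_address(b)).ratio()
    return ratio >= threshold

# Consolidate a small record set: keep the first occurrence of each duplicate cluster
records = ["12 High St, London", "12 high street London", "221B Baker Street, London"]
unique = []
for rec in records:
    if not any(is_duplicate(rec, kept) for kept in unique):
        unique.append(rec)
```

In practice, string similarity alone is a weak matching key; production systems combine parsed address components, reference data and survivorship rules to decide which record wins.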
Two: geocode with confidence
When it comes to assessing insurance risk, location is everything. Geocoding turns addresses into geographic coordinates that can be measured, compared, accumulated and analyzed.
Consumer-oriented geocoding solutions can often be acquired at little or no cost, but organizations that use information to make business decisions need to be more concerned with the validity and accuracy of addresses. A business-strength application will offer the ability to cleanse, parse, standardize and validate addresses before determining location, which adds confidence to the process.
Even when source addresses are fully validated, the geocoding engine needs to ensure that the address is located at the right spot. Some geocoding tools provide latitude and longitude coordinates based on postcode or city centroids instead of individual addresses. That level of accuracy might be acceptable for risk accumulations based on CRESTA (Catastrophe Risk Evaluating and Standardizing Target Accumulations) zones or other administrative boundaries, but more precision is required when assessing flood risk or analyzing fire and terrorism accumulations. Other geocoding solutions return coordinates without adequate detail about the accuracy achieved. These "false positives" can create an unwarranted sense of confidence, increasing the risk of poor decisions. A first-class geocoding solution will report the match accuracy, positional accuracy and geoconfidence level - and route a record into exception processing when the potential for incorrect results exists.
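The exception-routing idea can be sketched as follows. The precision labels and their ranking are hypothetical; a real geocoding engine will have its own accuracy taxonomy and confidence scores:

```python
from dataclasses import dataclass

@dataclass
class GeocodeResult:
    latitude: float
    longitude: float
    precision: str  # e.g. "rooftop", "street", "postcode_centroid" (illustrative labels)

# Precision levels ranked from most to least accurate (assumed scale)
PRECISION_RANK = {"rooftop": 3, "street": 2, "postcode_centroid": 1}

def route_result(result: GeocodeResult, required: str = "street"):
    """Accept results at or above the required precision; otherwise send to an exception queue."""
    if PRECISION_RANK[result.precision] >= PRECISION_RANK[required]:
        return ("accepted", result)
    return ("exception_queue", result)

# A centroid-level match for a flood-risk assessment gets flagged for manual review
status, _ = route_result(GeocodeResult(51.5074, -0.1276, "postcode_centroid"))
```

The key design point is that the required precision is a parameter: a CRESTA-zone accumulation can accept centroid matches, while a flood assessment demands rooftop or street-level precision.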
Three: make the necessary connections through predictive analytics
Ultimately, the goal of any solution is to provide answers, not latitudes and longitudes. A best practice approach will combine geocoding with the ability to spatially enrich the data, perform analysis, make calculations and conduct predictive analytics.
If you can verify that an insured is not in a high-risk area such as a flood zone or hurricane path, your aggregate risk will be lower – as will your solvency capital requirements. Automated tools can perform point-in-polygon analysis and closest-site analysis, and can calculate distances among multiple points, so you can more accurately assess risk concentrations. Combined with mapping tools, the ability to visualize risk supports human validation and decision making within any exception handling process.
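As a rough illustration of the underlying mechanics, a point-in-polygon test (ray casting) and a great-circle distance calculation can be written in a few lines of standard-library Python. The flood-zone polygon below is hypothetical; production systems would use a spatial library and authoritative hazard data:

```python
from math import radians, sin, cos, asin, sqrt

def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is the point (lon, lat) inside the polygon of (lon, lat) vertices?"""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count edge crossings of a horizontal ray extending from the point
        if (yi > lat) != (yj > lat) and lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in kilometres between two coordinates."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical flood-zone polygon and an insured location inside it
flood_zone = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
exposed = point_in_polygon(0.5, 0.5, flood_zone)  # True: the location falls in the zone
```

Counting every insured point inside a hazard polygon, and summing the insured values, is the essence of a risk accumulation; distance calculations support closest-site analysis and concentration radii.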
Four: find ways to integrate multiple functions
The solutions employed to meet Solvency II requirements need to be simple to use and flexible enough to meet different business requirements. Identifying a single, modular technology platform that matches up with overall corporate objectives helps ensure a consistent standard will be applied in every market. Maintaining one platform reduces cost of ownership and can speed up system integration. A single interface also simplifies training and education, and makes it easier to gain the skills and capabilities needed to achieve a competitive advantage.
When you link portfolio, rating and loss characteristics with exposure and market data, then centralize data validation, standardization, geocoding and spatial analysis in a single enterprise platform, you can:
- Gain confidence and consistency in your data and internal model
- Speed up time to market
- Reduce errors through automated data capture and data quality processes
- Produce transparent insight into the actual risk
Five: add value beyond Solvency II
While there is a place for point solutions designed specifically for Solvency II, insurers may be better served by building and enhancing their overall capabilities in data integration, data quality, geocoding and spatial analysis. These core capabilities can help them step up to the demands of Solvency II and reduce solvency capital requirements - but they also add value across the entire operation. From territory assignment, marketing and pricing to straight-through underwriting, online quotation systems, natural catastrophe modeling and claims management, the power of location intelligence can pay dividends in many ways.
Ed. note: This article was updated after it first appeared to address the fact that the date Solvency II goes into effect is not firmly set.