<?xml version="1.0" ?>
<tei>
	<teiHeader>
		<fileDesc xml:id="_1309.7222"/>
	</teiHeader>
	<text xml:lang="en">
			<front> Continuous compliance: a proxy-based <lb/>monitoring framework <lb/> Julien VEDANI  * † ‡ <lb/> Fabien RAMAHAROBANDRO  * § <lb/> September 26, 2013 <lb/> Abstract <lb/> Within the Own Risk and Solvency Assessment framework, the Solvency II <lb/>directive introduces the need for insurance undertakings to have efficient tools en-<lb/>abling the companies to assess their continuous compliance with regulatory solvency <lb/>requirements. Because of the great operational complexity resulting from each <lb/>complete evaluation of the Solvency Ratio, this monitoring is often complicated to <lb/>implement in practice. This issue is particularly important for life insurance com-<lb/>panies due to the high complexity of projecting life insurance liabilities. It appears <lb/>relevant in such a context to use parametric tools, such as Curve Fitting and Least <lb/>Squares Monte Carlo, in order to estimate, on a regular basis, the impact of any change <lb/>over time of the underlying risk factors on the economic own funds and on the <lb/>regulatory capital of the company. <lb/>In this article, we first outline the principles of the continuous compliance re-<lb/>quirement, then we propose and implement a possible monitoring tool enabling one to <lb/>approximate the eligible elements and the regulatory capital over time. In a final <lb/>section we compare the use of the Curve Fitting and the Least Squares Monte Carlo <lb/>methodologies in a standard empirical finite sample framework, and stress adapted <lb/>advice for future proxy users. <lb/> Key words Solvency II, ORSA, continuous compliance, parametric proxy, Least <lb/>Squares Monte Carlo, Curve Fitting. <lb/>

			*  Milliman Paris, 14 rue Pergolèse, 75116 Paris, France <lb/> † Université Claude Bernard Lyon 1, ISFA, 50 Avenue Tony Garnier, F-69007 Lyon, France <lb/> ‡ Email: julien.vedani@etu.univ-lyon1.fr <lb/>  § Email: fabien.ramaharobandro@milliman.com <lb/></front>
			
			<page>1 <lb/></page> 
			 
			<front>arXiv:1309.7222v1  [q-fin.RM] 27 Sep 2013 <lb/></front>

			<body> 1 Introduction <lb/> The Solvency II directive (European Directive 2009/138/EC), through the Own Risk <lb/>and Solvency Assessment process, introduces the necessity for an insurance undertak-<lb/>ing to be capable of assessing its regulatory solvency on a continuous yearly basis. <lb/>This continuous compliance requirement is a crucial issue for insurers, especially for <lb/>life insurance companies. Indeed, due to the various asset-liability interactions and to <lb/>the granularity of the insured profiles (see e.g. Tosetti et al. [31] and Petauton [28]), the <lb/>highly-stochastic projections of life insurance liabilities constitute a tricky framework <lb/>for the implementation of this requirement. <lb/>In the banking industry the notion of continuous solvency has already been investi-<lb/>gated through credit risk management and credit risk derivatives valuation, considering <lb/>an underlying credit model (see e.g. Jarrow et al. [19] and Longstaff et al. [22]). The <lb/>notions of ruin and solvency are different in the insurance industry, due in particular to <lb/>structural differences and to the specific Solvency II definitions. In a continuous time <lb/>scheme these have been studied in a non-life ruin theory framework, based on the ex-<lb/>tensions of the Cramér-Lundberg model [23], see e.g. Pentikäinen [26], Pentikäinen et al. <lb/> [27] and Loisel and Gerber [21]. In a life insurance framework, considering more em-<lb/>pirical schemes, closed formulas can be found under strong model assumptions. This <lb/>field has for example been investigated in Bonnin et al. [2] or Vedani and Virepinte <lb/>[33]. However, all these approaches are based on relatively strong model assumptions. <lb/>Moreover, on a continuous basis the use of such approaches generally faces the prob-<lb/>lem of parameter monitoring and needs adaptations to be extended to the continuous <lb/>compliance framework. <lb/>Monitoring life insurance liabilities is very complex and requires the introduction of <lb/>several stability assumptions in order to develop a practical solution. The great time and <lb/>algorithmic complexity of assessing the exact value of the Solvency Ratio of an insurance <lb/>undertaking is another major issue. In practice, only one complete solvency assessment <lb/>is required by the directive: the insurance undertakings have to implement a complete <lb/>calculation of their Solvency Capital Requirement and of their eligible own funds at <lb/>the end of the accounting year. We have identified two possibilities to investigate <lb/>in order to implement a continuous compliance tool: either to propose a proxy of the <lb/>Solvency Ratio, easy enough to monitor, or to directly address the solvency state (and <lb/>not the solvency level). As this last possibility leads to little information in terms of risk <lb/>measurement, we have chosen to consider the first one, based on the current knowledge <lb/>of the polynomial proxies applied to life insurance Net Asset Value (see e.g. Devineau <lb/>and Chauvigny [10]) and Solvency Ratios (Vedani and Devineau [32]), that is to say <lb/>Least Squares Monte Carlo and Curve Fitting. <lb/>Throughout Section 2 we lay the foundations of the continuous compliance re-<lb/>quirement adapted to life insurance. We underline and discuss the article defining the <lb/>continuous compliance framework and present the major difficulties to address when <lb/>implementing a monitoring tool. 
In Section 3 we propose a continuous compliance <lb/>assessment scheme based on a general polynomial proxy methodology. This tool is <lb/>implemented in Section 4, using a Least Squares Monte Carlo approach, on a standard <lb/>life insurance product. The Least Squares Monte Carlo approach is generally preferred, <lb/>in practice, to Curve Fitting because of its &quot; supposed &quot; advantages as soon as a large <lb/>dimension context is concerned, which is the case in our continuous compliance mon-<lb/>itoring scheme. We challenge this hypothesis in Section 5 where we implement both <lb/>

			<page>2 <lb/></page>

			methodologies in various dimension frameworks and compare the obtained results. <lb/> 2 Continuous compliance <lb/> The requirement for continuous compliance is introduced in Article 45(1)(b) of the <lb/>Solvency II Directive [6]: &quot; As part of its risk-management system every insurance <lb/>undertaking and reinsurance undertaking shall conduct its own risk and solvency as-<lb/>sessment. That assessment shall include at least the following: (...) the compliance, on <lb/>a continuous basis, with the capital requirements, as laid down in Chapter VI, Sections <lb/>4 and 5 &quot;  1 . <lb/>In this section, we will first briefly recall what these capital requirements are and <lb/>what they imply in terms of modelling and calculation. We will then discuss continuous <lb/>compliance, what it entails and what issues it brings up for (re)insurance companies. <lb/>Finally we will highlight some key elements for the setting of a continuous compliance <lb/>framework in this business area. <lb/> 2.1 Capital requirements <lb/> 2.1.1 Regulatory framework <lb/> The capital requirements laid down in Chapter VI, Sections 4 and 5 are related to the <lb/>Solvency Capital Requirement, or SCR (Section 4), and the Minimum Capital Require-<lb/>ment, or MCR (Section 5). <lb/>The SCR corresponds to the Value-at-Risk of the basic own funds of the company <lb/>subject to a confidence level of 99.5% over a one-year period. It has to be calculated <lb/>and communicated to the supervisory authority. Additionally, companies falling within <lb/>the scope of the Financial Stability Reporting will have to perform a quarterly calcu-<lb/>lation (limited to a best effort basis) and to report the results. Companies will have to <lb/>hold eligible own funds higher than or equal to the SCR. Failing to do so will trigger a <lb/>supervisory process aiming at recovering a situation where the eligible own funds are <lb/>in excess of the SCR. The SCR can be calculated using the Standard Formula (a set of <lb/>methodological rules set out in the regulatory texts) or an internal model (see below <lb/>for further details). <lb/>The MCR is a lower requirement than the SCR, calculated and reported quarterly. <lb/>It can be seen as an emergency floor. A breach of the MCR will trigger a supervisory <lb/>process that will be more severe than in the case of a breach of the SCR and could <lb/>lead to the withdrawal of authorization. The MCR is calculated through a factor-based <lb/>formula. The factors apply to the technical provisions and the written premiums in <lb/>non-life and to the technical provisions and the capital at risk for life business. It is <lb/>subject to an absolute floor and a floor based on the SCR. It is capped at 45% of the <lb/> SCR. <lb/> 
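As a purely illustrative aid, the sketch below clamps a factor-based (linear) MCR into the corridor described above. Only the 45% cap is quoted in the text; the 25% SCR-based floor, the absolute floor amount and the function shape are assumptions made for this example.

```python
def mcr_corridor(linear_mcr, scr, absolute_floor, floor_pct=0.25, cap_pct=0.45):
    """Clamp a factor-based MCR estimate into the SCR corridor.

    Only the 45% cap comes from the text above; the 25% floor and the
    absolute floor amount are assumed values used for illustration.
    """
    mcr = min(max(linear_mcr, floor_pct * scr), cap_pct * scr)
    return max(mcr, absolute_floor)

# A linear MCR below the SCR-based floor is raised to 25% of the SCR.
print(mcr_corridor(linear_mcr=200.0, scr=1000.0, absolute_floor=100.0))  # 250.0
```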

			<note place="footnote"> 1 Article 45(1)(b) also introduces continuous compliance &quot; with the requirements regarding technical pro-<lb/>visions, as laid down in Chapter VI, Section 2 &quot; . This means that the companies should at all times hold <lb/>technical provisions valued on the Solvency II basis. This implies that they have to be able to monitor the <lb/>evolution of their technical provisions between two full calculations. The scope of this article is limited to <lb/>continuous compliance with capital requirements. <lb/></note>

			<page> 3 <lb/></page>

			This paper focuses on the estimation of the eligible own funds and the SCR. Basi-<lb/>cally, the MCR will not be used as much as the SCR when it comes to risk management, <lb/>and compliance with the SCR will imply compliance with the MCR. <lb/> 2.1.2 Implementation for a life company <lb/> The estimation of the eligible own funds and the SCR requires carrying out calculations <lb/>that can be quite heavy. Their complexity depends on the complexity of the company&apos;s <lb/>portfolio and the modelling choices that are made, in particular between the Standard <lb/>Formula and an internal model. In this section, we present the key issues to be dealt <lb/>with by a life insurer. <lb/> Implementation scheme. To assess the SCR it is necessary to project economic bal-<lb/>ance sheets and calculate best estimates. <lb/>For many companies, the bulk of the balance sheet valuation lies in the estimation <lb/>of these best estimates. This can imply quite a long and heavy process, since the <lb/>assessment is carried out through simulations and is subject, amongst other things, to <lb/>the following constraints: <lb/> • updating the assets and liabilities model points; <lb/> • constructing a set of economic scenarios under the risk-neutral probability and <lb/>checking its market-consistency; <lb/> • calibrating and validating the stochastic model through a series of tests (e.g.: <lb/>leakage test); <lb/> • running simulations. <lb/>The valuation of the financial assets may also be quite time-consuming if a signifi-<lb/>cant part of the portfolio has to be marked to model. <lb/> SCR calculation through the Standard Formula. The calculation of the SCR through <lb/>the Standard Formula is based on the following steps: <lb/> • calculation of the various standalone SCR; <lb/> • aggregation; <lb/> • adjustment for the risk absorbing effect of technical provisions and deferred <lb/>taxes; <lb/> • calculating and adding up the capital charge for operational risk. <lb/>Each standalone SCR corresponds to a risk factor and is defined as the difference <lb/>between the current value of the eligible own funds and their value after a pre-defined <lb/>shock on the risk factor. As a consequence, for the calculation of each standalone SCR <lb/> a balance sheet valuation needs to be carried out, which means that a set of simulations <lb/>has to be run and that the assets must be valued in the economic conditions after shock. <lb/>
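To make the aggregation step concrete, here is a minimal sketch of the usual square-root aggregation of standalone capital charges through a correlation matrix. The module values and correlation coefficients below are placeholders, not the regulatory Standard Formula parameters.

```python
import numpy as np

def aggregate_scr(standalone, corr):
    """Square-root aggregation of standalone SCRs: sqrt(s' C s)."""
    s = np.asarray(standalone, dtype=float)
    c = np.asarray(corr, dtype=float)
    return float(np.sqrt(s @ c @ s))

# Placeholder standalone charges (e.g. interest rate, stock, spread)
# and an assumed correlation matrix.
standalone = [968.0, 3930.0, 2658.0]
corr = [[1.00, 0.50, 0.50],
        [0.50, 1.00, 0.75],
        [0.50, 0.75, 1.00]]
print(aggregate_scr(standalone, corr))
```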

			<page>4 <lb/></page>

			SCR calculation with a stochastic internal model. An internal model is a model <lb/>designed by the company to reflect its risk profile more accurately than the Standard <lb/>Formula. Companies deciding not to use the Standard Formula have the choice be-<lb/>tween a full internal model and a partial internal model. The latter is a model where <lb/>the capital charge for some of the risks is calculated through the Standard Formula <lb/>while the charge for the other risks is calculated with an entity-specific model. There <lb/>are two main categories of internal models 2 : <lb/> • models based on approaches similar to that of the Standard Formula, whereby <lb/>capital charges are calculated on the basis of shocks; the methodology followed <lb/>in this case is the same as the one described in Subsection 2.1.2; <lb/> • fully stochastic models: the purpose of this type of model is to exhibit a prob-<lb/>ability distribution of the own funds at the end of a 1-year period, in order to <lb/>subsequently derive the SCR, by calculating the difference between the 99.5% <lb/>quantile and the initial value. <lb/>In the latter case, the calculations are based on a methodology called Nested Sim-<lb/>ulations. It is based on a twofold process of simulations: <lb/> • real-world simulations of the risk factors&apos; evolution over 1 year are carried out; <lb/> • for each real-world simulation, the balance sheet must be valued at the end of <lb/>the 1-year period. As per the Solvency II requirements, this valuation has to be <lb/>market-consistent. It is carried out through simulations under the risk-neutral <lb/>probability. <lb/>More details on Nested Simulations can be found in Broadie et al. [4] or Devineau <lb/>and Loisel [11]. <lb/> 2.2 An approach to continuous compliance <lb/> In the rest of this article we restrict the scope of our study to life insurance. <lb/> 2.2.1 Defining an approach <lb/> As mentioned above, the Solvency II Directive requires companies to permanently <lb/>cover their SCR and MCR. This is what we refer to as continuous compliance in this <lb/>paper. The regulatory texts do not impose any specific methodology. Moreover the <lb/>assessment of continuous compliance is introduced as an element of the Own Risk and <lb/>Solvency Assessment (ORSA), which suggests that the approach is for each company <lb/>to define. <lb/>Different approaches can be envisaged. Below we present some assessment <lb/>methodologies that companies can rely on and may combine in a continuous compli-<lb/>ance framework. <lb/> • Full calculations: i.e. the same calculations as those carried out for annual re-<lb/>porting to the supervisory authority: this type of calculation can be performed <lb/>

			<note place="footnote"> 2 These approaches can be mixed within one model. <lb/></note>

			<page> 5 <lb/></page>

			several times during the year. However the process can be heavy and time-<lb/>consuming, as can be seen from the description made in Subsection 2.1.2. As a <lb/>consequence, it seems operationally difficult to carry out such calculations more <lb/>than quarterly (actually most companies are likely to run full calculations only <lb/>once or twice a year). <lb/> • Simplified full calculations: companies may decide to run calculations similar <lb/>to those described in the previous item but to freeze some elements. For example <lb/>they could decide not to update the liabilities model points if the portfolio is <lb/>stable and if the time elapsed since the last update is short; they could also decide <lb/>to freeze some modules or sub-modules that are not expected to vary significantly <lb/>over a short period of time. <lb/> • Proxies: companies may develop methods to calculate approximate values of <lb/>their Solvency Ratio 3 (SR). Possible approaches include, among others, abacuses <lb/>and parametric proxies. <lb/> • Indicators monitoring: as part of their risk management, companies will moni-<lb/>tor risk indicators and set limits to them. These limits may be set so that respect-<lb/>ing them ensures that some SCR modules stay within a given range. <lb/> 2.2.2 Overview of the proposed approach <lb/> The approach presented in this paper relies on the calibration of proxies allowing one to <lb/>estimate the SR quickly and taking as input a limited number of easy-to-produce indi-<lb/>cators. It has been developed for life companies using the Standard Formula. <lb/> Proxies: generic principles. Simplifying the calculations requires limiting the num-<lb/>ber of risk factors that will be monitored and taken into account in the assessment <lb/>to the most significant ones. For most life insurance companies, these risk factors will be <lb/>financial (e.g.: stock level, yield curve). <lb/>In the framework described in the following sections, the proxies are supposed to be <lb/>potentially used to calculate the SR at any point in time. For operational practicality, the <lb/>inputs have to be easily available. In particular, for each risk factor, an indicator will be <lb/>selected for monitoring purposes and to be used as input for the proxy (see Section 3 for <lb/>more insight about proxies). The selected indicators will have to be easily obtainable <lb/>and reflect the company&apos;s risk profile. <lb/>As explained in Section 3, our approach relies on the development and the cali-<lb/>bration of proxies in order to calculate in a quick and simple way the company&apos;s Net <lb/>Asset Value (NAV ) and the most significant SCR sub-modules. The overall SCR is then <lb/>calculated through an aggregation process based on the Standard Formula&apos;s structure <lb/>and using the tools the company uses for its regulatory calculations. As a consequence, <lb/>a selection has to be made regarding the sub-modules that will be calculated by proxy. <lb/>The others are frozen or updated proportionally to a volume measure (e.g. mortality <lb/> SCR set proportional to the technical provisions). <lb/>

			<note place="footnote"> 3 Solvency Ratio = Eligible Own Funds / SCR. <lb/></note>

			<page> 6 <lb/></page>

			Figure 1: Continuous compliance framework. <lb/> Continuous compliance framework. Under Solvency II, companies will set a fre-<lb/>quency (at least annual) for the full calculation of the SCR  4 . Additionally, they will <lb/>set a list of pre-defined events and circumstances that will trigger a full calculation <lb/>whenever they happen. The proxies will be used to estimate the SR between two full <lb/>calculations and should be calibrated every time a full calculation is performed. This <lb/>process is summarized in Figure 1 below. <lb/>Here below are a few examples of pre-defined events and circumstances: <lb/> • external events (e.g.: financial events, pandemics), <lb/> • internal decisions (e.g.: change in asset mix), <lb/> • risk factors outside the proxies&apos; zone of validity. <lb/> 3 Quantitative approach to assess the continuous com-<lb/>pliance <lb/> Note first that the study presented in this paper was carried out in a context where the <lb/>adjustment for the loss-absorbing capacity of technical provisions was lower than the <lb/>Future Discretionary Benefits ( &quot; FDB &quot; ) (see Level 2 Implementation Measures [5]). As <lb/>a consequence, the Value of In-Force and the NAV were always calculated net of the <lb/>loss-absorbing effect of future profit participation. In cases where the loss-absorbing <lb/>capacity of technical provisions breaches the FDB, further developments (and addi-<lb/>tional assumptions), not presented in this paper, will be necessary. <lb/>In Section 3 we present a proxy implementation that enables one to assess the <lb/>continuous compliance, and the underlying assumptions. <lb/>

			<note place="footnote"> 4 We are referring here to full calculations in the broad sense: the infra-annual calculations may be sim-<lb/>plified full calculations. <lb/></note>

			<page> 7 <lb/></page>

			3.1 Assumptions underlying the continuous compliance assessment <lb/>framework <lb/> As explained in Subsection 2.2.2, several simplifications will be necessary in order to <lb/>operationalize the continuous compliance assessment using our methodology. <lb/> 3.1.1 Selection of the monitored risks <lb/> First, we need to assume that the company can be considered subject to a limited num-<lb/>ber of significant and easily measurable risks with little loss of information. <lb/>In most cases this assumption is quite strong. Indeed, there are numerous underly-<lb/>ing risks for a life insurance undertaking and these are not always easily measurable. <lb/>For example, the mortality and longevity risks, to cite only those, are very difficult <lb/>to monitor on an infra-year time step, simply because of the lack of data. Moreover <lb/>the notion of significance will have to be quantifiably justified. For instance, this signifi-<lb/>cance can be defined considering the known impact of the risk on the SCR or on the <lb/>company&apos;s balance sheet, or considering its volatility. <lb/>In the case of a life insurance business it seems particularly relevant to select the <lb/>financial risks, easily measurable and monitorable. As a consequence, the selected risks <lb/>will for example be the stock level, interest rates, spreads (corporate, sovereign), implicit volatilities <lb/>(stock / interest rates) and the illiquidity premium. <lb/>In order to enable a frequent monitoring of the selected risks and of their impact, <lb/>it is necessary to add the assumption that their evolution over time can be satisfactorily <lb/>replicated by the evolution of composite indexes defined continuously through <lb/>the monitoring period. <lb/>This assumption is a more tangible translation of the measurable aspect of the risks. <lb/>The objective here is to enable the risks&apos; monitoring through reference indexes. <lb/>For example, an undertaking which is mainly exposed to European stocks can con-<lb/>sider the EUROSTOXX50 level in order to efficiently synthesize its stock level risk. <lb/>Another possibility may be to consider weighted European stock indexes to obtain an <lb/>aggregated indicator that is more accurate and representative of the entity-specific risk. For <lb/>example, for the sovereign spread risk, it seems relevant for a given entity to monitor <lb/>an index set up as a weighted average of the spread extracted from the various bonds <lb/>in its asset portfolio. <lb/>Eventually, the undertaking must aim at developing an indexes table, similar to the <lb/>following one. <lb/>Table 1: Example of indexes table — Significant risks and their associated indicators. <lb/>Stock (level): 70% CAC40 / 30% EUROSTOXX50 <lb/>Risk-free rate (level): Euro swap curve (averaged level evolution) <lb/>Spread (sovereign): Weighted average of the spread by issuing country; weights: % market value in the asset portfolio <lb/>Spread (corporate): iTraxx Europe Generic 10Y Corporate <lb/>Volatility (stock): VCAC Index <lb/>Illiquidity premium: Illiquidity premium (see QIS5 formula [7]) <lb/>
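The composite indicators of Table 1 are simple weighted combinations of observable market levels. A minimal sketch, using the stock weights from the table and hypothetical index values:

```python
def composite_level(levels, weights):
    """Weighted average of component index levels, e.g. the
    70% CAC40 / 30% EUROSTOXX50 stock indicator of Table 1."""
    return sum(weights[name] * levels[name] for name in weights)

stock_weights = {"CAC40": 0.70, "EUROSTOXX50": 0.30}
levels_at_calibration = {"CAC40": 3600.0, "EUROSTOXX50": 2600.0}   # hypothetical
levels_at_monitoring = {"CAC40": 3700.0, "EUROSTOXX50": 2650.0}    # hypothetical

i0 = composite_level(levels_at_calibration, stock_weights)
it = composite_level(levels_at_monitoring, stock_weights)
print(it / i0 - 1.0)  # relative evolution of the composite stock indicator
```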

			<page>8 <lb/></page>

			Figure 2: Simplified monitoring framework: Illustration <lb/>Generally speaking, all the assumptions presented here are essentially induced by the <lb/>operational constraints linked to the definition of the continuous compliance framework <lb/>(full calculation frequency / number of monitored risks). Indeed, it is impossible in <lb/>practice to monitor each underlying risk day by day. We therefore need to restrict the <lb/>framework by selecting the most influential risks and indicators enabling their practical <lb/>monitoring. <lb/>In addition, it is irrelevant to consider risks that are too stable or that cannot be mon-<lb/>itored infra-annually. In this case, they can simply be assumed frozen, or updated <lb/>proportionally to a volume measure, through the monitoring period, with little loss of <lb/>information. <lb/>In this simplified framework, a change of the economic conditions over time will <lb/>be summarized in the realized indexes&apos; level transitions. It is then possible to build a <lb/>proxy enabling one to approximate quickly the SR at each monitoring date, knowing <lb/>the current level of the composite indicators. <lb/>Figure 2 illustrates the process to follow and the underlying assumptions made in <lb/>a simplified framework. Let us develop a case where the company&apos;s asset portfolio <lb/>can be divided into one stock pool and one bond pool. Two underlying risks have been <lb/>identified, the stock level risk and the interest rate level risk (average level change of <lb/>the rates curve 5 ). Our assumptions lead us to consider that, once the risks are associated with <lb/>composite indexes, it is possible to approximate the asset portfolio by a mix between, <lb/> 
			
			<note place="footnote"> 5 Note that other kinds of interest rates risks can be selected in order to address the term structure risk <lb/>more precisely, such as the slope and curvature risks. For more insight on this subject see e.g. Diebold and <lb/>Li [12]. <lb/></note>

			<page> 9 <lb/></page>

			• a stock basket with the same returns, composed of the composite stock index <lb/>only (e.g. 70% CAC 40 / 30% EUROSTOXX50), <lb/> • a bond basket replicating the cash-flows of the bonds discounted using a rate <lb/>curve induced from the initial curve translated by the average variation of the <lb/>reference rate curve (the &quot; composite &quot; curve, e.g. the Euro swap curve). <lb/>Eventually we can decompose the process presented in Figure 2 between, <lb/> • a vertical axis where one simplifies the risks themselves, <lb/> • and a horizontal axis where one transforms the risks into composite indexes. <lb/>To conclude, note that the assumptions made here will lead to the creation of a basis <lb/>risk. Indeed, even if the considered indexes are very efficient, one part of the insurance <lb/>portfolio sensitivity will be omitted due to the approximations. In particular the risks <lb/>and indexes must be chosen very precisely, entity-specifically. A small mistake can <lb/>have great repercussions on the approximate SR. In order to minimize the basis risk, the <lb/>undertaking will have to back-test the choices made and the underlying assumptions. <lb/> 3.1.2 Selection of the monitored marginal SCR <lb/> The continuous compliance framework and tool presented in this paper apply to com-<lb/>panies that use a Standard Formula approach to assess the SCR value (but can provide <lb/>relevant information to companies that use an internal model). <lb/>In practice it will not be necessary to monitor every marginal SCR of a company. <lb/>Indeed, some risk modules will be little or not at all impacted by any infra-annual evolution <lb/>of the selected risks. Moreover, a certain number of sub-modules have a small weight <lb/>in the calculation of the Basic Solvency Capital Requirement (BSCR). These small <lb/>and/or stable marginal SCR will be frozen or updated proportionally to a volume mea-<lb/>sure throughout the monitoring period. <lb/>Eventually, the number of risk modules that will have to be updated precisely (the <lb/>most meaningful marginal SCR) should be reduced to less than ten. Note that, among <lb/>the marginal SCR to recalculate, some correspond to modeled risk factors but <lb/>others will not correspond to the selected risk factors while being strongly impacted by <lb/>them (e.g. the massive lapse SCR). <lb/> This selection of the relevant SCR sub-modules will introduce a new assumption <lb/>and a new basis risk, necessary for our methodology&apos;s efficiency. The basis risk associ-<lb/>ated with this assumption, linked to the fact that some marginal SCR will not be updated at <lb/>each monitoring date, can be reduced by considering a larger number of sub-modules. <lb/>One will have to approach this problem pragmatically, taking a minimal number of <lb/>risk modules into account in order to limit the number of future calculations, while <lb/>keeping the error made on the overall SCR under control as well as possible. <lb/> 3.2 Use of parametric proxies to assess the continuous compliance <lb/> In the previous section we have defined a reference framework in which we will de-<lb/>velop our monitoring tool. The proposed methodology aims at calibrating proxies that <lb/>

			<page>10 <lb/></page>

			replicate the central and shocked NAV as functions of the levels taken by the chosen <lb/>indexes. <lb/> 3.2.1 Assumption of stability of the asset and liability portfolios <lb/> We now work with closed asset and liability portfolios, with no trading, claim or pre-<lb/>mium cash-flows, in order to consider a stable asset-mix and volume of assets and lia-<lb/>bilities. Eventually, all the balance sheet movements are now induced by the financial <lb/>factors. <lb/>This new assumption may seem strong at first sight. However, it seems justified <lb/>over a short-term period. In the general case the evolution of these portfolios is slow <lb/>for mature life insurance companies. This evolution is therefore assumed to have little <lb/>significance for the monitoring period of our continuous compliance monitoring tool. <lb/>Eventually, if a significant evolution happens in practice (e.g. a portfolio purchase / sale) <lb/>this will lead to a full recalibration of the tool (see Subsection 4.2.3 for more insight on <lb/>the monitoring tool governance). <lb/> 3.2.2 Economic transitions <lb/> Let us recall the various assumptions considered until now. <lb/> • H1 : The undertaking&apos;s underlying risks can be summarized into a small pool of <lb/>significant and easily quantifiable risks with little loss of information. <lb/> • H2 : The evolution of these risks can be perfectly replicated by monitoring com-<lb/>posite indicators, well defined at each date of the monitoring period. <lb/> • H3 : The number of marginal SCR that need to be precisely updated at each <lb/>monitoring date can be reduced to the most impacting risk modules with little <lb/>loss of information. <lb/> • H4 : The asset and liability portfolios are assumed frozen between two calibration <lb/>dates of the monitoring tool. <lb/>Under the assumptions H1, H2, H3 and H4 it is possible to summarize the im-<lb/>pact of a time evolution of the economic conditions on the considered portfolio into <lb/>an instant level shock of the selected composite indicators. This instant shock will be <lb/>denoted &quot; economic transition &quot; and we will see below that it can be identified with a set <lb/>of elementary risk factors similar to those presented in Devineau and Chauvigny [10]. <lb/>Figure 3: Economic transition &quot; 0 → 0  +  &quot; . <lb/>Let us consider a two-shock framework: the stock level risk, associated with an index <lb/>denoted by S(t) at date t ≥ 0 (t = 0 being the tool&apos;s calibration date) and an interest <lb/>

			<page>11 <lb/></page>

			rate level risk, associated with zero-coupon prices, denoting by P(t, m) the zero-coupon <lb/>price of maturity m at date t ≥ 0. Now, let us consider an observed evolution between 0 and <lb/>a monitoring date t &gt; 0. Finally, to calculate the NAV at date t, under our assumptions, <lb/>it is only necessary to know the new levels S(t), P(t, m). <lb/>The real evolution, from (S(0), (P(0, m))_{m∈[1;M]}) to (S(t), (P(t, m))_{m∈[1;M]}), can <lb/>eventually be seen as a couple of risk factors, <lb/>
$$\varepsilon = \left( {}^{s}\varepsilon = \ln\frac{S(t)}{S(0)},\ {}^{ZC}\varepsilon = -\frac{1}{M}\sum_{m=1}^{M}\frac{1}{m}\ln\frac{P(t,m)}{P(0,m)} \right),$$
<lb/>denoting by  s ε (respectively  ZC ε) the stock (resp. zero-coupon) risk factor. <lb/>This evolution of the economic conditions, translated into a risk factors tuple, is <lb/>called economic transition in the following sections of this paper and can easily be <lb/>extended to a greater number of risks. The risk factors will be used in our algorithm <lb/>to replicate the instant shocks &quot; 0 → 0  +  &quot; equivalent to the real transitions &quot; 0 → t &quot; . <lb/>Moreover, the notion of economic transition will be used to designate either an instant <lb/>shock or a real evolution of the economic situation between 0 and t &gt; 0. In this latter <lb/>case we will talk about a real or realized economic transition. <lb/> 3.2.3 Probable space of economic transitions for a given α% threshold <lb/> Let us consider, for example, a 3-month monitoring period (with a full calibration <lb/>of the monitoring tool at the start and at the end of the period). It is possible to assess a <lb/>priori a probable space of the quarterly economic transitions, under the historical <lb/>probability P and for a given threshold α%. One simply has to study a <lb/>deep enough historical data set of the quarterly evolutions of the indexes and <lb/>to assess the interval between the (1−α%)/2 and the (1+α%)/2 historical quantiles of the risk <lb/>factors extracted from the historical data set. <lb/>For example, for the stock risk factor  s ε, knowing the historical summary (S_{i/4})_{i∈[0;4T+1[}, <lb/>one can extract the risk factor&apos;s historical data set <lb/>
$$\left( {}^{s}\varepsilon_{\frac{i}{4}} = \ln\frac{S_{\frac{i+1}{4}}}{S_{\frac{i}{4}}} \right)_{i\in[0;4T[}$$
<lb/>and obtain the probable space of economic transitions for a given α% threshold, <lb/>
$$\left[\, q_{\frac{1-\alpha\%}{2}}\!\left( \left({}^{s}\varepsilon_{\frac{i}{4}}\right)_{i\in[0;4T[} \right) ;\ q_{\frac{1+\alpha\%}{2}}\!\left( \left({}^{s}\varepsilon_{\frac{i}{4}}\right)_{i\in[0;4T[} \right) \right].$$
<lb/>In a more general framework, consider economic transitions represented by J-tuples <lb/>of risk factors ε = ( 1 ε, ...,  J ε) of which one can get a historical summary <lb/>(( 1 ε_{i/4}, ...,  J ε_{i/4}))_{i∈[0;T[}. The following probable interval of the economic transitions with <lb/>an α% threshold can be used, <lb/>
$$E_{\alpha} = \left\{ \left({}^{1}\varepsilon, ..., {}^{J}\varepsilon\right) \in \prod_{j=1}^{J} \left[\, q_{\frac{1-\alpha\%}{2}}\!\left( \left({}^{j}\varepsilon_{\frac{i}{4}}\right)_{i\in[0;T[} \right) ;\ q_{\frac{1+\alpha\%}{2}}\!\left( \left({}^{j}\varepsilon_{\frac{i}{4}}\right)_{i\in[0;T[} \right) \right] \right\}.$$
<lb/>Note that such a space does not take correlations into account. Indeed each risk <lb/>factor&apos;s interval is defined independently from the others. In particular, such a space is <lb/>prudent: it contains more than α% of the historically probable economic evolutions. <lb/>
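A minimal sketch of this historical construction, assuming a quarterly series of composite stock index levels is available; the index values are placeholders and only the stock risk factor is treated.

```python
import numpy as np

def stock_risk_factors(quarterly_levels):
    """Quarterly stock risk factors: s_eps_{i/4} = ln(S_{(i+1)/4} / S_{i/4})."""
    levels = np.asarray(quarterly_levels, dtype=float)
    return np.log(levels[1:] / levels[:-1])

def probable_interval(historical_factors, alpha=0.90):
    """Quantile interval [q_{(1-alpha)/2} ; q_{(1+alpha)/2}] of the risk factors."""
    low, high = np.quantile(historical_factors, [(1 - alpha) / 2, (1 + alpha) / 2])
    return low, high

# Hypothetical quarterly history of the composite stock index.
levels = [100, 104, 98, 101, 95, 102, 108, 103, 99, 105]
factors = stock_risk_factors(levels)
print(probable_interval(factors, alpha=0.90))
```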

			<page>12 <lb/></page>

			Figure 4: Calculation of an estimator of NAV_{0^+}(ε) using a Monte Carlo method. <lb/> 3.2.4 Implementation — Replication of the central NAV <lb/> We will now assume that J different risks have already been selected. <lb/>The implementation we will now describe aims at calibrating a polynomial proxy <lb/>that replicates NAV_{0^+}(ε), the central NAV at the date t = 0^+, associated with an eco-<lb/>nomic transition ε = ( 1 ε, ...,  J ε). The proxy will allow, at each monitoring date t, after <lb/>evaluating the observed economic transition ε_t (realized between 0 and t), to obtain a <lb/>corresponding approximate central NAV value, NAV^{proxy}_{0^+}(ε_t). <lb/> Notation and preliminary definitions. To build the NAV^{proxy}_{0^+}(ε) function, our ap-<lb/>proach is inspired by the Curve Fitting (CF) and Least Squares Monte Carlo (LSMC) <lb/>polynomial proxy approaches proposed in Vedani and Devineau [32]. It is possible <lb/>to present a generalized implementation plan for these kinds of approaches. They both <lb/>aim at approximating the NAV using a polynomial function whose monomials are sim-<lb/>ple and crossed powers of the elements in ε = ( 1 ε, ...,  J ε). <lb/>Let us introduce the following notation. Let Q be a risk-neutral measure condi-<lb/>tioned by the real-world financial information known at date 0^+, and F_{0^+} the filtration that <lb/>characterizes the real-world economic information contained within an economic tran-<lb/>sition between dates 0 and 0^+. Let R_u be the profit realized between u − 1 and u ≥ 1, <lb/>and δ_u the discount factor at date u ≥ 1. Let H be the liability run-off horizon. <lb/>The gist of the method is described below. <lb/>The NAV_{0^+}(ε) depends on the economic information through the period [0; 0^+], <lb/>
$$NAV_{0^+}(\varepsilon) = \mathbb{E}^{\mathbb{Q}}\left[\left.\sum_{t=1}^{H}\delta_t R_t\,\right|\,\mathcal{F}_{0^+}\right].$$
<lb/>For a given transition ε it is possible to estimate NAV_{0^+}(ε) by implementing a standard <lb/>Asset Liability Management model calculation at date t = 0^+. In order to do so one <lb/>must use an economic scenarios table of P simulations generated under the probability <lb/>measure Q between t = 0^+ and t = H, initialized by the levels (and volatilities if the <lb/>risk is chosen) of the various economic drivers as induced by transition ε. <lb/>For each simulation p ∈ [1; P] and date t ∈ [1; H], one has to calculate the profit <lb/>outcome R^p_t using an Asset-Liability Management (ALM) model and, knowing the <lb/>corresponding discount factor δ^p_t, to assess the Monte Carlo estimator, <lb/>
$$\widehat{NAV}_{0^+}(\varepsilon) = \frac{1}{P}\sum_{p=1}^{P}\sum_{t=1}^{H}\delta^p_t R^p_t.$$
<lb/>
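Schematically, once the ALM model has produced the discounted profits for a fixed transition ε, the estimator above is a plain average; the sketch below stands in for the ALM output with random placeholder values.

```python
import numpy as np

def nav_estimator(discounted_profits):
    """Monte Carlo estimator (1/P) * sum_p sum_t delta_t^p R_t^p for a
    (P, H) array of discounted profit outcomes produced by an ALM model."""
    return float(np.mean(np.sum(discounted_profits, axis=1)))

# Placeholder for ALM output: P = 1000 risk-neutral simulations, H = 30 years.
rng = np.random.default_rng(0)
discounted_profits = rng.normal(loc=5.0, scale=50.0, size=(1000, 30))
print(nav_estimator(discounted_profits))
```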

			<page>13 <lb/></page>

			When P = 1 we obtain an inefficient estimator of NAV_{0^+}(ε), which we will denote <lb/>by NPV_{0^+}(ε) (Net Present Value of margins), according to the notation of Vedani and <lb/>Devineau [32]. Note that for a given transition ε, NPV_{0^+}(ε) is generally very volatile <lb/>and it is necessary to have P high to get an efficient estimator of NAV_{0^+}(ε). <lb/> Methodology. Let us consider a set of N transitions obtained randomly from a proba-<lb/>ble space of economic transitions and denoted by (ε_n = ( 1 ε_n, ...,  J ε_n))_{n∈[1;N]}. We now <lb/>have to aggregate all the N associated risk-neutral scenario tables, each one initialized <lb/>by the drivers&apos; levels (and volatilities if needed) corresponding to one of the economic <lb/>transitions in the set, into a unique table (see Figure 5). <lb/>Figure 5: Aggregate table. <lb/>The ALM calculations launched on such a table enable one to get N × P outcomes <lb/>
$$\left( NPV^{p}_{0^+}(\varepsilon_n) \right)_{n\in[1;N],\ p\in[1;P]},$$
<lb/>and subsequently an N-sample <lb/>
$$\left( \widehat{NAV}_{0^+}(\varepsilon_n) = \frac{1}{P}\sum_{p=1}^{P} NPV^{p}_{0^+}(\varepsilon_n) \right)_{n\in[1;N]}.$$
<lb/>Then, the estimated outcomes (NAV_{0^+}(ε_n))_{n∈[1;N]} are regressed on simple and crossed <lb/>monomials of the risk factors in ε = ( 1 ε, ...,  J ε). The regression is made by Ordinary <lb/>Least Squares (OLS) and the optimal regressors x = (Intercept,  1 x, ...,  K x) (with, for all <lb/>k ∈ [1; K],  k x = ∏_{j=1}^{J} ( j ε)^{α_j} for a certain J-tuple (α_1, ..., α_J) in N^J) are selected using <lb/>a stepwise methodology. For more developments about these approaches see Draper <lb/>and Smith [13] or Hastie et al. [16]. <lb/>Let β = ( Int β,  1 β, ...,  K β) be the optimal multilinear regression parameters. <lb/>
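A minimal sketch of this calibration step, assuming the N risk-factor tuples and the corresponding NAV estimates are available; the stepwise selection is replaced here by a fixed basis of simple and crossed monomials, and the OLS fit uses numpy's least-squares routine.

```python
import numpy as np

def design_matrix(eps):
    """Fixed basis of simple and crossed monomials of an (N, J) array of
    risk factors: intercept, linear terms, squares and pairwise products
    (standing in for the stepwise-selected regressors)."""
    eps = np.atleast_2d(np.asarray(eps, dtype=float))
    n, j = eps.shape
    cols = [np.ones(n)]
    cols += [eps[:, a] for a in range(j)]
    cols += [eps[:, a] ** 2 for a in range(j)]
    cols += [eps[:, a] * eps[:, b] for a in range(j) for b in range(a + 1, j)]
    return np.column_stack(cols)

def calibrate_proxy(eps, nav_estimates):
    """OLS estimate: beta_hat minimizing ||NAV - X beta||^2."""
    beta, *_ = np.linalg.lstsq(design_matrix(eps), nav_estimates, rcond=None)
    return beta

# Placeholder calibration set: N = 500 transitions on J = 2 risk factors.
rng = np.random.default_rng(1)
eps = rng.uniform(-0.3, 0.3, size=(500, 2))
nav = 100 - 80 * eps[:, 0] + 40 * eps[:, 1] - 60 * eps[:, 0] * eps[:, 1]
nav += rng.normal(scale=5.0, size=500)
beta = calibrate_proxy(eps, nav)
print(design_matrix([[-0.10, 0.05]]) @ beta)   # proxy NAV for one transition
```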

			<page>14 <lb/></page>

			The considered multilinear regression can therefore be written in matrix form <lb/>Y = Xβ + U, denoting <lb/>
$$Y = \begin{pmatrix} \widehat{NAV}_{0^+}(\varepsilon_1) \\ \vdots \\ \widehat{NAV}_{0^+}(\varepsilon_N) \end{pmatrix}, \qquad X = \begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix},$$
<lb/>with, for all n ∈ [1; N], x_n = (1,  1 x_n, ...,  K x_n), for all k ∈ [1; K],  k x_n = ∏_{j=1}^{J} ( j ε_n)^{α_j}, <lb/>and U = Y − Xβ. <lb/>In this regression, the conditional expectation of NAV_{0^+}(ε_n) given the σ-field gen-<lb/>erated by the regressors matrix X is simply seen as a linear combination of the regres-<lb/>sors. For more insight about multiple regression models the reader may consult Saporta <lb/>[30]. <lb/>The underlying assumption of this model can also be written ∃β, E[Y | X] = Xβ. <lb/>Under the assumption that $X'X$ is invertible (with $Z'$ the transpose of a given <lb/>vector or matrix Z), the estimated vector of the parameters is, <lb/>
$$\hat{\beta} = (X'X)^{-1}X'Y.$$
<lb/>Moreover, for a given economic transition ε̄ and its associated set of optimal regres-<lb/>sors x̄, x̄β̂ is an unbiased and consistent estimator of $\mathbb{E}[\widehat{NAV}_{0^+}(\bar{\varepsilon}) \mid \bar{x}] = \mathbb{E}[NAV_{0^+}(\bar{\varepsilon}) \mid \bar{x}]$. <lb/>When σ(x) = F_{0^+}, which is generally the case in practice, x̄β̂ is an efficient estimator <lb/>of NAV_{0^+}(ε̄) and we get an efficient polynomial proxy of the central NAV for every <lb/>economic transition. <lb/>Eventually, it is necessary to test the goodness of fit. The idea is to calculate <lb/>several approximate outcomes of the central NAV , associated with an out of sample  6 set of <lb/>economic transitions, using a Monte Carlo method on a great number of secondary <lb/>scenarios, and to compare these outcomes to those obtained using the proxy. <lb/> 3.2.5 Implementation — Replication of the shocked NAV <lb/> At each monitoring date, we aim at knowing each pertinent marginal SCR value, for <lb/>each chosen risk module. With the proxy calibrated in the previous section one can <lb/>calculate an approximate value of the central NAV . We now have to duplicate the <lb/>methodology presented in Subsection 3.2.4, adapted for each marginally shocked NAV <lb/>(considering the Standard Formula shocks). 7 <lb/>The implementation is fully similar except that the shocked proxies are <lb/>calibrated on N outcomes of marginally shocked NAV_{0^+}. Indeed each marginal SCR is <lb/>a difference between the central NAV and a NAV after the application of the marginal <lb/>
			
			<note place="footnote">6 Scenarios that are not included in the set used during the calibration steps. <lb/></note> 
			
			<note place="footnote">7 Note that it is necessary to calibrate new &quot; after shock &quot; proxies because it is impossible to assimilate a <lb/>Standard Formula shock to a transition shock... <lb/></note>

			<page> 15 <lb/></page>

			shock. We therefore need the NAV after shock that takes the conditions associated with <lb/>an economic transition into account. <lb/>This enables one to obtain, for each shock nb i, a set (NAV^{shock nb i}_{0^+}(ε_n))_{n∈[1;N]}, <lb/>a new optimal regressors set (Intercept,  1 x^{shock nb i}, ...,  K x^{shock nb i}) and new optimal pa-<lb/>rameter estimators β̂^{shock nb i}. <lb/> 3.2.6 Practical monitoring <lb/> Once the methodology has been implemented, the obtained polynomial proxies enable <lb/>one, at each date within the monitoring period, to evaluate the central and shocked NAV <lb/>values knowing the realized economic transition. <lb/>At each monitoring date t, the process is the following. <lb/> • Assessment of the realized transition between 0 and t, ε̂. <lb/> • Derivation of the values of the optimal regressors set for each proxy: <lb/> – x̄, the realized regressors set for the central proxy, <lb/> – x̄^{shock nb 1}, ..., x̄^{shock nb J}, the regressors sets for the J shocked proxies. <lb/> • Calculation of the approximate central and shocked NAV levels at date t: <lb/> – x̄β̂, the approximate central NAV, <lb/> – x̄^{shock nb 1}β̂^{shock nb 1}, ..., x̄^{shock nb J}β̂^{shock nb J}, the J approximate shocked NAV. <lb/> • Calculation of the approximate marginal SCR and, considering frozen values, <lb/>or values that are updated proportionally to a volume measure, for the other <lb/>marginal SCR, Standard Formula aggregation to evaluate the approximate overall <lb/> SCR and SR  8 . <lb/> 3.3 Least-Squares Monte Carlo vs. Curve Fitting — The large dimensioning issue <lb/> The implementation developed in Subsection 3.2 is an adapted application, generalized <lb/>to the N × P framework, of polynomial approaches such as LSMC and CF, already <lb/>used in previous studies to project NAV values at t years (t ≥ 1). For more insight <lb/>about these approaches, see for example Vedani and Devineau [32], Algorithmics [1] <lb/>or Barrie &amp; Hibbert [17]. <lb/>When P = 1 and N is very large (basically the proxies are calibrated on Net Present <lb/>Values of margins / NPV ), we are in the case of an LSMC approach. On the contrary, <lb/>when N is rather small and P large, we are in the case of a CF approach. <lb/>Both approaches generally deliver similar results. However the LSMC approach is often seen <lb/>as more stable than CF when a large number of regressors are embedded in the <lb/>proxy. This clearly matches the continuous compliance case, where the user generally <lb/>considers a larger number of risk factors compared to the usual LSMC methodologies, <lb/>
			
			<note place="footnote">8 For more insight concerning the Standard Formula aggregation, especially about the evaluation of the <lb/>differed taxes, see Subsection 4.1.5. <lb/></note>

			<page> 16 <lb/></page>

			used to accelerate Nested Simulations for example. In our case, this large dimensioning <lb/>issue makes a lot of sense. <lb/>In Section 4 we will apply the methodology to four distinct risk factors: the stock <lb/>level risk, the interest rate level risk, and the widening of corporate and sovereign <lb/>spread risks. We have chosen to implement this application using an LSMC method. In <lb/>Section 5 we eventually try to challenge the commonly accepted idea that this methodol-<lb/>ogy is more robust than CF in a large dimension context. <lb/> 4 LSMC approach adapted to the continuous compli-<lb/>ance issue <lb/> In Section 4 we will implement the presented methodology in a standard savings prod-<lb/>uct framework. The ALM model used for the projections takes profit sharing mech-<lb/>anisms, target crediting rates and dynamic lapse behaviors of policyholders into ac-<lb/>count. Its characteristics are similar to those of the model used in Section 5 of Vedani <lb/>and Devineau [32]. The economic assumptions are those of 31/12/2012. <lb/> 4.1 Implementation of the monitoring tool — Initialization step <lb/>and proxies calibration <lb/> Firstly it is necessary to shape the exact framework of the study. We have to select <lb/>the significant risks to be monitored, to choose representative indexes and then to iden-<lb/>tify the risk modules that will be updated. Note that the other risk modules will be <lb/>considered frozen through the monitoring period. <lb/>The monitoring period must be chosen short enough to ensure a good validity of <lb/>our stability assumptions for the risk modules that are not updated and for the balance <lb/>sheet composition. However, it also defines the time between two complete proxy cali-<lb/>brations and, as a consequence, it must be chosen long enough not to force too frequent <lb/>calibrations, which are highly time-consuming. In this study we have therefore chosen <lb/>to consider a quarterly monitoring period. <lb/> 4.1.1 Initialization step — Implementation of a complete regulatory solvency <lb/>calculation <lb/> In order to quantify the relative relevance of the various marginal SCR of the Standard <lb/>Formula, it is recommended to implement, as a preliminary step, a complete regulatory <lb/>solvency calculation before a calibration of the monitoring tool. Moreover, seen as an <lb/> out of sample scenario, this central calculation can be used as a validation point for the <lb/>calibrated proxies. 9 <lb/> It is also possible to select the marginal SCR based on expert statements or on the <lb/>undertaking&apos;s expertise, knowing the products&apos; sensitivities to the various shocks and <lb/>economic cycles at the calibration date (and the previous SCR calculations). <lb/> 
			
			<note place="footnote">9 The implementation of two to four complete regulatory solvency calculations may be a strong constraint <lb/>for most insurance undertakings however, due to the several assumptions made to implement the monitoring <lb/>tool, we recommend to consider monitoring period no longer than six months. <lb/></note>

			<page> 17 <lb/></page>

			4.1.2 Initialization step — Risk factor and monitored indexes selection <lb/> We have selected four major risks and built the following indexes table. <lb/>Table 2: Selected risks and associated indicators. <lb/>Stock (level): 100% EUROSTOXX50 <lb/>Risk-free rate (level): Euro swap curve (averaged level evolution) <lb/>Spread (sovereign): Average spread of French bond rates vs. the Euro swap rate <lb/>Spread (corporate): iTraxx Europe Generic 10Y Corporate <lb/>These four risks generally have a great impact on the NAV and SCR in the case <lb/>of savings products, even on a short monitoring period. Moreover, they are highly <lb/>volatile at the calibration date (31/12/12). In particular, the division of the spread risk <lb/>into two categories (sovereign and corporate) is absolutely necessary within the European <lb/>sovereign debt context. <lb/>A wide range of risks has been set aside from this study, which is only intended to be <lb/>a simple example. In practice both the stock and interest rates implicit volatility risks <lb/>are also relevant risks that can be added to the methodology&apos;s implementation with <lb/>no major issue. For the stock implicit volatility risk it is possible to monitor market <lb/>volatility indexes such as the VIX. Note that the interest rates implicit volatility risk <lb/>raises several questions related to the application of the risk in the instant economic <lb/>transitions, in the calibration scenarios. These issues can be set aside considering re-<lb/>calibration/regeneration approaches (see Devineau [9]) and will not be discussed in this <lb/>paper. <lb/> 4.1.3 Initialization step — Choice of the monitored marginal SCR <lb/> Considering the risk modules to update, we have chosen the most significant ones in <lb/>the Standard Formula aggregation process. These are also the least stable through time, <lb/> • the stock SCR, <lb/> • the interest rates SCR, <lb/> • the spread SCR, <lb/> • the liquidity SCR. <lb/> The lapse risk SCR, generally highly significant, has not been considered here. <lb/>Indeed with the very low rates as at 31/12/2012, the lapse risk SCR is close to zero. <lb/>Certain other significant SCR sub-modules such as the real estate SCR have been omit-<lb/>ted because of their low infra-year volatility. <lb/> 4.1.4 Proxies calibration and validation <lb/> The calibration of the various proxies is made through the same process as developed <lb/>in Vedani and Devineau [32]. The proxy is obtained by implementing a standard OLS <lb/>

			<page> 18 <lb/></page>

			Table 3: Market marginal SCR as at 31/12/2012. <lb/>IR SCR: 968 <lb/>Stock SCR: 3930 <lb/>Real Estate SCR: 943 <lb/>Spread SCR: 2658 <lb/>Liquidity SCR: 3928 <lb/>Concentration SCR: 661 <lb/>Currency SCR: 127 <lb/>... <lb/>methodology and the optimal regressors are selected through a stepwise approach. This <lb/>enables the process to be completely automated. The validation of each proxy is made <lb/>by considering ten out-of-sample scenarios. These are scenarios that have not been <lb/>used to calibrate the proxies but on which we have calculated shocked and central <lb/>outcomes of NAV_{0^+}. These &quot; true &quot; outcomes are then compared to the approximate <lb/>outcomes obtained from our proxies. <lb/>To select the out-of-sample scenarios we have chosen to define them as the 10 <lb/>scenarios that go step by step from the &quot; initial &quot; position to the &quot; worst case &quot; situation <lb/>(the calibrated worst case limit of the monitored risks). <lb/>For each risk factor ε^j, <lb/> • the &quot; initial &quot; position is ε^j_init = 0, <lb/> • the &quot; worst case &quot; situation 10 is ε^j_w.c. = q_{(1−α%)/2}((ε_{i/4})_{i∈[0;T[}) or q_{(1+α%)/2}((ε_{i/4})_{i∈[0;T[}), <lb/>depending on the direction of the worst case for each risk, <lb/> • the k-th (k ∈ [1; 9]) out-of-sample scenario is ε^j_{nb. k} = (k/10) ε^j_{w.c.} + ((10−k)/10) ε^j_{init}. <lb/>Below are shown the relative deviations, between the proxy outcomes and the <lb/>corresponding fully-calculated out-of-sample scenarios, obtained on the first five val-<lb/>idation scenarios. As one can see, the relative deviations are always close to 0 apart <lb/>from the illiquidity shocked NAV proxy. In practice this proxy is the most complex to <lb/>calibrate due to the high volatility of the illiquidity shocked NAV . To avoid this issue, <lb/>the user can add more calibration scenarios or select more potential regressors when <lb/>implementing the stepwise methodology. In our study we have chosen to validate our <lb/>proxy, while remaining critical of the underlying approximate illiquidity marginal SCR. <lb/>All the proxies being eventually calibrated and validated, it is now necessary to <lb/>rebuild the Standard Formula aggregation process in order to assess the approximate <lb/>overall SCR value. <lb/> 4.1.5 Proxies aggregation through the Standard Formula process <lb/> In practice the overall SCR is calculated as an aggregation of three quantities, the BSCR, <lb/> the operational SCR (SCRop) and the tax adjustments (Adj). <lb/> 
			
			<note place="footnote">10 = the 10  th out of sample scenario <lb/></note>

			<page> 19 <lb/></page>

			Table 4: Relative deviations proxies vs. full-calculation NAV (check on the first five <lb/>validation scenarios). <lb/>Validation scenarios: 1 / 2 / 3 / 4 / 5 <lb/>Central NAV : -0.07% / 1.65% / 1.56% / 1.05% / 0.29% <lb/>IR shocked NAV : -0.18% / 1.67% / 1.14% / 0.44% / -0.83% <lb/>&quot; Global &quot; Stock shocked NAV : 0.24% / 1.93% / 1.56% / 1.15% / 0.28% <lb/>&quot; Other &quot; Stock shocked NAV : 0.19% / 1.95% / 1.78% / 1.31% / 0.27% <lb/>Spread shocked NAV : 0.01% / 2.29% / 2.15% / 1.06% / 0.18% <lb/>Illiquidity shocked NAV : -5.35% / -3.27% / -2.43% / -3.03% / -2.39% <lb/>As far as the BSCR is concerned, no particular issue is raised by its calculation. <lb/>At each monitoring date, the selected marginal SCR are approximated using the prox-<lb/>ies and the other SCR are assumed frozen. The BSCR is simply obtained through a <lb/>Standard Formula aggregation (see for example Devineau and Loisel [11]). <lb/>To derive the operational SCR, we consider that this capital is also stable through <lb/>time, which is in practice an acceptable assumption for a half-yearly or quarterly mon-<lb/>itoring period (and consistent with the asset and liability portfolios stability assump-<lb/>tion). <lb/>The tax adjustments approximation raises the greatest issue. Indeed we need <lb/>to know the approximate Value of In-Force (V IF) at the monitoring date. We obtain <lb/>the approximate V IF as the approximate central NAV (NAV^{central proxy}) minus a fixed <lb/>amount calculated as the sum of the tier-one own funds (tier one OF) and of the sub-<lb/>ordinated debt (SD) minus the financial management fees (FMF), as at the calibration <lb/>date. Let t be the monitoring date and 0 be the proxies&apos; calibration date (t &gt; 0), <lb/>
$$\widehat{VIF}_t \approx NAV^{central\ proxy}_t - (tier\ one\ OF_0 + SD_0 - FMF_0).$$
<lb/>Assuming a frozen corporation tax rate of 34.43% (French corporation tax rate), <lb/>the approximate level of deferred tax liability is obtained as, <lb/>
$$\widehat{DTL}_t = 34.43\% \times \widehat{VIF}_t.$$
<lb/>Eventually, the income tax recovery associated with new business (ITR^{NB}) is as-<lb/>sumed frozen through the monitoring period and the approximate tax adjustments at <lb/>the monitoring date are obtained as, <lb/>
$$\widehat{Adj}_t = ITR^{NB}_0 + \widehat{DTL}_t.$$
<lb/>Knowing the approximate values of BSCR_t and Adj_t, and the initial value SCRop_0, <lb/>one can obtain the approximate overall SCR at the monitoring <lb/>date as, <lb/>
$$\widehat{SCR}_t = \widehat{BSCR}_t + SCRop_0 - \widehat{Adj}_t.$$
<lb/>Finally, in order to obtain the SR approximation, we compute the approximate <lb/>eligible own funds at date t as, <lb/>
			
			<page>20 <lb/></page>
			
$$\widehat{OF}_t = (tier\ one\ OF_0 + SD_0 - FMF_0) + \widehat{VIF}_t \times (1 - 34.43\%).$$
<lb/>Eventually, the approximate SR at the monitoring date is, <lb/>
$$\widehat{SR}_t = \frac{\widehat{OF}_t}{\widehat{SCR}_t}.$$
<lb/> 4.2 Practical use of the monitoring tool <lb/> In Subsection 4.2 we will first look at the issues raised by the practical monitoring of <lb/>continuous compliance through our tool, and at the tool&apos;s governance. In a second part we will <lb/>develop the other possible uses of the monitoring tool, especially in the area of risk <lb/>management and for the development of preventive measures. <lb/> 4.2.1 Monitoring the continuous compliance <lb/> At each monitoring date the process to assess the regulatory compliance is the same as <lb/>presented in Subsection 3.2.6. <lb/> • Assessment of the realized transition between 0 and t, ε̂. <lb/> • Derivation of the values of the optimal regressors set for each proxy. <lb/> • Calculation of the approximate central and shocked NAV levels at date t. <lb/> • Calculation of the levels of each approximate marginal SCR at date t (the other <lb/>marginal SCR are assumed frozen through the monitoring period). <lb/>This, with other stability assumptions such as stability of the tax rate and of the <lb/>tier-one own funds, enables one to reconstruct the Basic SCR, the operational SCR and <lb/>the tax adjustments and, eventually, to approximate the overall SCR and the SR at the <lb/>monitoring date. <lb/>Note that this process can be automated to provide a monitoring target such as the <lb/>one depicted below and a set of outputs such as the eligible own funds, the overall SCR, <lb/> the SR, but also the various marginal SCR (see Figure 6). <lb/> 4.2.2 Monitoring the daily evolution of the SR <lb/> In practice the ability to monitor the SR day by day is very interesting and provides a <lb/>good idea of the empirical volatility of such a ratio (see Figure 7). <lb/>In particular, in an ORSA framework it could be relevant to consider an artificially <lb/>smoothed SR, for example using a 2-week moving average, in order to depict a more <lb/>consistent solvency indicator. Considering the same data as presented in the previous <lb/>figure we would obtain the following two graphs (see Figure 8). <lb/>
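The aggregation chain of Subsections 4.1.5 and 4.2.1 can be condensed into a few lines. In the sketch below the numerical inputs are placeholders, the 34.43% tax rate is the one quoted above, and the frozen quantities (SCRop, ITR on new business, tier-one own funds, subordinated debt, financial management fees) are passed in at their calibration-date values.

```python
def approximate_sr(nav_central_proxy_t, bscr_t,
                   tier_one_of_0, sd_0, fmf_0, itr_nb_0, scr_op_0,
                   tax_rate=0.3443):
    """Approximate Solvency Ratio at a monitoring date t, following the
    aggregation described in Subsection 4.1.5."""
    base = tier_one_of_0 + sd_0 - fmf_0
    vif_t = nav_central_proxy_t - base          # approximate Value of In-Force
    dtl_t = tax_rate * vif_t                    # deferred tax liability
    adj_t = itr_nb_0 + dtl_t                    # tax adjustments
    scr_t = bscr_t + scr_op_0 - adj_t           # approximate overall SCR
    of_t = base + vif_t * (1.0 - tax_rate)      # approximate eligible own funds
    return of_t / scr_t

# Placeholder inputs, all in the same currency unit.
print(approximate_sr(nav_central_proxy_t=12000.0, bscr_t=6000.0,
                     tier_one_of_0=8000.0, sd_0=1000.0, fmf_0=500.0,
                     itr_nb_0=200.0, scr_op_0=400.0))
```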

			<page>21 <lb/></page>

			Figure 6: Target used to monitor the evolution of the risk factors.
			Figure 7: Monitoring of the approximate SR and of the four underlying risk factors, from 30/06/12 to 30/06/13.
			4.2.3 Monitoring tool governance
			Several assumptions are made to provide the approximate SR, but in practice we observe a good replication of the risk impacts and of the SR variations. However, this monitoring tool only provides a proxy; its results must therefore be used with caution and its governance must be managed very carefully.

			<page>22 <lb/></page>

			Figure 8: Comparison of the standard approximate SR and of a smoothed approximate SR (monitoring from 30/06/12 to 30/06/13).
			The governance of the tool can be divided into three parts.
			• Firstly, it is necessary to define a priori the recalibration frequency. The monitoring period associated with each full calibration of the tool should not be too long; the authors believe it should not exceed half a year.
			• Secondly, it is important to identify clearly the data to update at each recalibration. These data especially cover the asset and liability data.
			• Finally, the user must define the conditions leading to a full (unplanned) recalibration of the tool. In particular, these conditions must include updates following management decisions (changes of the financial strategy embedded in the model, asset-mix changes, ...) and updates triggered by the evolution of the economic situation.
			4.2.4 Alternative uses of the tool
			This monitoring tool enables the risk managers to run a certain number of studies, even at the beginning of the monitoring period, for example in order to anticipate the impact of future risk deviations.
			Sensitivity study and stress testing. The parametric proxy that replicates the central NAV can also be used to stress the marginal and joint sensitivities of the NAV to the various risks embedded in our proxies. Even more interesting for the risk managers, it is possible to run a complete sensitivity study directly on the SR of the company, which is very difficult to compute without an approximation tool (see Figures 9 and 10).
			This sensitivity analysis requires no calculations beyond the proxies' assessment and enables the risk managers to compute as many "approximate" stress tests as needed. In practice such a use of the tool gives better insight into the impact of each risk, taken either individually or jointly, on the SR, as illustrated by the sketch below.
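			As an illustration, the sketch below evaluates 1D and 2D SR sensitivities on grids of risk-factor deviations. The polynomial proxy approx_sr and its coefficients are hypothetical placeholders; only the evaluation pattern (evaluate the calibrated proxy on deviated risk factors, at no additional simulation cost) reflects the approach described above. In practice approx_sr would chain the NAV proxies and the SR reconstruction sketched earlier.

import numpy as np

# Hypothetical polynomial proxy for the SR as a function of two risk-factor
# deviations (stock, interest rates).
def approx_sr(eps_stock: float, eps_ir: float) -> float:
    return (1.60 - 0.90 * eps_stock - 0.40 * eps_ir
            - 0.50 * eps_stock**2 + 0.30 * eps_stock * eps_ir)

# 1D sensitivity: SR as a function of the stock deviation only (cf. Figure 9).
stock_grid = np.linspace(-0.30, 0.30, 13)
sr_1d = [approx_sr(e, 0.0) for e in stock_grid]

# 2D sensitivity: SR over a joint (stock, interest-rate) grid (cf. Figure 10).
ir_grid = np.linspace(-0.02, 0.02, 9)
sr_2d = np.array([[approx_sr(e_s, e_r) for e_r in ir_grid] for e_s in stock_grid])

for e, sr in zip(stock_grid, sr_1d):
    print(f"stock deviation {e:+.0%} -> approximate SR {sr:.0%}")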

			<page> 23 <lb/></page>

			Figure 9: 1D solvency ratio sensitivities.
			Figure 10: 2D solvency ratio sensitivities.
			Monitoring the marginal impacts of the risks and market anticipations. Using our monitoring tool it is possible to trace the evolution of the SR risk after risk (only for the monitored risks). Figures 11 and 12 correspond to a fictitious evolution of the risks implemented between the calibration date and a "virtual" monitoring date. Such a study can be run at each monitoring date, or on fictitious scenarios (e.g. market anticipations), in order to provide better insight into the SR movements through time.
			Concerning market anticipations, if a risk manager anticipates a rise or a fall of the stocks, interest rates or spreads, he can directly, through our tool, evaluate the corresponding impact on the undertaking's SR.

			<page>24 <lb/></page>

			Figure 11: Monitoring target after the fictitious evolution of the monitored risks.
			Figure 12: Marginal impact of the risks on the SR between the calibration date and a virtual monitoring date.
			In particular, such a study can be relevant to propose quantitative preventive measures. A short sketch of the underlying risk-after-risk decomposition is given below.
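			The risk-after-risk decomposition of Figure 12 can be sketched as follows: the observed or anticipated deviations are applied to the SR proxy one risk at a time, and the marginal impact of each risk is read as the successive difference. The deviation values and the approx_sr proxy below are the hypothetical placeholders of the previous sketch.

# Fictitious deviations observed (or anticipated) between the calibration
# date and the monitoring date; values are hypothetical placeholders.
deviations = {"stock": -0.15, "interest_rates": +0.01}

def approx_sr(eps):
    # Same hypothetical polynomial SR proxy as in the previous sketch,
    # taking a dict of risk-factor deviations.
    s, r = eps.get("stock", 0.0), eps.get("interest_rates", 0.0)
    return 1.60 - 0.90 * s - 0.40 * r - 0.50 * s**2 + 0.30 * s * r

applied, previous = {}, approx_sr({})
for risk, value in deviations.items():
    applied[risk] = value                     # add one more risk deviation
    current = approx_sr(applied)
    print(f"marginal impact of {risk} on the SR: {current - previous:+.1%}")
    previous = current

			As for any waterfall decomposition, the marginal impacts depend on the order in which the risks are applied; in practice a fixed, documented order should be used.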

			<page>25 <lb/></page>

			In practice it also seems possible for the user to add asset-mix weights to the monitored risk factors set. This would enable the user to test a priori asset-mix rebalancing possibilities, in order to select specific preventive measures and to guard against the worst market anticipations. This has not been implemented yet but will be part of the major future developments of the monitoring tool.
			5 Empirical comparison between LSMC and CF
			In Section 5 we challenge the generally agreed idea that the LSMC methodology is more robust than CF in a large dimension context. We also consider the various possibilities to build asymptotic confidence intervals for the NAV estimators obtained with our polynomial proxies.
			5.1 Mathematical formalism – Probabilistic vs. statistical framework
			Throughout Subsection 5.1, we describe a general polynomial proxy framework, calibrated at a given date $t$ ($t \ge 1$ as in Vedani and Devineau [32] for example, but also $t = 0^{+}$, which is the case in the framework developed and implemented in Sections 3 and 4). We then distinguish two regressions (LSMC and CF) that can be apprehended in either a probabilistic or a statistical framework.
			In a probabilistic framework, we denote by $NAV_t$ the Net Asset Value at date $t$, by $NPV_t$ the Net Present Value of margins variable seen at date $t$ and by $\widehat{NAV}_t$ the estimated variable considered in the CF regression. These variables are $\mathbb{P} \otimes \mathbb{Q}_t$-measurable, denoting by $\mathbb{P} \otimes \mathbb{Q}_t$ the probability measure introduced in Section 4 of Vedani and Devineau [32]. The indexation by $\mathbb{P} \otimes \mathbb{Q}_t$ will be omitted for the sake of simplicity. Finally, we denote by $\mathcal{F}^{RW}_t$ the filtration that characterizes the real-world economic information contained between 0 and $t$, by $R_u$ the profit realized between dates $u-1$ and $u \ge 1$, and by $\delta_u$ the discount factor at date $u$.
			We have
			$$NPV_t = \sum_{u=t+1}^{t+H} \frac{\delta_u}{\delta_t} R_u,$$
			and, denoting by $NPV^1_t, \ldots, NPV^P_t$ $P$ random variables that are i.i.d. conditionally on $\mathcal{F}^{RW}_t$ and follow the same probability distribution as $NPV_t$ conditionally on $\mathcal{F}^{RW}_t$,
			$$\widehat{NAV}_t = \frac{1}{P} \sum_{p=1}^{P} NPV^p_t,$$
			for a chosen number $P$ of secondary scenarios. Moreover,
			$$NAV_t = E\left[\widehat{NAV}_t \,\middle|\, \mathcal{F}^{RW}_t\right] = E\left[NPV_t \,\middle|\, \mathcal{F}^{RW}_t\right].$$
			Denoting by $x_t$ the chosen regressors random vector (intercept included), by ${}^{1}\beta$ (resp. ${}^{2}\beta$) the true value of the CF (resp. LSMC) regression parameters, and by ${}^{1}u_t$ (resp. ${}^{2}u_t$) the residual of the CF (resp. LSMC) regression, both considered regressions can be written as follows.

			<page>26 <lb/></page>

			$$(\text{regression } 1 - CF)\quad \widehat{NAV}_t = x_t\,{}^{1}\beta + {}^{1}u_t \quad \text{under the assumption } E\left[\widehat{NAV}_t \,\middle|\, x_t\right] = x_t\,{}^{1}\beta,$$
			$$(\text{regression } 2 - LSMC)\quad NPV_t = x_t\,{}^{2}\beta + {}^{2}u_t \quad \text{under the assumption } E\left[NPV_t \,\middle|\, x_t\right] = x_t\,{}^{2}\beta.$$
			Note that this probabilistic framework is the one chosen in Monfort [25]. Moreover, as seen in Vedani and Devineau [32] and Kalberer [20], we have ${}^{1}\beta = {}^{2}\beta$ (denoted by $\beta$ in the rest of this paper).
			In a statistical framework one first considers the samples used for the model calibration. Let ${}^{1}N$ (resp. ${}^{2}N$) be the length of the calibration sample used in the CF (resp. LSMC) regression, $({}^{1}x^n_t)_{n \in \llbracket 1; {}^{1}N \rrbracket}$ (resp. $({}^{2}x^n_t)_{n \in \llbracket 1; {}^{2}N \rrbracket}$) the $x_t$ outcomes, $(\widehat{NAV}^n_t)_{n \in \llbracket 1; {}^{1}N \rrbracket}$ (resp. $(NPV^n_t)_{n \in \llbracket 1; {}^{2}N \rrbracket}$) the associated $\widehat{NAV}_t$ (resp. $NPV_t$) outcomes$^{11}$ and $({}^{1}u^n_t)_{n \in \llbracket 1; {}^{1}N \rrbracket}$ (resp. $({}^{2}u^n_t)_{n \in \llbracket 1; {}^{2}N \rrbracket}$) the associated residuals. Note that, in order to compare the relative efficiency of both approaches, we obviously have to consider an equal algorithmic complexity for the two approaches, which means ${}^{2}N = {}^{1}N \times P$.
			In a statistical matrix framework we have
			$$(\text{regression } 1 - CF)\quad {}^{1}Y_t = {}^{1}X_t\,{}^{1}\beta + {}^{1}U_t \quad \text{under the assumption } E\left[{}^{1}Y_t \,\middle|\, {}^{1}X_t\right] = {}^{1}X_t\,{}^{1}\beta,$$
			$$(\text{regression } 2 - LSMC)\quad {}^{2}Y_t = {}^{2}X_t\,{}^{2}\beta + {}^{2}U_t \quad \text{under the assumption } E\left[{}^{2}Y_t \,\middle|\, {}^{2}X_t\right] = {}^{2}X_t\,{}^{2}\beta,$$
			denoting
			$${}^{1}Y_t = \begin{pmatrix} \widehat{NAV}^{1}_t \\ \vdots \\ \widehat{NAV}^{{}^{1}N}_t \end{pmatrix} \text{ and } {}^{2}Y_t = \begin{pmatrix} NPV^{1}_t \\ \vdots \\ NPV^{{}^{2}N}_t \end{pmatrix}, \quad {}^{1}X_t = \begin{pmatrix} {}^{1}x^{1}_t \\ \vdots \\ {}^{1}x^{{}^{1}N}_t \end{pmatrix} \text{ and } {}^{2}X_t = \begin{pmatrix} {}^{2}x^{1}_t \\ \vdots \\ {}^{2}x^{{}^{2}N}_t \end{pmatrix}, \quad {}^{1}U_t = \begin{pmatrix} {}^{1}u^{1}_t \\ \vdots \\ {}^{1}u^{{}^{1}N}_t \end{pmatrix} \text{ and } {}^{2}U_t = \begin{pmatrix} {}^{2}u^{1}_t \\ \vdots \\ {}^{2}u^{{}^{2}N}_t \end{pmatrix}.$$
			For example, this statistical framework is the one developed in Crépon and Jacquemet [8].
			In the remainder of the study, and for the sake of simplicity, the time index will be omitted.
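			The following sketch illustrates the two calibrations at equal algorithmic complexity (${}^{2}N = {}^{1}N \times P$) on simulated data: the LSMC regression is fitted on ${}^{2}N$ noisy NPV outcomes, while the CF regression is fitted on ${}^{1}N$ NAV estimates, each standing for an average over $P$ secondary outcomes. The data-generating process, the dimensions and the shortcut of emulating the averaging by dividing the noise by the square root of $P$ are assumptions of the sketch; only the structure of the two regressions follows the text.

import numpy as np

rng = np.random.default_rng(42)
N1, P = 100, 500                 # CF: 100 primary scenarios x 500 secondary
N2 = N1 * P                      # LSMC: same total budget, 50 000 outcomes

def design(eps):
    """Toy polynomial regressors (intercept, eps, eps^2) for one risk factor."""
    return np.column_stack([np.ones_like(eps), eps, eps**2])

beta_true = np.array([100.0, -40.0, -15.0])   # hypothetical 'true' NAV shape
noise_sd = 60.0                               # sampling noise of one NPV outcome

# LSMC calibration: one noisy NPV per primary scenario, N2 scenarios.
eps2 = rng.uniform(-0.3, 0.3, N2)
npv = design(eps2) @ beta_true + noise_sd * rng.standard_normal(N2)
beta_lsmc, *_ = np.linalg.lstsq(design(eps2), npv, rcond=None)

# CF calibration: N1 scenarios, each NAV estimated as the mean of P NPVs,
# so its residual standard deviation is roughly noise_sd / sqrt(P).
eps1 = rng.uniform(-0.3, 0.3, N1)
nav_hat = (design(eps1) @ beta_true
           + noise_sd / np.sqrt(P) * rng.standard_normal(N1))
beta_cf, *_ = np.linalg.lstsq(design(eps1), nav_hat, rcond=None)

print("true beta :", beta_true)
print("LSMC beta :", np.round(beta_lsmc, 2))
print("CF   beta :", np.round(beta_cf, 2))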
			
			<note place="footnote">11 For a given value of $P$ used in the simulation of the $(\widehat{NAV}^n_t)_{n \in \llbracket 1; {}^{1}N \rrbracket}$ sample.</note>

			<page> 27 <lb/></page>

			5.2 Comparison tools in a finite sample framework
			In Subsection 5.2 we determine comparison elements to challenge the relative efficiency of the CF and LSMC estimators, based on standard finite sample econometric results. We will see below that, in the general case, it is necessary to study first the properties of the residuals' covariance matrices. Note that we now consider the regressions from a statistical point of view, more representative of the finite sample framework, and that two assumptions are made, which are verified in practice.
			$H_0$: the $(\widehat{NAV}^{i}, {}^{1}x^{i})$ (resp. $(NPV^{i}, {}^{2}x^{i})$) outcomes are i.i.d.
			$H_1$: the matrix ${}^{1}X^{\top}{}^{1}X$ (resp. ${}^{2}X^{\top}{}^{2}X$) is invertible.
			Under these assumptions, the OLS parameter estimators are respectively
			$${}^{1}\hat{\beta} = \left({}^{1}X^{\top}{}^{1}X\right)^{-1}{}^{1}X^{\top}\,{}^{1}Y \quad \text{and} \quad {}^{2}\hat{\beta} = \left({}^{2}X^{\top}{}^{2}X\right)^{-1}{}^{2}X^{\top}\,{}^{2}Y.$$
			These two estimators are consistent and unbiased.
			In the following subsections we introduce two comparison tools for the LSMC and CF methodologies in a finite sample framework: estimators of the parameters' covariance matrices and asymptotic confidence intervals.
			As far as the estimated covariance matrices are concerned, it is complicated to use them to compare models except when the eigenvalues of one matrix are all smaller than the eigenvalues of the other one. In this case the partial order on symmetric positive semidefinite matrices (the Loewner order) tells us that the methodology leading to the first matrix is the better one (see Horn and Johnson [18] for more insight). However, this seldom happens in practice.
			As far as the asymptotic confidence intervals are concerned, we are able to compare the lengths of these intervals, obtained on the same set of primary scenarios, for the $\hat{\beta}$ estimated with the two different methodologies. If one methodology leads to smaller lengths than the other, it is the better one. This is the approach we use in our empirical tests.
			5.2.1 Estimators' covariance matrices under a homoskedasticity assumption
			In Subsection 5.2.1 we add a homoskedasticity assumption on the residuals of both models,
			$$H_2:\quad V\left({}^{1}U \,\middle|\, {}^{1}X\right) = {}^{1}\sigma^{2} \cdot I_{{}^{1}N} \quad \text{and} \quad V\left({}^{2}U \,\middle|\, {}^{2}X\right) = {}^{2}\sigma^{2} \cdot I_{{}^{2}N},$$
			denoting by $I_N$ the identity matrix of rank $N$.
			$H_2$ can be operationally tested using a homoskedasticity test such as the Breusch and Pagan [3], the White [34] or the Goldfeld and Quandt [14] test. This assumption makes the Gauss-Markov theorem applicable (see Plackett [29]) and the OLS estimators are then the Best Linear Unbiased Estimators (BLUE). This means that, considering the same calibration samples, it is impossible to find less volatile linear unbiased estimators than the OLS ones. Under this assumption it is also easy to assess the estimators' covariance

			<page>28 <lb/></page>

			matrices, conditionally on the explanatory variables,
			$$V\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right) = \left({}^{1}X^{\top}{}^{1}X\right)^{-1}{}^{1}X^{\top}\, V\left({}^{1}Y \,\middle|\, {}^{1}X\right)\, {}^{1}X \left({}^{1}X^{\top}{}^{1}X\right)^{-1} = \left({}^{1}X^{\top}{}^{1}X\right)^{-1}{}^{1}X^{\top}\, V\left({}^{1}U \,\middle|\, {}^{1}X\right)\, {}^{1}X \left({}^{1}X^{\top}{}^{1}X\right)^{-1} = {}^{1}\sigma^{2} \left({}^{1}X^{\top}{}^{1}X\right)^{-1},$$
			and, similarly, $V\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right) = {}^{2}\sigma^{2} \left({}^{2}X^{\top}{}^{2}X\right)^{-1}$.
			Moreover, we can express consistent and unbiased estimators of ${}^{1}\sigma^{2}$ and ${}^{2}\sigma^{2}$. Let $K+1$ be the dimension of $x$; these estimators are respectively
			$${}^{1}\hat{\sigma}^{2} = \frac{1}{{}^{1}N-K-1} \sum_{n=1}^{{}^{1}N} {}^{1}\hat{u}_{n}^{2} \quad \text{and} \quad {}^{2}\hat{\sigma}^{2} = \frac{1}{{}^{2}N-K-1} \sum_{n=1}^{{}^{2}N} {}^{2}\hat{u}_{n}^{2},$$
			with ${}^{1}\hat{u}_{n} = \widehat{NAV}^{n} - {}^{1}x^{n}\,{}^{1}\hat{\beta}$ and ${}^{2}\hat{u}_{n} = NPV^{n} - {}^{2}x^{n}\,{}^{2}\hat{\beta}$ the empirical residuals of regressions 1 and 2.
			We therefore get two unbiased estimators of the previously given conditional covariance matrices,
			$$\hat{V}\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right) = {}^{1}\hat{\sigma}^{2} \left({}^{1}X^{\top}{}^{1}X\right)^{-1} \quad \text{and} \quad \hat{V}\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right) = {}^{2}\hat{\sigma}^{2} \left({}^{2}X^{\top}{}^{2}X\right)^{-1}.$$
			Eventually we have the two following convergences in distribution,
			$$V\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right)^{-\frac{1}{2}} \left({}^{1}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right) \quad \text{and} \quad V\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right)^{-\frac{1}{2}} \left({}^{2}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right).$$
			Moreover, using the previously given estimators and adding simple assumptions on the first moments of the regressors (generally verified in practice), we have
			$$\hat{V}\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right)^{-\frac{1}{2}} \left({}^{1}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right) \quad \text{and} \quad \hat{V}\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right)^{-\frac{1}{2}} \left({}^{2}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right).$$
			5.2.2 Comparison between the estimators' covariance matrices without the homoskedasticity assumption
			In practice it is unusual to observe homoskedastic residuals. We now drop $H_2$ in order to consider a more robust framework. Note first that, in the heteroskedastic case, the OLS estimators are no longer the BLUE.

			<page> 29 <lb/></page>

			Moreover, in this new framework we no longer have a simple form for the estimators' covariance matrices,
			$$V\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right) = \left({}^{1}X^{\top}{}^{1}X\right)^{-1}{}^{1}X^{\top}\, V\left({}^{1}U \,\middle|\, {}^{1}X\right)\, {}^{1}X \left({}^{1}X^{\top}{}^{1}X\right)^{-1} \quad \text{and} \quad V\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right) = \left({}^{2}X^{\top}{}^{2}X\right)^{-1}{}^{2}X^{\top}\, V\left({}^{2}U \,\middle|\, {}^{2}X\right)\, {}^{2}X \left({}^{2}X^{\top}{}^{2}X\right)^{-1}.$$
			However, we still have the two following convergences in distribution,
			$$V\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right)^{-\frac{1}{2}} \left({}^{1}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right) \quad \text{and} \quad V\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right)^{-\frac{1}{2}} \left({}^{2}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right).$$
			White [34] proposes the use of a biased estimator of the residuals' covariance matrix in the case of independent calibration samples. In our case, it amounts to resorting to the following estimators,
			$$\hat{V}\left({}^{1}U \,\middle|\, {}^{1}X\right) = \operatorname{diag}\left({}^{1}\hat{u}_{1}^{2}, \ldots, {}^{1}\hat{u}_{{}^{1}N}^{2}\right) \quad \text{and} \quad \hat{V}\left({}^{2}U \,\middle|\, {}^{2}X\right) = \operatorname{diag}\left({}^{2}\hat{u}_{1}^{2}, \ldots, {}^{2}\hat{u}_{{}^{2}N}^{2}\right).$$
			Note that other, less biased estimators are proposed in MacKinnon and White [24]. These adapted estimators are less used in practice and will not be considered in this paper. This enables one to assess two biased but consistent estimators of the covariance matrices of the OLS estimators,
			$$\hat{V}_{White}\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right) = \left({}^{1}X^{\top}{}^{1}X\right)^{-1} \left(\sum_{n=1}^{{}^{1}N} {}^{1}\hat{u}_{n}^{2}\; {}^{1}x^{n\top}\, {}^{1}x^{n}\right) \left({}^{1}X^{\top}{}^{1}X\right)^{-1} \quad \text{and} \quad \hat{V}_{White}\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right) = \left({}^{2}X^{\top}{}^{2}X\right)^{-1} \left(\sum_{n=1}^{{}^{2}N} {}^{2}\hat{u}_{n}^{2}\; {}^{2}x^{n\top}\, {}^{2}x^{n}\right) \left({}^{2}X^{\top}{}^{2}X\right)^{-1}.$$
			Moreover, under simple assumptions concerning the first moments of the regressors (generally verified in practice), these estimators enable one to obtain the following convergences in distribution,
			$$\hat{V}_{White}\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right)^{-\frac{1}{2}} \left({}^{1}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right) \quad \text{and} \quad \hat{V}_{White}\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right)^{-\frac{1}{2}} \left({}^{2}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right).$$
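			As a sketch of how these quantities are used in the empirical comparison below, the following code computes, for one simulated regression, the homoskedastic and the White covariance estimators of the OLS parameters and the corresponding 95% asymptotic confidence-interval half-lengths at a given regressors' outcome. The data-generating process and the evaluation point are assumptions of the sketch; the formulas are the ones above.

import numpy as np

rng = np.random.default_rng(1)
N, K = 50_000, 2                 # sample size, number of regressors (intercept excluded)
eps = rng.uniform(-0.3, 0.3, N)
X = np.column_stack([np.ones(N), eps, eps**2])            # (N, K+1) design matrix
beta = np.array([100.0, -40.0, -15.0])
u = (1.0 + 5.0 * np.abs(eps)) * rng.standard_normal(N)    # heteroskedastic residuals
y = X @ beta + u

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
u_hat = y - X @ beta_hat

# Homoskedastic covariance estimator: sigma_hat^2 (X'X)^-1.
sigma2_hat = u_hat @ u_hat / (N - K - 1)
V_homo = sigma2_hat * XtX_inv

# White estimator: (X'X)^-1 (sum_n u_n^2 x_n' x_n) (X'X)^-1.
meat = (X * u_hat[:, None] ** 2).T @ X
V_white = XtX_inv @ meat @ XtX_inv

# 95% asymptotic confidence-interval half-lengths for x_bar beta.
x_bar = np.array([1.0, 0.10, 0.01])
q = 1.959964                     # 97.5% quantile of the standard Gaussian
half_homo = q * np.sqrt(x_bar @ V_homo @ x_bar)
half_white = q * np.sqrt(x_bar @ V_white @ x_bar)
print(f"CI half-length, homoskedastic formula: {half_homo:.3f}")
print(f"CI half-length, White formula:         {half_white:.3f}")

			Comparing such half-lengths at many regressors' outcomes, for the CF and LSMC calibrations, is the comparison performed in Subsection 5.4.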
			
			<page>30 <lb/> </page>
			
			To conclude on the heteroskedastic framework, it is important to note that these covariance matrix estimators are biased and sometimes very volatile. Still, the heteroskedastic framework is more general and robust than the homoskedastic one, and it is generally better adapted to our proxy methodologies.
			5.3 Asymptotic confidence intervals
			In practice, the lengths of the asymptotic confidence intervals derived from the estimators of the covariance matrices are good comparison tools, available in both the homoskedastic and heteroskedastic frameworks. This subsection describes the construction of these intervals.
			5.3.1 Asymptotic confidence intervals under the homoskedasticity assumption
			If $H_2$ is assumed, an asymptotic confidence interval for the approximate NAV can be obtained using the following convergences in distribution,
			$$\hat{V}\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right)^{-\frac{1}{2}} \left({}^{1}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right) \quad \text{and} \quad \hat{V}\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right)^{-\frac{1}{2}} \left({}^{2}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right).$$
			For the CF regression, the $\alpha\%$ (with $\alpha\%$ close to 1) asymptotic confidence interval obtained from this result, for $\bar{x}$ a given regressors' outcome, is
			$${}^{1}IC^{{}^{1}N}_{\alpha\%}(\bar{x}\beta) = \left[\bar{x}\,{}^{1}\hat{\beta} \pm q_{\frac{1+\alpha\%}{2}} \sqrt{{}^{1}\hat{\sigma}^{2}\; \bar{x}\left({}^{1}X^{\top}{}^{1}X\right)^{-1}\bar{x}^{\top}}\right],$$
			and for the LSMC regression,
			$${}^{2}IC^{{}^{2}N}_{\alpha\%}(\bar{x}\beta) = \left[\bar{x}\,{}^{2}\hat{\beta} \pm q_{\frac{1+\alpha\%}{2}} \sqrt{{}^{2}\hat{\sigma}^{2}\; \bar{x}\left({}^{2}X^{\top}{}^{2}X\right)^{-1}\bar{x}^{\top}}\right],$$
			denoting by $q_{\frac{1+\alpha\%}{2}}$ the $\frac{1+\alpha\%}{2}$ quantile of a standard Gaussian distribution.
			5.3.2 Asymptotic confidence intervals without the homoskedasticity assumption
			Without the homoskedasticity assumption it is also possible to build asymptotic confidence intervals, based on the properties of White's estimator [34]. Indeed, this estimator enables one to assess the following convergences in distribution,
			$$\hat{V}_{White}\left({}^{1}\hat{\beta} \,\middle|\, {}^{1}X\right)^{-\frac{1}{2}} \left({}^{1}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right) \quad \text{and} \quad \hat{V}_{White}\left({}^{2}\hat{\beta} \,\middle|\, {}^{2}X\right)^{-\frac{1}{2}} \left({}^{2}\hat{\beta} - \beta\right) \xrightarrow{d} \mathcal{N}\left(0, I_{K+1}\right).$$
			
			<page>31 <lb/> </page>
			
			For the CF regression, the $\alpha\%$ (with $\alpha\%$ close to 1) asymptotic confidence interval obtained from this result, for $\bar{x}$ a given regressors' outcome, is
			$${}^{1}IC^{{}^{1}N}_{\alpha\%}(\bar{x}\beta) = \left[\bar{x}\,{}^{1}\hat{\beta} \pm q_{\frac{1+\alpha\%}{2}} \sqrt{\bar{x}\left({}^{1}X^{\top}{}^{1}X\right)^{-1} \left(\sum_{n=1}^{{}^{1}N} {}^{1}\hat{u}_{n}^{2}\; {}^{1}x^{n\top}\,{}^{1}x^{n}\right) \left({}^{1}X^{\top}{}^{1}X\right)^{-1}\bar{x}^{\top}}\right],$$
			and for the LSMC regression,
			$${}^{2}IC^{{}^{2}N}_{\alpha\%}(\bar{x}\beta) = \left[\bar{x}\,{}^{2}\hat{\beta} \pm q_{\frac{1+\alpha\%}{2}} \sqrt{\bar{x}\left({}^{2}X^{\top}{}^{2}X\right)^{-1} \left(\sum_{n=1}^{{}^{2}N} {}^{2}\hat{u}_{n}^{2}\; {}^{2}x^{n\top}\,{}^{2}x^{n}\right) \left({}^{2}X^{\top}{}^{2}X\right)^{-1}\bar{x}^{\top}}\right].$$
			Consider now a set of $N$ independent outcomes following the same distribution as $x$, $(\bar{x}^{i})_{i \in \llbracket 1;N \rrbracket}$. It is possible to calculate the lengths of the asymptotic confidence intervals built for both CF and LSMC, and to compare these values to assess which estimator is more efficient in practice (this will be used to see what happens when the number of risk factors increases).
			In the following subsection we test empirically the results presented above in the case $t = 0^{+}$ (continuous compliance framework).
			5.4 Empirical tests
			5.4.1 Implementation framework
			The implementation framework used in this section is the same as the one presented in Section 4.
			The LSMC approach has been run on a sample of 50 000 independent $NPV_{0^{+}}$ outcomes. To equalize the algorithmic complexity of the two procedures, the CF approach has been run on a sample of 100 independent $\widehat{NAV}_{0^{+}}$ outcomes, each calculated as the mean of 500 $NPV_{0^{+}}$ (100 primary scenarios × 500 secondary scenarios).
			To rely on a more statistically efficient implementation framework (due to the larger number of outcomes), we have chosen the LSMC methodology as the basis to assess the optimal set of regressors.
			For each given number of risk factors $J$ ($J = 1, \ldots, 4$), the regressors have been selected using a backward stepwise approach based on the AIC stopping criterion and on an initialization set of potential regressors $\left\{{}^{i}\varepsilon^{k} \cdot {}^{j}\varepsilon^{l},\ \forall i,j \in \llbracket 1;J \rrbracket,\ \forall k,l \in \mathbb{N} \mid k+l \le 3\right\}$, denoting by ${}^{i}\varepsilon^{k}$ the $i$-th risk factor raised to the power $k$.
			The implementation steps are the same for each value of $J$:
			• assessment of the LSMC optimal set of regressors ${}^{J}x$ and of the OLS estimator ${}^{J}\hat{\beta}_{LSMC}$,
			• use of the ${}^{J}x$ set of regressors to obtain the associated CF OLS estimator ${}^{J}\hat{\beta}_{CF}$,

			<page>32 <lb/></page>

			• implementation of a Breusch-Pagan homoskedasticity test on the LSMC methodology (there are too few outcomes to use a Breusch-Pagan test on the CF approach),
			• comparison of the confidence interval lengths obtained on the sample of 50 000 primary scenarios used to implement the LSMC approach.
			5.4.2 Heteroskedasticity test
			In this study the heteroskedasticity has been tested using a Breusch-Pagan test. The following results have been obtained for the LSMC models calibrated on one, two, three and four risk factors.
			Table 5: Breusch-Pagan tests – LSMC data.
			LSMC methodology            1 risk factor   2 risk factors   3 risk factors   4 risk factors
			Breusch-Pagan statistic          25.0            41.6             50.7             76.0
			Breusch-Pagan p-value          5.8e-07         7.3e-08          2.2e-06          4.5e-06
			The homoskedasticity assumption is rejected by every test, even at a 1% significance level. Note that there are too few CF implementation data (100 outcomes per number of risk factors) to run robust homoskedasticity tests.
			In the following subsections we study the results obtained with both the LSMC and CF methodologies (1/2/3/4 risk factors), using both the homoskedastic and the heteroskedastic formulas.
			5.4.3 Results in the homoskedastic framework
			Moving from a homoskedastic to a heteroskedastic framework yields more robust results, and the heteroskedastic scheme seems better adapted to our study. However, the homoskedastic formulas provide interesting results that can be compared with those obtained using the heteroskedastic formulas in order to conclude this empirical section. The comparison of the homoskedastic parameters covariance matrix estimators provides the following results.
			One risk factor framework. Only two significant regressors have been selected after implementing a backward stepwise methodology with an AIC stopping criterion.
			Table 6: LSMC and CF covariance matrix eigenvalues – 1 risk factor (stock).
			1 risk factor        λ1          λ2
			LSMC              5.21e+15    2.69e+14
			CF                5.80e+15    2.66e+14
			Table 7 below compares the asymptotic confidence intervals' lengths on the 50 000 primary scenarios used in the LSMC implementation. On average (over the 50 000 scenarios), the LSMC methodology leads to a slightly smaller asymptotic confidence interval than the CF; it leads to a smaller interval in 77.9% of the 50 000 independent scenarios considered here.

			<page>33 <lb/></page>

			Table 7: Asymptotic confidence interval lengths – 1 risk factor (stock).
			Number of smaller asymptotic confidence intervals (max = 50 000 scenarios): LSMC 38 956 (77.9%), CF 11 044 (22.1%).
			Two risk factors framework. Six significant regressors are selected after implementing a backward stepwise methodology with an AIC stopping criterion.
			Table 8: LSMC and CF covariance matrix eigenvalues – 2 risk factors (stock, interest rates).
			2 risk factors      λ1          λ2          λ3          λ4          λ5          λ6
			LSMC             3.64e+18    6.02e+16    2.27e+16    4.77e+15    1.88e+15    2.08e+14
			CF               3.64e+15    6.87e+14    2.18e+16    5.00e+15    1.87e+15    1.98e+14
			Table 9 below compares the asymptotic confidence intervals' lengths on the 50 000 primary scenarios used in the LSMC implementation.
			Table 9: Asymptotic confidence interval lengths – 2 risk factors (stock, interest rates).
			Number of smaller asymptotic confidence intervals (max = 50 000 scenarios): LSMC 22 228 (44.5%), CF 27 772 (55.5%).
			Three risk factors framework. Fourteen significant regressors are selected after implementing a backward stepwise methodology with an AIC stopping criterion.
			Table 10 below compares the asymptotic confidence intervals' lengths on the 50 000 primary scenarios used in the LSMC implementation.$^{12}$
			Table 10: Asymptotic confidence interval lengths – 3 risk factors (stock, IR, corporate spread).
			Number of smaller asymptotic confidence intervals (max = 50 000 scenarios): LSMC 45 398 (90.8%), CF 4 602 (9.2%).
			Four risk factors framework. Thirty significant regressors are selected after implementing a backward stepwise methodology with an AIC stopping criterion.
			Table 11 below compares the asymptotic confidence intervals' lengths on the 50 000 primary scenarios used in the LSMC implementation.
			
			<note place="footnote">12 The eigenvalues of the covariance matrix estimators, only presented as illustrations for the 1 and 2 risk factors frameworks, are omitted, for the sake of simplicity, in the cases studied below.</note>

			<page> 34 <lb/></page>

			Table 11: Asymptotic confidence interval lengths – 4 risk factors (stock, IR, corporate spread, sovereign spread).
			Number of smaller asymptotic confidence intervals (max = 50 000 scenarios): LSMC 42 044 (84.1%), CF 7 956 (15.9%).
			We now present the same results obtained without the homoskedasticity assumption.
			5.4.4 Results in the heteroskedastic framework
			Comparison of the heteroskedastic parameters covariance matrix estimators.
			Table 12: Asymptotic confidence interval lengths – 1 risk factor (stock).
			Number of smaller asymptotic confidence intervals (max = 50 000 scenarios): LSMC 22 664 (45.3%), CF 27 336 (54.7%).
			Table 13: Asymptotic confidence interval lengths – 2 risk factors (stock, interest rates).
			Number of smaller asymptotic confidence intervals (max = 50 000 scenarios): LSMC 10 656 (21.3%), CF 39 344 (78.7%).
			Table 14: Asymptotic confidence interval lengths – 3 risk factors (stock, IR, corporate spread).
			Number of smaller asymptotic confidence intervals (max = 50 000 scenarios): LSMC 18 803 (37.6%), CF 31 197 (62.4%).
			Table 15: Asymptotic confidence interval lengths – 4 risk factors (stock, IR, corporate spread, sovereign spread).
			Number of smaller asymptotic confidence intervals (max = 50 000 scenarios): LSMC 17 513 (35.0%), CF 32 487 (65.0%).
			5.4.5 Conclusion on the empirical tests
			Two major comments can be made after having studied these results.

			<page>35 <lb/></page>

			First, it is important to go further than just considering the results obtained under the homoskedasticity assumption. If these results alone are observed, the LSMC approach seems to be the best methodology in most cases. This is no longer the case when one studies the results obtained without the homoskedasticity assumption. Note that the heteroskedastic framework is more robust in general and is much more realistic here, considering the Breusch-Pagan tests.
			Second, we can assume that the heteroskedasticity shape has a great impact on the efficiency comparison between the CF and LSMC methodologies in a finite sample framework. In particular, it directly modifies the $\hat{V}_{White}$ estimator. One should note that there are several econometric methods to reduce the heteroskedasticity of our models that have not been tested here. For more insight on these approaches see Greene [15].
			In any case, our study does not evidence any superiority of the LSMC over the CF methodology. However, we have clearly seen, throughout the implementation, that the small number of outcomes considered in the CF approach leads to statistical issues when assessing homoskedasticity tests and confidence intervals. The problem seems to come from the fact that there are too few outcomes to get through the sample bias embedded in the secondary scenario tables used to calculate the $\widehat{NAV}_{0^{+}}$ outcomes. By contrast, the sample bias that comes from the LSMC scenarios is mitigated across the primary simulations. Eventually, the squared errors of the CF implementation are lower than they would be if calculated on more outcomes, which leads to artificially small confidence intervals. This phenomenon clearly becomes more and more important as the number of risk factors / regressors rises.
			To conclude, we can only advise practitioners to prefer an LSMC methodology to assess approximate NAV outcomes. The heteroskedasticity tests may always lead to a rejection of the homoskedasticity assumption, but the confidence intervals obtained will always be more robust than those of a CF approach.
			Note that this implementation and its conclusions correspond to a specific (but realistic) empirical framework. The authors did not aim at drawing general conclusions on the use of parametric proxies for life insurance NAV projections. This section, initially only aiming at challenging the generally agreed idea that the LSMC methodology is more robust than CF in a large dimension context, is eventually a good opportunity to raise proxy implementation issues such as heteroskedasticity management and the assessment of asymptotic confidence intervals.
			The authors note that Subsection 5.4 could have been completed with a comparison of the CF and LSMC results with real values of NAV. However, the real NAV outcomes are unobservable in practice and good estimators imply a great number of secondary scenarios. We wanted here to stay in a practical scheme, with strong algorithmic constraints.
			6 Conclusion
			The continuous compliance requirement is a key strategic issue for European insurers. In this article, we have presented the various characteristics of this problem and provided a monitoring tool to address it.
Our monitoring scheme is based on the implementation of parametric proxies, already used among insurance players to project

			<page>36 <lb/></page>

			the Net Asset Value over time, adapted to fit the ORSA continuous compliance requirements. The tool has been implemented on a realistic life insurance portfolio to present the main features of both the development and the practical use of the monitoring scheme. In particular, several other relevant possible uses of the tool have been presented.
			In the last section we have seen that the comparison of the Curve Fitting and Least Squares Monte Carlo methodologies, in a finite sample framework and under an increasing dimensioning scheme, did not lead to firm conclusions, but that Least Squares Monte Carlo raised fewer statistical issues, especially when assessing robust asymptotic confidence intervals. In addition, this section has been an opportunity to raise several practical issues concerning the use of polynomial proxies (both Least Squares Monte Carlo and Curve Fitting) in our framework (a life insurance savings product), such as heteroskedasticity management and the calculation of asymptotic confidence intervals.
			Note that the monitoring tool only provides approximate values and is based on assumptions that can be discussed. The authors acknowledge that the modelling choices can lead to errors. In particular, we can only advise the future users of our tool to update the proxies frequently in order to make sure that the underlying stability assumptions remain reasonable. One of the future axes to investigate is clearly to aim at a better control of the error and to address in depth the issue of the proxies' recalibration frequency.
			In addition, the possibility to add asset-mix weights to the monitored risk factors set should be tested. This would greatly help asset managers to select optimal asset mixes, consistently with the risk strategy of the undertaking.
			Eventually, we intend to investigate the various possibilities provided by econometric theory to optimize the proxies' calibration process, in order to decrease the heteroskedasticity of our models and the volatility of the obtained estimators.
			
		</body>
			
		<back>	
					
			<div type="acknowledgement">Acknowledgement
			The authors would like to address very special thanks to Fabien Conneau, Laurent Devineau and Christophe Vallet for their help throughout the writing of this paper. We would also like to thank Stéphane Loisel for his relevant comments during the final review of the article.
			Moreover, we would like to extend our thanks to all the employees of Milliman Paris, and in particular to the members of the R&amp;D team.</div>
			
			<listBibl>References
			[1] Algorithmics. Curve Fitting for Calculating Solvency Capital Requirements under Solvency II: Practical insights and best practices from leading European Insurers, 2011.
			[2] François Bonnin, Frédéric Planchet, and Marc Juillard. Calculs de best estimate de contrats d'épargne par des formules fermées : application à l'ORSA. Les cahiers de recherche de l'ISFA, 2012.
			[3] Trevor S Breusch and Adrian R Pagan. A simple test for heteroscedasticity and random coefficient variation. Econom.: J. of the Econom. Soc., pages 1287–1294, 1979.

			<page>37 <lb/></page>

			[4] Mark Broadie, Yiping Du, and Ciamac C Moallemi. Efficient risk estimation via nested sequential simulation. Manag. Sci., 57(6):1172–1194, 2011.
			[5] European Commission. Draft Implementing measures Solvency II, 2011.
			[6] European Commission. Directive 2009/138/EC of the European Parliament and of the Council of 25 November 2009 on the taking-up and pursuit of the business of Insurance and Reinsurance (Solvency II), 2009. 17/12/2009, L335/1.
			[7] European Commission et al. QIS5 technical specifications. Brussels, p. 152, 2010.
			[8] Bruno Crépon and Nicolas Jacquemet. Econométrie: Méthodes et Applications. De Boeck, 2010.
			[9] Laurent Devineau. La méthode des simulations dans les simulations. Mise en oeuvre d'un modèle actif/passif en assurance-vie : quelles techniques ?, Part 2. Milliman, 2011. Slides.
			[10] Laurent Devineau and Matthieu Chauvigny. Replicating portfolios: calibration techniques for the calculation of the Solvency II economic capital. Bull. Fr. d'Actuar., 21:59–97, 2011.
			[11] Laurent Devineau and Stéphane Loisel. Risk aggregation in Solvency II: How to converge the approaches of the internal models and those of the standard formula? Bull. Fr. d'Actuar., 9(18):107–145, 2009.
			[12] Francis X Diebold and Canlin Li. Forecasting the term structure of government bond yields. J. of Econom., 130(2):337–364, 2006.
			[13] Norman R Draper, Harry Smith, and Elizabeth Pownell. Applied regression analysis, volume 3. Wiley, New York, 1966.
			[14] Stephen M Goldfeld and Richard E Quandt. Some tests for homoscedasticity. J. of the Am. Stat. Assoc., 60(310):539–547, 1965.
			[15] William H Greene. Econometric Analysis, International edition. New York University, 2003.
			[16] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The elements of statistical learning, 2009.
			[17] Barrie &amp; Hibbert. A Least Squares Monte Carlo approach to liability proxy modelling and capital calculation, 2011.
			[18] Roger A Horn and Charles R Johnson. Matrix analysis. Cambridge University Press, 2012.
			[19] Robert A Jarrow, David Lando, and Stuart M Turnbull. A Markov model for the term structure of credit risk spreads. Rev. of Financ. Stud., 10(2):481–523, 1997.
			[20] Tigran Kalberer. Stochastic determination of the value at risk for a portfolio of assets and liabilities. Der Aktuar, pages 10–12, 2012.

			<page>38 <lb/></page>

			[21] Stéphane Loisel and Hans-U Gerber. Why ruin theory should be of interest for insurance practitioners and risk managers nowadays. In Proceedings of the AFMATH Conference 2012, pages 17–21, 2012.
			[22] Francis A Longstaff, Sanjay Mithal, and Eric Neis. Corporate yield spreads: Default risk or liquidity? New evidence from the credit default swap market. The J. of Financ., 60(5):2213–2253, 2005.
			[23] Filip Lundberg. I. Approximerad framställning af sannolikhetsfunktionen: II. Återförsäkring af kollektivrisker. Uppsala, 1903.
			[24] James G MacKinnon and Halbert White. Some heteroskedasticity-consistent covariance matrix estimators with improved finite sample properties. J. of Econom., 29(3):305–325, 1985.
			[25] Alain Monfort. Cours de statistique mathématique. Economica, 1988.
			[26] Teivo Pentikäinen. Solvency of insurers and equalization reserves. Volume I: General aspects, 1982.
			[27] Teivo Pentikäinen, Heikki Bonsdorff, Martti Pesonen, Jukka Rantala, and Matti Ruohonen. Insurance solvency and financial strength. Finnish Insurance Training and Publishing Company, Helsinki, 1989.
			[28] Pierre Petauton. Théorie et pratique des opérations de l'assurance vie, 2002.
			[29] Ronald L Plackett. Some theorems in least squares. Biom., 37(1/2):149–157, 1950.
			[30] Gilbert Saporta. Probabilités, analyses des données et statistiques. Editions Technip, 2006.
			[31] Alain Tosetti, Bernard Paris, Patrice Paslky, and Franck Le Vallois. Gestion Actif Passif en Assurance vie: réglementation, outils, méthodes. Economica, 2003.
			[32] Julien Vedani and Laurent Devineau. Solvency assessment within the ORSA framework: Issues and quantitative methodologies. Bull. Fr. d'Actuar., 13(25):35–71, 2013.
			[33] Julien Vedani and Pierre-Axel Virepinte. Modèle structurel de crédit dans une optique assurantielle, 2011.
			[34] Halbert White. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econom.: J. of the Econom. Soc., pages 817–838, 1980.</listBibl>

		<page>39 </page>

		</back>
	</text>
</tei>