Reinsurance pricing has a reputation for being complex and heavily dependent on how it has always been done. To some extent, that reputation is earned. Treaty structures can be complicated: multi-line, multi-currency, and packed with contract features that change the economics in subtle ways.

The data can also be incomplete or inconsistent, while the timeline is unforgiving, especially around 1/1 when decisions are made quickly and repeated at scale.

But complexity does not have to mean confusion.

This guide breaks down the most common reinsurance pricing methods in simple language, while still going deep enough to reflect how pricing is actually done in professional teams. Along the way, we’ll explain what a reinsurance pricing model is, how it is built, when each method works best, and where things commonly go wrong.

We also share how modern workflows can reduce the time spent formatting, reconciling and rebuilding the same logic every year.

What is a reinsurance pricing model?

A reinsurance pricing model is a structured way to estimate the expected cost of a treaty and translate that cost into a premium that is commercially viable, technically defensible and consistent with your risk appetite.

In practice, a pricing model does three jobs.

First, it estimates expected losses, which may be based on historic experience, exposure-based assumptions, simulation, or a mixture of these.

Second, it reflects the structure of the cover. Reinsurance contracts are not priced “ground-up” in the same way as many direct insurance products. Attachment points, limits, reinstatements, aggregate features, profit commissions, sliding scales, event definitions and hours clauses can all change how losses flow into the treaty layer.

Third, the model translates expected losses into a price. That translation typically involves loadings for expenses, commissions and brokerage, as well as a margin for risk and profit. Some organisations explicitly reflect the cost of capital or modelled volatility. Others embed it more informally through target returns, pricing adequacy tests or portfolio steering.
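As a minimal sketch of that translation step, assuming illustrative loading ratios (the 10%/10%/5% figures below are placeholders, not market parameters) expressed as shares of gross premium:

```python
def technical_premium(expected_loss, expense_ratio=0.10, brokerage=0.10,
                      profit_margin=0.05):
    """Gross up an expected loss into a technical premium.

    Loadings are expressed as shares of gross premium, so the
    identity is: premium * (1 - total loadings) = expected loss.
    """
    loadings = expense_ratio + brokerage + profit_margin
    if loadings >= 1.0:
        raise ValueError("loadings must sum to less than 100% of premium")
    return expected_loss / (1.0 - loadings)

premium = technical_premium(750_000)   # expected loss of 750k
loss_ratio = 750_000 / premium         # implied expected loss ratio
```

With 25% of premium absorbed by loadings, a 750k expected loss grosses up to a 1m technical premium and a 75% implied loss ratio.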

If you are new to this area, it helps to remember a simple idea: pricing is not just maths. It is the combination of data, assumptions, contract interpretation, commercial judgement, and communication.

Reinsurance pricing basics in plain English

Before we get into the reinsurance pricing methods, it helps to define a few concepts you will see repeatedly.

Expected loss and loss ratios

The expected loss is the average loss you anticipate over the treaty period, based on your model and assumptions. It is often expressed as a loss ratio (expected loss divided by premium) or as an expected cost in currency terms.

Learn more: What Is the Expected Loss Cost of an Insurance Policy?

Layers and how losses enter the cover

For excess of loss, you are typically pricing a layer. Losses below the attachment do not affect the treaty. Losses above the attachment may be capped by the limit. This sounds simple until you add aggregation, reinstatements, multiple sections, multiple lines of business or different definitions of occurrence.
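The mechanics of the simple per-loss case, before any of those complications, fit in one line:

```python
def loss_to_layer(ground_up_loss, attachment, limit):
    """Loss recovered from an excess-of-loss layer: `limit` xs `attachment`."""
    return min(limit, max(0.0, ground_up_loss - attachment))

# a 5m loss into a 3m xs 2m layer exhausts the full 3m limit
full = loss_to_layer(5_000_000, attachment=2_000_000, limit=3_000_000)
# a 1.5m loss never reaches the attachment
none = loss_to_layer(1_500_000, attachment=2_000_000, limit=3_000_000)
# a 2.6m loss contributes the 600k above the attachment
part = loss_to_layer(2_600_000, attachment=2_000_000, limit=3_000_000)
```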

Development, trending and rate changes

Historic losses are rarely “ready to price”. They may need development to ultimate, trending to a future cost level, and adjustment for exposure changes and rate changes. Rate change information is often missing or inconsistent, yet it can materially affect the interpretation of loss experience.

Expenses, commissions, brokerage and profit load

Reinsurance pricing tends to involve multiple outflows beyond losses. A practical model reflects the economics that matter to the decision, whether that is a technical premium, a target price or a market-facing street price.

What you actually price to

Different teams focus on different outputs, depending on treaty type and market practice. Common ones include:

  • Rate on line (premium divided by limit) for excess layers
  • Technical premium or risk premium for proportional business
  • Combined ratio or expected underwriting result for internal sign-off
  • Expected profit and variability measures for portfolio steering
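The first of those outputs is a direct calculation. A minimal sketch, with made-up figures for illustration:

```python
def rate_on_line(premium, limit):
    """Rate on line: layer premium divided by layer limit."""
    return premium / limit

# a 1m premium for a 10m limit is a 10% rate on line;
# its reciprocal is the payback period in loss-free years
rol = rate_on_line(1_000_000, 10_000_000)
payback_years = 1 / rol
```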

With those basics in place, we can look at the three broad families of methods used across the market.

The three families of reinsurance pricing methods

Most reinsurance pricing methods sit within one of these families, even if the method names vary between organisations.

1) Experience-based methods

These rely primarily on the cedant’s historic loss experience, adjusted to reflect your view of the future and mapped into the treaty layer.

You’ll hear terms like experience rating, burning cost, loss ratio methods and in-layer experience analysis.

2) Exposure-based methods

These rely on exposure information rather than treaty loss history. They are often used when experience data is sparse, unreliable, or not sufficiently granular.

You’ll hear terms like exposure rating, exposure curves, loss elimination ratios, ILFs (increased limits factors), excess factors, power curves and severity curves.

3) Model-based or simulation-based methods

These build up losses from frequency-severity models or event-based models, sometimes incorporating dependence, aggregation and stochastic variability.

You’ll see these most often in catastrophe reinsurance, aggregate covers and more complex multi-line programmes, especially when the goal is to understand risk distribution, not just an average.

In real-world pricing, teams often blend methods, because each method has a different failure mode. A good pricing process is about triangulation, not blind faith.

Experience rating: pricing based on historic treaty loss experience

Experience rating is often the first place teams go when the data is credible. It aligns with how a lot of underwriters think: “What has this account done, and what will it do next?”

At its core, experience rating is a structured way of taking historic losses and turning them into an expected future cost for the cover you are pricing.

The basic steps of experience rating

Most experience rating workflows include the following steps, whether they are written down explicitly or baked into templates.

First, you clean and reconcile the data. That means aligning premiums, claims and treaty terms across years, and checking whether you are looking at the right basis.

Second, you adjust losses into the layer. A large share of mistakes come from this step. If a loss is reported gross of the treaty, it needs allocation to the treaty layer. If the data is already net of retention, you need clarity on what that “net” means. If claims are provided without sufficient detail, the layer allocation becomes a judgement call, and those judgement calls compound over renewals.

Third, you develop losses to ultimate. Many treaty submissions include triangles, but the basis and completeness vary. Development assumptions can dominate the result, particularly for long-tail casualty.

Fourth, you trend losses to the future. Even in property, inflation and social inflation can change severity. In casualty, trends can be particularly unstable when you combine changes in claims environment, limits and court behaviours.

Fifth, you adjust premiums and exposure. If premium is moving because of rate changes and exposure shifts, a raw historic loss ratio can be misleading. In practice, this is where many pricing analyses become fragile, because rate change information is missing or not defined consistently.

Finally, you select an expected loss. Selection is not just arithmetic. It is a decision that balances data credibility, contract interpretation, portfolio context and market conditions.
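The steps above can be compressed into a toy burning-cost calculation. The trend rate, data shape and figures are illustrative assumptions, and development to ultimate is omitted to keep the sketch short:

```python
def burning_cost(history, attachment, limit, severity_trend=0.05,
                 as_of_year=2025):
    """Toy burning-cost rate for an excess-of-loss layer.

    `history` maps treaty year -> (on-level subject premium, list of
    ground-up losses). Losses are trended to the prospective year,
    allocated to the layer, and divided by on-level premium.
    """
    total_layer_loss = 0.0
    total_premium = 0.0
    for year, (premium, losses) in history.items():
        trend = (1 + severity_trend) ** (as_of_year - year)
        for loss in losses:
            trended = loss * trend                            # trend to future cost level
            total_layer_loss += min(limit, max(0.0, trended - attachment))
        total_premium += premium
    return total_layer_loss / total_premium                   # in-layer loss cost rate

history = {
    2022: (10_000_000, [2_500_000, 900_000]),
    2023: (11_000_000, [4_000_000]),
}
rate = burning_cost(history, attachment=2_000_000, limit=3_000_000)
```

Note how the 900k loss only matters because trending could (in other scenarios) push it over the attachment, which is exactly why trend assumptions dominate thin in-layer experience.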

Where experience rating shines

Experience rating works best when:

  • The treaty has stable structure year-on-year
  • Loss data is credible and sufficiently developed
  • Exposure and underwriting approach have not fundamentally shifted
  • The portfolio is large enough that experience reflects underlying risk rather than a few random outcomes

Where experience rating breaks down

Experience rating becomes unreliable when:

  • Data is incomplete, inconsistent or lacks detail on large losses and development
  • Treaty terms have changed materially, especially attachment, limit or aggregation
  • The business mix has shifted, for example new territories, new perils, or different limits profiles
  • The loss record is dominated by one event, one claim, or a small number of losses

In such cases, exposure rating often becomes the primary method, with experience rating used as a sense-check.

Exposure rating: pricing when experience is not enough

Exposure rating is sometimes misunderstood as “what you do when you have no data”. In reality, it is what you do when the loss experience does not answer the question you actually need to price.

For example, a risk excess of loss treaty might have few claims in the layer, but the exposure profile might contain meaningful information about the potential for large losses. Similarly, a treaty might have a short experience window that does not capture the tail behaviour you care about.

Exposure rating uses information about the insured portfolio and applies a severity curve, ILFs, or other factors to estimate the expected loss cost in a given layer.

There are two broad approaches to exposure rating:

Exposure curves and loss elimination ratios

In property-style modelling, exposure curves often represent the relationship between the size of loss and the amount at risk, expressed as a proportion of sum insured or total insured value. A loss elimination ratio is a way of translating a cap or limit into a reduction in expected loss.

This approach can work well where sum insured is a meaningful proxy for maximum loss and where the peril behaviour supports the assumptions.
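The mechanic can be sketched as follows, using a deliberately simple stand-in curve rather than a published market curve:

```python
def layer_share(exposure_curve, attachment, limit, sum_insured):
    """Share of a risk's ground-up expected loss falling in the layer.

    `exposure_curve` is G(d): the fraction of expected loss retained
    below deductible d, where d is expressed as a fraction of sum
    insured, with G(0) = 0 and G(1) = 1. The layer share is the
    difference of G at the layer's top and bottom.
    """
    d_bottom = min(attachment / sum_insured, 1.0)
    d_top = min((attachment + limit) / sum_insured, 1.0)
    return exposure_curve(d_top) - exposure_curve(d_bottom)

# illustrative concave curve; a stand-in, not a calibrated curve
g = lambda d: d ** 0.5

share = layer_share(g, attachment=500_000, limit=500_000,
                    sum_insured=2_000_000)
expected_layer_loss = share * 100_000   # x expected ground-up loss
```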

ILFs, excess factors and power curves

In liability and specialty reinsurance, the relationship between limit purchased and loss potential is more complex. Limit is not necessarily a proxy for exposure – policyholders choose limits strategically, and the tail behaviour can be heavy.

This is where ILFs (increased limits factors) and power curves are commonly used. They provide a practical way to estimate how expected loss changes as limits increase, and how to allocate loss costs into layers.

The power curve approach is widely used in parts of the London Market. It has “nice” mathematical properties, including scale invariance across currency, and it can be implemented in a transparent, closed form. That said, the assumptions still matter, especially around deductibles, original attachments, stacking and the treatment of claims expenses.
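Under a power-curve assumption, layer loss costs follow directly from ILF differences. The exponent below is an illustrative placeholder, not a market parameter:

```python
BASE_LIMIT = 1_000_000
B = 0.6   # illustrative power-curve exponent (0 < B < 1)

def ilf(limit):
    """Power-curve ILF relative to the base limit, so ILF(base) = 1.
    The ratio ilf(a) / ilf(b) is unchanged if all amounts are rescaled,
    which is the scale-invariance property mentioned above."""
    return (limit / BASE_LIMIT) ** B

def layer_loss_cost(base_loss_cost, attachment, limit):
    """Expected loss in `limit` xs `attachment` from ILF differences."""
    return base_loss_cost * (ilf(attachment + limit) - ilf(attachment))

# loss cost of 100k at the 1m base limit, allocated to 4m xs 1m
cost = layer_loss_cost(100_000, attachment=1_000_000, limit=4_000_000)
```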

Exposure rating is powerful, but it is only as good as the assumptions and the quality of the exposure information. When limits profiles are incomplete, when exposure splits are too coarse, or when inflation and mix shifts are ignored, exposure rating can generate a false sense of precision.

The most effective teams treat exposure rating as a disciplined framework for judgement, not as a replacement for judgement.

Frequency-severity models: building a reinsurance pricing model from first principles

Frequency-severity models sit between experience rating and catastrophe modelling.

Instead of starting with historic in-layer losses, you start by modelling:

  • how often claims occur (frequency)
  • how large claims are when they occur (severity)

You then combine these to estimate expected losses and, if needed, simulate a distribution of outcomes.
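The combination step can be sketched as a Monte Carlo simulation. The Poisson and lognormal choices, and all parameters below, are illustrative assumptions rather than calibrated values:

```python
import numpy as np

def simulate_layer_losses(n_sims, freq_mean, sev_mu, sev_sigma,
                          attachment, limit, seed=0):
    """Monte Carlo frequency-severity model for one excess layer.

    Draws Poisson claim counts per simulated year, lognormal claim
    sizes per claim, and maps each claim into the layer.
    """
    rng = np.random.default_rng(seed)
    annual = np.zeros(n_sims)
    counts = rng.poisson(freq_mean, size=n_sims)
    for i, n in enumerate(counts):
        sev = rng.lognormal(sev_mu, sev_sigma, size=n)   # ground-up losses
        annual[i] = np.minimum(limit, np.maximum(0.0, sev - attachment)).sum()
    return annual

losses = simulate_layer_losses(10_000, freq_mean=3.0, sev_mu=13.0,
                               sev_sigma=1.2, attachment=1_000_000,
                               limit=2_000_000)
expected = losses.mean()            # expected annual layer loss
p99 = np.quantile(losses, 0.99)     # one possible variability measure
```

The same simulated sample gives both the expected value and the distribution, which is the point of going beyond a single-number method.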

Why frequency-severity modelling is useful in reinsurance

Frequency-severity modelling helps when:

  • You want to separate claim count changes from claim size changes
  • You need to adjust for changes in attachment or limit
  • You want a coherent view across multiple covers or perils
  • You need to stress scenarios, not just produce a single expected value

The practical considerations that matter

This is where many “textbook” explanations fall short. In reinsurance, you often face:

  • Censoring and truncation, because you only observe claims above a certain threshold, or because data is capped
  • Dependency, because claims are not independent, especially around event-driven losses or correlated exposures
  • Aggregation, because treaty structures bundle risks across locations, policyholders, or lines of business
  • Sparse data, because you may have limited history for a specific structure or segment

A robust reinsurance pricing model makes those limitations explicit. It does not pretend that thin data can support overly granular conclusions.
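As one example of making truncation explicit, a heavy-tail index can be fitted using only losses reported above a threshold. This is a minimal sketch: the Pareto assumption is illustrative, and capped (right-censored) claims would need an extra likelihood term this version omits.

```python
import math

def pareto_alpha_above_threshold(losses, threshold):
    """MLE of the Pareto tail index from left-truncated data.

    Uses only exceedances of the reporting threshold, so the
    estimate is conditional on observing a loss above it.
    """
    exceedances = [x for x in losses if x > threshold]
    if not exceedances:
        raise ValueError("no losses above threshold")
    return len(exceedances) / sum(math.log(x / threshold)
                                  for x in exceedances)

alpha = pareto_alpha_above_threshold(
    [1.2e6, 2.5e6, 4.0e6, 1.1e6, 1.8e6], threshold=1e6)
```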

Catastrophe and aggregate modelling: when you need event-based views

For catastrophe excess of loss, event-based models are often central to pricing. These models simulate event losses based on hazard, vulnerability, exposure and financial terms.

Even if you do not build cat models in-house, understanding what they do helps you interpret outputs sensibly. Cat model outputs are not “the truth”. They are a structured set of assumptions. They should be reconciled with exposure, historic events and market intelligence.

Aggregate covers and stop loss present a different challenge: you care about the accumulation of many losses, not just a single large loss. That often requires an understanding of correlation, frequency variability and the way underwriting changes affect aggregate outcomes.

How real treaty pricing is done: blending methods

Most high-performing teams do not pick one method and stop. They triangulate.

A practical workflow often looks like this:

  • You start with data sanity checks, confirm the basis of the data, reconcile prior-year numbers and identify missing elements early.
  • You build an experience view – even if it is thin, it provides context.
  • You build an exposure view, as this gives you a forward-looking lens that can still work when experience is distorted.
  • You reconcile and select, looking at why the methods differ and what that difference is telling you.
  • You stress and scenario test – testing sensitivity to key assumptions, because it is rare that the “central” view is the only view that matters.

This blended approach also improves communication. Underwriters and decision makers tend to trust analyses that explain uncertainty and trade-offs, rather than analyses that present a single number with false confidence.

What “good” looks like in a modern reinsurance pricing process

Most pricing frustrations are not caused by the maths. They are caused by the workflow.

A modern treaty pricing stack tends to have these characteristics:

Clean data in, clean data out

The team can capture and validate submissions, premiums, claims and exposure profiles in a consistent way. The same data feeds analysis and reporting, rather than being retyped or reformatted repeatedly.

Methods on tap

Standard methods are accessible and configurable without rebuilding the wheel every renewal. That means a consistent approach across the team, with room for judgement where judgement belongs.

Workflow that mirrors underwriting

The pricing record follows the same narrative as underwriting. Submission, analysis, selections, commentary, approvals and outputs live together.

Collaboration by design

Parameters, benchmarks and assumptions can be shared across team members in real time. Peer review is built into the workflow, rather than relying on emailing spreadsheets.

Performance that keeps up with reality

The platform responds quickly enough to support commercial conversations. Waiting hours for a scenario run creates behavioural incentives to avoid running scenarios at all.

If that sounds aspirational, it does not have to be. Many teams can move towards this incrementally, as long as they are clear about the “why”.

Common pitfalls that undermine pricing quality

Even experienced teams can fall into traps that weaken pricing decisions. These are some of the most common.

Confusing data bases and definitions

Accident year, underwriting year, policy year, paid vs incurred, gross vs net, inflation treatment and rate change definitions can all change the meaning of the same dataset. A pricing model should be explicit about these choices.

Double counting and mismatch issues

One underlying claim can appear as multiple records. Event losses can be split across policies. Data exports can create duplicates. These issues are easy to miss when teams are stitching data manually.

Silent drift across renewals

When templates evolve informally, small logic changes can creep in. A pricing assumption that used to be applied consistently becomes inconsistent over time, especially when files are copied, modified and passed around.

Process bottlenecks and single points of failure

When one spreadsheet is owned by one person, the team’s ability to work scales poorly. That can slow renewals, limit peer review and increase operational risk.

These pitfalls are not “Excel problems” alone. They are workflow and governance problems. Excel just makes them easier to hide.

How MatBlas can help: training, advisory, and practical tooling

Reinsurance pricing is a craft. It develops faster with the right training, framework and tools.

For teams that want to strengthen capability, structured learning matters. Some people need the fundamentals. Others need practical depth in specific methods such as exposure rating, ILFs, claims development, or treaty structuring. Often, the biggest gains come from bringing underwriters, actuaries and analysts onto the same page, with shared language and shared modelling logic.

Learn more about our Actuarial Training services.

For teams that need support on live work, external advisory can help with:

  • reviewing pricing models and assumptions
  • designing consistent pricing frameworks
  • improving data capture and benchmarking
  • strengthening governance, documentation, and audit readiness

Learn more about our Actuarial Consultancy services.

SmartRe and modern treaty pricing workflows

A recurring theme in this guide is that pricing quality depends on workflow as much as it depends on method.

SmartRe was built around that reality. It is not about replacing actuarial judgement or reinventing the maths. It is about giving treaty pricing teams a platform that matches how reinsurance actually works.

In practice, modern platforms can shift time away from data wrangling and towards the activities that create value during renewals: analysis, selection, negotiation, peer review, and portfolio insight.

Learn more about our SmartRe platform.

FAQs

What are reinsurance pricing methods?

Reinsurance pricing methods are the approaches used to estimate expected treaty losses and translate them into a premium, typically using experience rating, exposure rating, frequency-severity models, catastrophe models, or a mixture of two or more of these.

What is a reinsurance pricing model?

A reinsurance pricing model is the structure, logic and assumptions used to price a treaty. It typically covers data preparation, loss estimation, mapping to treaty terms, and translating expected losses into a premium and performance view.

What is the difference between experience and exposure rating?

Experience rating relies primarily on historic loss experience. Exposure rating relies primarily on exposure information and severity curves or ILFs, often used when experience is limited or not representative.

Why do reinsurance teams still use Excel?

Excel is familiar and flexible, but it can create version control issues, audit challenges, and scaling limits as treaty complexity and data volumes grow. Many teams stay with Excel because alternatives feel rigid, slow, or too dependent on IT.

Ana Mata

Managing Director and Actuary