When people talk about solar-PV fire risk, the first question is almost always:
“But how many PV fires are there really?”
It sounds reasonable and scientific. Insurers want loss data. Fire services want incident statistics. Policy-makers want trend graphs before they move.
The problem is simple and brutal: for PV fires, the data we have today is full of holes by design. If you wait for “perfect statistics” before you act, you are effectively choosing to stay blind.
That’s why a consequence-based approach is not just useful – it’s essential. For the next few years, calculating the consequences of a PV fire at a given site will often tell you more than any national database can.
1. Why today’s PV fire data can’t be trusted on its own
In most countries (including South Africa), PV fire statistics are weak for structural reasons, not because PV is magically safe.
1.1. PV isn’t tagged properly in incident systems
Fire incident reporting systems were not built with PV in mind. A typical form might have:
- “Electrical fire”
- “Roof fire”
- “Equipment fault”
…but no dedicated tick-box for “solar PV present” and no way to mark “PV suspected as ignition source”.
So even when a fire service attends a real PV-driven roof fire, the database might record it as:
“Electrical – other”, filed as an ordinary building fire.
The PV disappears into the noise. When you later pull national stats, PV almost doesn’t exist – not because it isn’t burning, but because it was never labelled.
1.2. Near-misses are almost invisible
The most important signals in a complex system are often not the disasters, but the “almost disasters”:
- DC connectors that melted but didn’t ignite timber
- DC isolators that smoked and were quietly replaced
- Combiner boxes that showed scorch marks or arc damage
- Cable damage found on thermography inspections
These events rarely go into a central, searchable system. An installer might fix them. An O&M contractor might log them locally. An insurer might never hear about them.
From a risk perspective, these near-misses show you how often the system tries to fail. From a data perspective, they are mostly invisible.
1.3. No regulatory or contractual duty to report
In most markets, no standard, regulation or insurance condition tells people clearly:
- “If a PV component fails or nearly ignites, you must report it to X, and they must record it like this.”
Without that, everyone assumes someone else is tracking the problem. Result:
- Installers fix and move on
- Owners claim under “electrical”
- Fire services fight the fire and close the file
You end up with national statistics that say “PV fires are rare”, but only because nobody asked the PV question on the form.
1.4. Rapid rollout + mixed quality = hidden risk
Most countries have seen a rapid, sometimes chaotic rollout of rooftop PV:
- Highly variable installer competence
- Grey-market components and poor QA
- Limited inspections or enforcement
- Harsh climate and rooftop environments
All of these increase the underlying defect and ignition risk, but not the recorded risk. The more we install without proper tracking, the bigger the gap between “what’s in the database” and “what’s happening on roofs”.
2. The actuarial trap: waiting for numbers that can’t exist
Insurers and policy-makers are used to a world where:
“We’ll act when the data is strong enough.”
For PV fire risk, that mindset becomes a trap:
- The system isn’t designed to see PV risk
- So the statistics show almost nothing
- So nobody justifies the cost of better controls
- So the system remains blind
It’s a closed loop of inaction.
In that situation, chasing perfect frequency data is less useful than asking a different question:
“If this PV system did catch fire, what would the consequences be?”
That question you can answer today, very clearly and very locally, without pretending to have national-level fire curves.
3. Consequence-based thinking: what it actually means
When we talk about “calculating consequences”, we’re not hand-waving. We’re doing a structured, repeatable assessment along three dimensions:
- People (P) – impact on life and health
  - Could anyone realistically be injured or killed?
  - How many people, how vulnerable, how complex is evacuation?
  - What about the firefighters under that live DC roof?
- Assets (A) – damage to property and equipment
  - What could be destroyed?
  - A few panels and some sheeting, or an entire roof and all stock below it?
  - Are there high-value assets: machinery, switchgear, archives, servers?
- Business / Service (B) – impact on operations
  - How long would operations or services be interrupted?
  - Is this a warehouse, a small office, a school, a hospital, a data centre?
  - Would the community lose an essential service?
In practice, you score each 1–5 (low to very high) and combine them into a Consequence score C (1–5) with clear rules and a strong bias toward life safety.
The exact maths can vary, but the essence is:
- People dominates the score
- Assets and service disruption refine the picture
- You end up with a consistent, transparent “how bad could it be here?” number
This is not guesswork. It’s structured engineering judgement, based on building type, layout, occupancy, construction, and what sits under the modules.
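As one minimal sketch of such a combination rule, here is a Python version built around a “People dominates” bias. The half-weights for Assets and Business and the rounding rule are illustrative assumptions, not values from any published standard:

```python
def consequence_score(people: int, assets: int, business: int) -> int:
    """Combine People (P), Assets (A) and Business/Service (B) scores,
    each 1-5, into a single Consequence score C (1-5).

    Illustrative rule: People dominates - C can never drop below P, and
    Assets or Business can only pull the score upward, at half weight.
    """
    for name, score in (("People", people), ("Assets", assets), ("Business", business)):
        if not 1 <= score <= 5:
            raise ValueError(f"{name} score must be 1-5, got {score}")

    weighted = (people
                + 0.5 * max(assets - people, 0)
                + 0.5 * max(business - people, 0))
    return min(5, int(weighted + 0.5))  # round half up, cap at 5


# Example: a small, rarely occupied shed vs a hospital ward full of patients
print(consequence_score(people=1, assets=2, business=1))  # -> 2
print(consequence_score(people=5, assets=4, business=5))  # -> 5
```

Under these assumed weights, the rarely occupied shed scores C = 2 while the hospital ward scores C = 5 – the life-safety bias does the work.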
4. Why consequence matters more than incomplete PV fire data
4.1. Consequence tells you where you cannot afford to be wrong
Imagine two roofs:
- A small farm shed with a 5 kW array over non-combustible sheeting, rarely occupied.
- A hospital ICU ward with a dense array over combustible roofing, full of oxygen, cables and vulnerable patients.
National statistics might show “very few PV fires”. But if we are honest:
- A PV fire on the shed is unfortunate.
- A PV fire on the ICU is catastrophic.
The consequence is completely different, even if the likelihood of an ignition event is similar.
So as a fire service, insurer or risk manager, which one keeps you awake at night? The ICU, every time.
Consequence-based scoring forces you to prioritise the sites where being wrong has the highest price, regardless of how incomplete the fire database still is.
4.2. Consequence is local and knowable today
You don’t need ten years of perfect incident data to know:
- This school’s only escape route is under a live array.
- This warehouse holds flammable stock under a massive DC field.
- This clinic’s emergency power is critical for patient survival.
Those are site facts, not statistics. You can inspect them, document them, and score them now.
4.3. Consequence guides sensible, targeted controls
Once you understand C (the consequence), you can tailor your effort:
- High-consequence sites (hospitals, malls, big schools, critical infrastructure):
  - Clear shutdown / de-energisation methods
  - Stricter design standards and component choices
  - More frequent inspections and thermography
  - Mandatory, PV-specific firefighter SOPs and tools
- Lower-consequence sites (small standalone sheds, remote carports):
  - Proportionate requirements, fewer bells and whistles
  - Lower inspection frequency
You get more safety per Rand spent, because you are not treating every roof like an ICU, and you are not pretending that all roofs are equally harmless.
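A minimal sketch of this tiering in Python; the C ≥ 4 threshold and the exact control lists are assumptions for illustration, not requirements drawn from any standard or regulation:

```python
# Illustrative mapping from Consequence score C (1-5) to a control package.
HIGH_CONSEQUENCE_CONTROLS = [
    "clear shutdown / de-energisation method",
    "stricter design standards and component choices",
    "more frequent inspections and thermography",
    "mandatory, PV-specific firefighter SOPs and tools",
]
LOWER_CONSEQUENCE_CONTROLS = [
    "proportionate design requirements",
    "lower inspection frequency",
]

def controls_for(consequence: int) -> list[str]:
    """Pick a control package based on the site's Consequence score (1-5)."""
    return HIGH_CONSEQUENCE_CONTROLS if consequence >= 4 else LOWER_CONSEQUENCE_CONTROLS

print(controls_for(5))  # hospital ICU ward -> full high-consequence package
print(controls_for(1))  # small farm shed  -> proportionate package
```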
5. Where likelihood still fits in (and how we use data honestly)
None of this says that frequency data is useless. It’s just not ready to drive decisions on its own.
The mature way to think about PV fire risk is:
Risk = Likelihood × Consequence
- Consequence (C) you can score reliably today using a structured framework (People, Assets, Business).
- Likelihood (L) you derive from technical risk factors:
  - DC connectors and isolators
  - Workmanship quality
  - Array design
  - Environmental exposure
  - Maintenance and inspection regime
As we improve data capture — better incident tags, near-miss reporting, insurance flagging — the L side can be refined using real numbers:
- “Connectors of type X, in climate Y, with age Z, show this failure rate.”
But we don’t have to sit on our hands while we wait for that future. We can act now on high-consequence sites based on what we already know about fire behaviour, building use, and PV failure modes.
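As a rough sketch of how the two sides can be combined today, the snippet below counts adverse technical factors into a 1–5 Likelihood score and bands the product L × C. The factor names, weights and band thresholds are illustrative assumptions, not values derived from incident data:

```python
# Illustrative only: factor names, weights and band thresholds are assumptions.
ADVERSE_FACTORS = {
    "suspect_dc_connectors_or_isolators": 1,
    "poor_workmanship_or_no_qa_records": 1,
    "high_risk_array_design": 1,
    "harsh_environmental_exposure": 1,
    "no_maintenance_or_thermography_regime": 1,
}

def likelihood_score(present: set[str]) -> int:
    """Map the adverse technical factors present at a site to a 1-5 score."""
    return min(5, 1 + sum(ADVERSE_FACTORS[f] for f in present))

def risk_band(likelihood: int, consequence: int) -> str:
    """Band the product of a classic 5x5 matrix: Risk = L x C."""
    product = likelihood * consequence
    if product >= 15:
        return "high"
    if product >= 8:
        return "medium"
    return "low"

# Example: a high-consequence site (C = 5) with three adverse factors present
L = likelihood_score({"suspect_dc_connectors_or_isolators",
                      "harsh_environmental_exposure",
                      "no_maintenance_or_thermography_regime"})
print(L, risk_band(L, consequence=5))  # -> 4 high
```

As better incident and near-miss data arrives, the assumed factor weights can be replaced with real failure rates without changing the structure of the assessment.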
6. The practical message for decision-makers
If you’re an insurer, a fire service, a facilities manager or a regulator, the takeaway is simple:
- Accept that current PV fire stats are under-reporting by design. The lack of numbers does not equal a lack of risk.
- Start evaluating consequence at site level. Use a transparent 1–5 scale for People, Assets and Business / Service, and combine them into a single Consequence score.
- Prioritise high-consequence roofs and systems now. These are the ones where a single PV-initiated fire can produce deaths, huge losses or public scandal.
- Improve data in parallel, not first. Add PV tick-boxes to incident reports, encourage near-miss reporting, and ask insurers to flag PV-related claims – but don’t wait for the perfect dataset before you move (a sketch of the kind of record this implies follows this list).
- Use the formula, but respect the reality. Risk = Likelihood × Consequence. Right now, Consequence is the side we can measure best. Use it.
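To make the “improve data in parallel” point concrete, here is a hypothetical minimal record showing the kind of PV fields an incident or near-miss report could carry. The field names are assumptions for illustration, not an existing reporting standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FireIncidentRecord:
    """Hypothetical incident / near-miss record with explicit PV fields."""
    incident_id: str
    cause_category: str                          # e.g. "Electrical - other"
    pv_present: bool = False                     # the missing tick-box
    pv_suspected_ignition: bool = False          # PV flagged as suspected ignition source
    near_miss: bool = False                      # melted connector, scorched isolator, etc.
    pv_component_involved: Optional[str] = None  # "connector", "isolator", "combiner box", ...
    notes: str = ""

# The same roof fire, recorded without and with the PV fields:
today = FireIncidentRecord("2025-0413", "Electrical - other")
better = FireIncidentRecord("2025-0413", "Electrical - other",
                            pv_present=True, pv_suspected_ignition=True,
                            pv_component_involved="DC isolator")
```

Only the second record ever shows up when someone later asks the database how many PV-related fires there were.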
Final thought
In a world where the PV fire dataset is full of holes, insisting on “perfect numbers” before taking PV safety seriously is like refusing to wear a seatbelt until you’ve personally reviewed every crash report in the country.
We already know enough to see where a PV fire would be devastating.
Consequence-based assessment simply forces us to admit it — and act accordingly.
Contact your PVSTOP Technical partner today.

