Collecting and processing climate data
The ‘rigorous’ climate record goes back to the beginning of the 20th century. It comprises data from tens of thousands of weather stations on land, with equivalent data for the oceans drawn from ship logs.
Analysis of this vast data set must account for various potential biases, such as uneven spatial coverage and changes in instrumentation. Put plainly, some areas produced lots of data while others produced very little, which can skew the results unless robust, well-tested methods are used to fill the gaps. Scholes explains that the analysis is not simply a matter of “taking an average” of all the data over time.
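Scholes’ point can be illustrated with a toy calculation. The sketch below uses entirely hypothetical station readings to show how a naive average over-weights densely sampled regions, while gridding stations into latitude bands and weighting each band by its surface area corrects for this:

```python
import math

# Hypothetical station readings: (latitude, temperature anomaly in degrees C).
# The mid-latitudes are densely sampled; the tropics have a single station.
stations = [
    (50.0, 1.2), (51.0, 1.1), (52.0, 1.3), (49.0, 1.2),  # dense cluster
    (0.0, 0.4),                                           # lone tropical station
]

# Naive average: the cluster dominates, overstating the global figure.
naive = sum(t for _, t in stations) / len(stations)

# Gridded average: first average the stations within each 10-degree latitude
# band, then weight each band by surface area (proportional to cos(latitude)).
bands = {}
for lat, t in stations:
    bands.setdefault(round(lat / 10), []).append(t)

weighted_sum = area_sum = 0.0
for band, temps in bands.items():
    weight = math.cos(math.radians(band * 10))
    weighted_sum += weight * sum(temps) / len(temps)
    area_sum += weight
gridded = weighted_sum / area_sum

print(f"naive: {naive:.2f} C, gridded: {gridded:.2f} C")
```

Real analyses (such as the GISTEMP and HadCRUT products) use far more sophisticated gridding and homogenisation, but the principle is the same: a simple average of raw stations misrepresents the planet.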
Analysis of the data – presenting ‘warming planet’ outcome
The analysis shows that warming occurred nearly everywhere over the 20th century. Rainfall trends are weaker and less consistent because rainfall is inherently a more local phenomenon. (See the Synthesis Report of the Intergovernmental Panel on Climate Change – IPCC.)
Managing criticism – showing data is robust
The interpretation of the data set drew heavy criticism from sceptics in the SET community and from vested-interest groups, particularly in the USA. Most of the debate centred on the processes applied to make the raw data comparable: had these processes been manipulated to produce a specific result?
The critics took the same data and applied different scientific methodologies. Their results were qualitatively the same as the original outcomes, differing only in detail: the world has warmed, virtually everywhere, at an accelerating rate over the period of record. In other words, the conclusions are robust, independent of method. (For further information, see the Hockey Stick Controversy.)
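This idea – different methods, same qualitative conclusion – can be sketched with two independent trend estimators applied to the same series. The example below uses made-up annual anomalies and compares ordinary least squares with the outlier-resistant Theil–Sen estimator; both recover a warming trend of the same sign and similar size:

```python
import random
from itertools import combinations
from statistics import median

# Hypothetical annual anomalies: a warming trend of 0.012 C/yr plus noise.
random.seed(0)
data = [(y, 0.012 * y + random.gauss(0, 0.2)) for y in range(50)]

def ols_slope(pts):
    """Ordinary least-squares slope."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))

def theil_sen_slope(pts):
    """Median of all pairwise slopes - a robust, outlier-resistant estimator."""
    return median((y2 - y1) / (x2 - x1)
                  for (x1, y1), (x2, y2) in combinations(pts, 2))

# Two independent methodologies; the qualitative conclusion (warming) agrees.
print(ols_slope(data), theil_sen_slope(data))
```

If a conclusion survives a change of estimator, it is unlikely to be an artefact of any one method – which is exactly what the critics’ re-analyses demonstrated.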
Acknowledging uncertainties vs providing facts
Good scientists are careful people. They check and recheck their results, and then let other people check their results. Scientists are obligated to follow rules of evidence, including acknowledging uncertainties. This is often confusing for the layperson as it means a lot of the information comes with ‘ifs and buts’.
The IPCC has focused on communicating uncertainty clearly, using words that are reserved for that purpose alone. Scholes says the IPCC guidelines note that phrases such as ‘with high certainty’ have an exact, defined meaning and accompany all high-level statements.
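The calibrated scale is effectively a lookup table. The terms and ranges below follow the IPCC’s published uncertainty guidance for its likelihood language; the code around them is just an illustrative sketch:

```python
# IPCC calibrated likelihood language (per the AR5 uncertainty guidance note):
# each term maps to a defined probability range, in percent.
LIKELIHOOD = {
    "virtually certain":      (99, 100),
    "very likely":            (90, 100),
    "likely":                 (66, 100),
    "about as likely as not": (33, 66),
    "unlikely":               (0, 33),
    "very unlikely":          (0, 10),
    "exceptionally unlikely": (0, 1),
}

def terms_for(probability):
    """Return every calibrated term whose range covers a probability (%)."""
    return [term for term, (lo, hi) in LIKELIHOOD.items()
            if lo <= probability <= hi]

print(terms_for(95))
```

So when an IPCC report says an outcome is ‘very likely’, it is making a quantitative claim (at least 90% probability), not expressing vague optimism.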
The public, unused to the concept of scientific uncertainty, can interpret this as scientists being less confident than they actually are – or less sure than laypeople, who never qualify their statements with confidence terms. Scientists must learn to deal with this while communicating clearly and accurately. Is it the SET community’s responsibility to explain the scientific practice of acknowledging uncertainty, or does the public need to make more effort to understand the concept?
Detection, attribution, and impact – differentiating natural from human causes
Consider that we have a time series of climate observations. The first stage is change detection – is something unusual happening? Has there been a statistically significant change in the system?
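Detection can be sketched as a significance test on a trend. The example below uses a synthetic series and a simple permutation test: if random re-orderings of the data almost never produce as steep a slope as the observed ordering, something statistically unusual is happening (the data and the 0.05 threshold are illustrative choices, not the methods actually used in climate detection studies):

```python
import random

# Synthetic annual temperature anomalies (C): noise plus a 0.01 C/yr trend.
random.seed(1)
years = list(range(60))
series = [0.01 * y + random.gauss(0, 0.15) for y in years]

def trend(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Permutation test: shuffle the series many times and ask how often chance
# alone produces a slope as steep as the one actually observed.
observed = trend(years, series)
count = 0
for _ in range(2000):
    shuffled = random.sample(series, len(series))
    if abs(trend(years, shuffled)) >= abs(observed):
        count += 1
p_value = count / 2000

print(f"slope: {observed:.4f} C/yr, p ~ {p_value:.3f}")
```

A small p-value answers only the detection question – something changed; it says nothing yet about why, which is the separate problem of attribution.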
The next stage is attribution – do we have reasonable statistical confidence (80–100% certainty) that we know why the change has occurred? Is it accounted for by natural variation, or is there a human-attributable effect as well? Attribution is a more challenging and complex problem because there are usually many causes for any observed effect.
Scientists have been able to exclude known natural causes of climate variation (such as solar and orbital variation, and volcanoes). There is also a positive correlation between climate trends and the proposed human-induced cause – specifically, greenhouse gas concentrations – a necessary but not sufficient condition for establishing cause.
More than 30 groups worldwide have run global climate simulations, providing both a reconstruction of what has been observed and an explanation of why it happened. Scholes says they were able to apportion cause among various sources – anthropogenic (caused by humans) versus natural – and natural variation accounts for only a small fraction of the total. While debate continues around the details, the human influence on climate has been established beyond reasonable doubt.
The focus is now on the impact of climate change and what can be done about it. Climate change-related impacts have been detected worldwide in almost every area, from biodiversity and food security to water resources. Attribution specifically to human-caused climate change is work in progress in many cases.
A common question is whether an extreme weather event (like a tornado or tropical storm) can be attributed to climate change. Because climate is the statistical average of weather, Scholes says it’s hard to attribute any single event to climate change; it takes a sequence of such events to be confidently classed as change. Since extreme events are – by definition – rare, this requires a long record – hundreds of years – to establish with high confidence. Currently we don’t have records long enough to make this claim.
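Some simple arithmetic shows why short records can’t settle the question. Suppose – purely hypothetically – an extreme event has a 1-in-100 chance of occurring in any given year, and climate change doubles that chance. Over a 30-year record, the two climates produce barely distinguishable event counts:

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for a binomial(n, p) count of events."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# Chance of seeing the event at least twice in a 30-year record...
p_old = prob_at_least(2, 30, 0.01)  # ...under the old climate (p = 0.01/yr)
p_new = prob_at_least(2, 30, 0.02)  # ...with the frequency doubled (p = 0.02/yr)

print(p_old, p_new)
```

Even a doubling of the event’s frequency leaves both probabilities low and heavily overlapping; only a much longer record can separate the two climates with high confidence.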
Is a source trustworthy?
How can the public assess the validity of claims when they receive conflicting information? Scholes sees scientists as brokers in this process. He says it’s about showing people how to separate the legitimate from the misguided, mischievous, and malicious.
Following are Scholes’ guiding questions:
· Does the source have qualifications and a track record in the specific field they are commenting on? Several denialists have apparently impressive credentials or hold positions of note but, if you look at their actual area of research, it lies outside the domain of the debate.
· Do they offer verifiable evidence, or just assertions? Do they publish in peer-reviewed journals? Check whether the data is in the public domain and in peer-reviewed journals. Self-references, websites, newspaper articles, and untraceable references do not count as verifiable evidence.
· Do they repeat long-disproven claims and conspiracy theories? Climate denialists tend to stick to their message regardless of the strength of evidence refuting it.
Scholes explained that there is now a move to use ‘deep transdisciplinary’ approaches in order to turn climate concern into action. This sees scientists working with people who have a different epistemology (theory of knowledge or world view). Examples include representatives from various faiths and people involved in indigenous knowledge systems.
The idea is that if you want to affect behavioural change, you need to work within the conceptual framework used by the target community. It also recognises that human decisions rest not only on evidence, but also on beliefs and feelings.
Taking time and effort to sift through information?
Global warming exists and it’s largely caused by human activities. While these fundamentals have been agreed upon, there is still strong debate among scientists around the details of climate change.
There is also a great deal of ongoing research, but it isn’t getting through to the full range of stakeholders – business, civil society, and the public. With the advent of fake news and post-truth, among other things, there is a clear need for people to apply analytical rigour when assessing information. This takes effort and education… so will it actually happen?
An option is to look to reputable entities that already do this sifting work. The IPCC is the leading international body for the assessment of climate change. It was established by the United Nations Environment Programme and the World Meteorological Organization in 1988. It provides a clear scientific view on the current state of knowledge in climate change and its potential environmental and socio-economic impacts, as evaluated by thousands of specialist scientists drawn from all over the world, and subject to careful and transparent review processes. You can’t ask for much more.