What is the correct value of Climate Sensitivity?



The trouble is it is often not possible to compare past temperatures with today. All the proxies can show is how temperatures changed in the past. The step of translating a change in a proxy value into a number of degrees is often difficult and subject to bias.

But there is an overlap between instrumental data and the temperature reconstructions (e.g. Mann et al. or Marcott et al.). Why can't one use the overlap to estimate the reconstructed data relative to the instrumental data?

I think you need to look at climate as a chaotic system where there are natural long term variations that are not a result of any "forcing". Obviously, external forcings will add to the long term variations and cause the system boundaries to move. IMO, it is wrong to assume that every wiggle over periods of 100-1000 years is "caused" by something.

I can see chaos causing significant climatic variations on the order of decades (although most of that can be accounted for by certain factors such as AMO, PDO or ENSO). But being significant on the order of centuries? I don't buy it. If you want to offer me a physical mechanism as to why this would occur, I would be interested. Also, in my models I allow for error, so I just need sufficient data to overcome the uncertainty that is due to this chaos.



But there is an overlap between instrumental data and the temperature reconstructions (e.g. Mann et al. or Marcott et al.). Why can't one use the overlap to estimate the reconstructed data relative to the instrumental data?

Marcott is an extremely low resolution series and the overlap consists of maybe two or three data points. You cannot do correlation with so little data. With Mann the correlation works when there is high resolution data in the modern period, however, this is only true for the proxies that go back to the 1500s. The proxies for the medieval period do not have useful data in the modern period and, as a result, it is not possible to calibrate. Mann is deceptive because it hides these differences and makes it look like the entire reconstruction is equally valid.
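To make the calibration issue concrete, here is a minimal sketch in Python, with entirely synthetic data standing in for a proxy and the instrumental record. With a long overlap the regression recovers the proxy's scale and offset well; with Marcott-style resolution the same fit would rest on only two or three points and the recovered scale would be essentially meaningless. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "instrumental" record (annual anomalies) and a proxy that tracks
# temperature with an unknown scale and offset plus measurement noise.
years = np.arange(1850, 2013)
instrumental = 0.005 * (years - 1850) + 0.1 * rng.standard_normal(years.size)
proxy = 2.0 * instrumental + 5.0 + 0.2 * rng.standard_normal(years.size)

# Calibration over the overlap: least-squares fit proxy = a*T + b,
# then invert to express the proxy in temperature units.
a, b = np.polyfit(instrumental, proxy, 1)
proxy_as_temp = (proxy - b) / a

# With ~160 overlap points the recovered scale is close to the true value of 2.
print(a)
```

The same fit on two or three overlap points would have an enormous standard error on `a`, which is the point made about Marcott above.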

I can see chaos causing significant climatic variations on the order of decades (although most of that can be accounted for by certain factors such as AMO, PDO or ENSO). But being significant on the order of centuries? I don't buy it. If you want to offer me a physical mechanism as to why this would occur, I would be interested. Also, in my models I allow for error, so I just need sufficient data to overcome the uncertainty that is due to this chaos.

Auto-correlation, i.e. hot weather creates conditions for more hot weather, or cold weather creates conditions for more cold weather. We see that with ice ages, where expanding snow cover triggers a positive feedback that results in more snow cover. It is not a stretch to suggest that lesser-magnitude effects can cause smaller 'random variations' over the course of centuries. Especially when we don't really understand the ocean cycles. That said, I am speculating, but it is at least a plausible physical mechanism, and I have read some literature that makes the point by looking at the behavior of river systems (no link right now). Edited by TimG
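For what it's worth, this "red noise" idea can be sketched numerically: a strongly autocorrelated AR(1) process with no external forcing at all still wanders on century timescales. The persistence parameter and noise scale below are illustrative choices, not estimates from any climate data.

```python
import numpy as np

rng = np.random.default_rng(42)

# AR(1) "red noise": each year's anomaly is mostly a carry-over of the
# previous year's, plus a small random shock. No forcing anywhere.
phi, sigma = 0.98, 0.05
n_years = 2000
T = np.zeros(n_years)
for t in range(1, n_years):
    T[t] = phi * T[t - 1] + sigma * rng.standard_normal()

# Century-scale means wander noticeably even though the process is pure noise.
century_means = T.reshape(20, 100).mean(axis=1)
print(century_means.max() - century_means.min())
```

With phi = 0.98 the decorrelation time is roughly 1/(1 - phi) = 50 years, which is why adjacent centuries can sit at quite different mean levels by chance.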

Marcott is an extremely low resolution series and the overlap consists of maybe two or three data points. You cannot do correlation with so little data. With Mann the correlation works when there is high resolution data in the modern period, however, this is only true for the proxies that go back to the 1500s. The proxies for the medieval period do not have useful data in the modern period and, as a result, it is not possible to calibrate. Mann is deceptive because it hides these differences and makes it look like the entire reconstruction is equally valid.

That could be problematic. Perhaps it is better to do a separate time series analysis for each temperature data set, rather than try to combine them and just use weighting techniques.

Auto-correlation, i.e. hot weather creates conditions for more hot weather, or cold weather creates conditions for more cold weather.

Auto-correlation doesn't imply chaotic temperature variations that are significant on the order of centuries. The rate at which the Earth tends to equilibrium seems to me to be too fast for that (the deep oceans seem to warm up relative to surface temperatures on the order of decades).

We see that with ice ages where expanding snow cover triggers a positive feedback that results in more snow cover.

Yes there is a positive feedback, but ice ages are ultimately caused by Milankovitch cycles. The ice ages aren't a result of chaos.

I was actually trying to point out a similar thing to some alarmists on a blog last week. They were convinced that James Hansen's paleoclimate estimates of climate sensitivity were indisputable and unbiased. I tried to point out some basic things, such as that Hansen doesn't take Milankovitch cycles into account in the Pleistocene estimates, doesn't take the effects of plate tectonics into account in the Cenozoic estimates, and does other things like arbitrarily rounding numbers up for more alarmist conclusions or performing hand-wavy, non-existent statistical analysis. Anyway, some social justice warrior without a science background came along, accused me of libel and started deleting any further posts I made (she could do this since she was a 'happiness engineer' with the blog company), so I gave up on that conversation.

Especially when we don't really understand the ocean cycles.

Well, your longest timescale of variation for ocean cycles is probably the ~70-year Atlantic Multidecadal Oscillation. And from my regressions and Fourier analysis, the ~70-year AMO cycle seems to be more than just autocorrelation. And the impact of the AMO is at most ~ +/- 0.1 C. I don't really see any chaotic variation on longer time scales with higher magnitudes.
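The kind of Fourier analysis described here can be sketched on synthetic data: a ~70-year oscillation of the stated ~0.1 C amplitude, buried in comparable noise, shows up cleanly as a periodogram peak over a 140-year record. Everything below is illustrative, not the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 140-year series: a 70-year cycle of 0.1 C amplitude plus noise.
years = np.arange(1870, 2010)
signal = 0.1 * np.sin(2 * np.pi * years / 70.0)
series = signal + 0.1 * rng.standard_normal(years.size)

# Periodogram via the real FFT; skip frequency 0 (the mean) when peak-finding.
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)
peak_period = 1.0 / freqs[1 + np.argmax(power[1:])]
print(peak_period)  # ~70 years
```

With only two full cycles in the record, the peak is real but the frequency resolution is coarse, which is one reason regression against an AMO index is a useful cross-check.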


Yesterday, I finished compiling a data set that covers years 900-2012 and have forcing data that goes back to year 0. I won't go into the details of the methodology because you guys keep telling me that my posts are too long. Anyway, I could not get statistically significant results and my estimates of climate sensitivity were consistently less than 1 C. Perhaps I should give up on trying to use a combination of paleo and instrumental data and just stick with the instrumental.

One more thing, maybe my models need an autocorrelation term for temperature to account for thermal inertia. In many cases my residual appears to possess autocorrelation (and the assumption of the statistical model is that the errors are uncorrelated); an autocorrelation term will help eliminate this problem.
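The point about residual autocorrelation can be illustrated on a toy system: when temperature has thermal inertia, plain OLS on the forcing alone leaves strongly autocorrelated residuals (violating the uncorrelated-errors assumption), while adding a lagged-temperature regressor absorbs the inertia. All parameters below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy system with thermal inertia: T responds to forcing with an AR(1) memory.
n = 500
forcing = np.cumsum(0.01 * rng.standard_normal(n))
T = np.zeros(n)
for t in range(1, n):
    T[t] = 0.8 * T[t - 1] + 0.5 * forcing[t] + 0.05 * rng.standard_normal()

def lag1_corr(r):
    """Lag-1 autocorrelation of a residual series."""
    return np.corrcoef(r[:-1], r[1:])[0, 1]

# Plain OLS on forcing alone: residuals inherit the inertia.
X1 = np.column_stack([np.ones(n), forcing])
r1 = T - X1 @ np.linalg.lstsq(X1, T, rcond=None)[0]

# Add lagged temperature as a regressor: residual autocorrelation collapses.
X2 = np.column_stack([np.ones(n - 1), forcing[1:], T[:-1]])
r2 = T[1:] - X2 @ np.linalg.lstsq(X2, T[1:], rcond=None)[0]

print(lag1_corr(r1), lag1_corr(r2))
```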

Edit: One of the reasons I might not be able to get good results despite my attempts is because the numeric precision of double floating point numbers is not good enough. I'll see if I can use quad floating point numbers.
Edit 2: Actually, there was a slight error in my code, so I still might be able to get decent results with the 900-2012 time series. Although computational rounding error is becoming a big enough issue that I may want to avoid longer time series.
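On the rounding-error point: one standard alternative to moving to quad floats is compensated (Kahan) summation, which recovers most of the lost low-order bits in long accumulations. A minimal sketch of the effect:

```python
# Naive vs. compensated (Kahan) summation in double precision.
def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    s, c = 0.0, 0.0          # running sum and compensation term
    for x in xs:
        y = x - c            # re-inject the low-order bits lost last step
        t = s + y
        c = (t - s) - y      # what got rounded away this step
        s = t
    return s

# Adding a tiny increment to a large value a million times: each increment is
# below half an ulp of 1.0, so naive summation drops it entirely.
xs = [1.0] + [1e-16] * 1_000_000
print(naive_sum(xs))  # 1.0 — the increments vanish
print(kahan_sum(xs))  # ~1.0000000001
```

Whether this helps in practice depends on whether the precision loss is in the accumulation (where Kahan helps) or in ill-conditioned linear algebra (where it does not).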

Edited by -1=e^ipi

Perhaps I should give up on trying to use a combination of paleo and instrumental data and just stick with the instrumental.

It is probably best not to splice data. Splicing data is a risky process at the best of times. Data splices are frequently abused by climate scientists like Mann.

My latest result for the 95% confidence interval for equilibrium climate sensitivity is [1.09,2.27]C with a best estimate of 1.57 C. This is using the monthly data from 1871-2012 and with decay times varying between 0.5 years and 128 years.

I've pretty much exhausted all possible explanations/excuses for why I might get such a low estimate of climate sensitivity. I estimate a much lower level of relative solar forcing to GHG forcing than Van Hateren (which I argued should be the case since the effect of solar forcing is greater in equatorial regions) and I properly take SO2 aerosols into account. I detrend ENSO and AMO in a way that avoids much specification error.

It is a bit strange to get a lower sensitivity estimate than Van Hateren despite having a lower estimate of relative solar forcing (to GHG forcing), but there are 2 possible reasons. Van Hateren starts his forcing data in 1800, whereas I start my forcing data in 1700; solar irradiance and greenhouse gases increased slightly from 1700-1800, which means that some of the warming observed from 1870-2012 is due to what occurred in the 1700s, and Van Hateren doesn't take the 1700s into account. Another reason is that I have a longer data set (1870-2012 vs 1880-2005); the 1870s were relatively warm and the last decade was relatively cool, thereby giving a lower estimate of climate sensitivity.

I'm just going to conclude that I don't see a climate sensitivity greater than 3C in the instrumental data. I've tried to see it, I've tried to account for all relevant variables, and I've tried many model specifications. But as far as I can tell, an equilibrium climate sensitivity greater than 3C is excluded by the instrumental data. This means that the upper half of the IPCC's climate sensitivity range is excluded by the instrumental data. Interestingly, this upper half is what is predicted by the GCMs, which the IPCC then uses to calculate warming based on emission scenarios.

Edited by -1=e^ipi

I'm wondering if it is possible to reconcile these low estimates (say 1.5-2.0C) with what I said in post #14:


Also, another reason why I'm really skeptical of these very low (less than 2°C) estimates of climate sensitivity is because they would suggest a huge divergence between what is expected by basic physics and what is observed.

http://www.globalwar...warming eqn.pdf

The warming effect from CO2 alone suggests an equilibrium climate sensitivity of 1.15 °C.

Adding the water vapour feedback increases this by about 0.59 °C to 1.74 °C.

Edit: this is a mistake; the water vapour feedback would be an additional 1.14 °C, not 0.59 °C, which brings sensitivity to 2.29 °C.

The next main feedback is the lapse rate feedback, which is negative (and amplified by the positive water vapour feedback). From what I have read in the literature, a lapse rate feedback of -0.5C is plausible. After that, there is the cloud albedo feedback, which might be negative, but most indications suggest it is slightly positive.

I guess if your main feedbacks are water vapour and lapse rate, and that all other feedbacks are negligible on the order of 100 years, then an equilibrium climate sensitivity as low as 1.5C might be plausible.
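The bookkeeping above can be checked against the standard feedback-gain relation ECS = ECS0 / (1 - f), where f is the summed feedback factor. The sketch below uses only the numbers from the posts (1.15 C without feedbacks, 2.29 C with water vapour, the -0.5 C lapse-rate figure); the feedback factors are derived from them, not independently estimated.

```python
# Feedback bookkeeping via the gain formula ECS = ECS0 / (1 - f).
ecs_no_feedback = 1.15   # CO2 alone, from the post above
ecs_with_wv = 2.29       # with water vapour feedback, from the post above

# Implied water vapour feedback factor: roughly 0.5, i.e. a doubled response.
f_wv = 1.0 - ecs_no_feedback / ecs_with_wv
print(round(f_wv, 2))

# Fold in the lapse-rate feedback, expressed as an equivalent factor chosen
# so that the additive -0.5 C from the post is reproduced.
ecs_target = ecs_with_wv - 0.5
f_lapse = 1.0 - ecs_no_feedback / ecs_target - f_wv

print(round(ecs_no_feedback / (1.0 - (f_wv + f_lapse)), 2))  # 1.79
```

Note that feedback factors combine additively inside the gain formula while the resulting temperature contributions do not, which is why the 0.59 °C vs 1.14 °C correction above matters.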

Edited by -1=e^ipi

If the instrumental data excludes equilibrium climate sensitivities above 3 C, then this leads to the question of why the GCMs are consistently overestimating climate sensitivity. Two possibilities are 'noble cause corruption' and confirmation bias. With respect to noble cause corruption, look at what Rajendra Pachauri, the IPCC chair from 2002-2015, said in his resignation letter:

"For me the protection of Planet Earth, the survival of all species and sustainability of our ecosystems is more than a mission. It is my religion and my dharma."

http://judithcurry.com/2015/03/03/ipcc-in-transition/

So the chair of the IPCC viewed climate change as a religion...


If the instrumental data excludes equilibrium climate sensitivities above 3 C, then this leads to the question why the GCMs are consistently overestimating climate sensitivity.

You also need to think about how models are "validated" by comparing them to existing models. Models which come to different conclusions are presumed to be "wrong" and since all models depend on assumptions it is not hard to find assumptions that can "explain" why an outlier is "wrong". This puts tremendous pressure on modellers to tweak their models until they are consistent with existing models.

It is a classic case of groupthink that is reinforced by factors like "noble cause corruption".

Edited by TimG

You also need to think about how models are "validated" by comparing them to existing models. Models which come to different conclusions are presumed to be "wrong" and since all models depend on assumptions it is not hard to find assumptions that can "explain" why an outlier is "wrong". This puts tremendous pressure on modellers to tweak their models until they are consistent with existing models.

Yes. Definitely there is a confirmation bias towards obtaining models that agree with the predictions made by other models.


Okay, so I thought I would try looking at a different time period (the Holocene) to see longer term effects of climate change.

So I'll use the Marcott Holocene temperature data: http://www.sciencemag.org/content/suppl/2013/03/07/339.6124.1198.DC1/Marcott.SM.database.S1.xlsx

Over the Holocene (up to pre-industrial times) there are primarily 3 factors that explain temperature changes: greenhouse gases, changes in solar irradiance, and the effects of the Milankovitch cycles.

For greenhouse gases, CO2, CH4 and N2O are the only relevant ones. Over the Holocene period, greenhouse gas reconstructions from ice core samples at Dome C, Antarctica seem to be the most accurate.
CO2 data: ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/antarctica2015co2.xls
CH4 data: ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc-ch4-2008.txt

N2O data: ftp://ftp.ncdc.noaa.gov/pub/data/paleo/icecore/antarctica/epica_domec/edc-n2o-2010-800k.txt

I interpolate each GHG with a cubic spline at each of the years of the temperature reconstruction and then calculate the greenhouse gas forcing.
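The interpolation-then-forcing step looks roughly like the sketch below. The ice-core sample points are made up for illustration (the real inputs are the linked series), and 5.35·ln(C/C0) is the standard simplified expression for CO2 radiative forcing, used here as an assumption about the forcing calculation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative ice-core CO2 sample points (year, ppm) — NOT the real data.
ice_core_years = np.array([-8000.0, -6000.0, -4000.0, -2000.0, 0.0, 1860.0])
ice_core_co2 = np.array([260.0, 263.0, 265.0, 272.0, 277.0, 287.0])

spline = CubicSpline(ice_core_years, ice_core_co2)

# Evaluate on the 20-year grid of the temperature reconstruction (-6120..1860).
recon_years = np.arange(-6120.0, 1861.0, 20.0)
co2 = spline(recon_years)

# Simplified CO2 forcing relative to a 278 ppm reference, in W/m^2.
forcing = 5.35 * np.log(co2 / 278.0)
print(forcing[-1])
```

A cubic spline passes through every sample point, so sparse, noisy ice-core measurements can produce wiggles between knots; that is a modelling choice worth checking against linear interpolation.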

For solar irradiance, I use the 9400 year Steinhilber reconstruction.
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/climate_forcing/solar_variability/steinhilber2012.txt

For Milankovitch cycles, I get eccentricity, obliquity and precession data from http://biocycle.atmos.colostate.edu/shiny/Milankovitch/. I interpolate the eccentricity, obliquity and precession data for each of the years of the temperature reconstruction. Using this I can calculate the daily direct solar irradiation at any latitude and any time of year, for each year. Using MATLAB code I wrote, I obtain 3 parameters that I will use to represent the effects of Milankovitch cycles:

- Average Annual Solar Irradiance for Earth. This is proportional to 1/sqrt(1 - eccentricity^2). If the Earth has a more eccentric orbit it receives more sunlight in a year.

- Standard Deviation in Average Annual Solar Irradiance. This depends primarily on obliquity; as the Earth's axis becomes more tilted, the poles receive more sunlight and the distribution of sunlight is more evenly spread over the surface of the Earth. A more even spread of sunlight should lead to a higher global temperature.

- A factor that represents the difference in summer insolation between the Northern Hemisphere and the Southern Hemisphere. This depends primarily on precession. As the Northern Hemisphere is more sensitive than the Southern Hemisphere, this factor should have a positive effect on temperature.
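The eccentricity factor in the first bullet is easy to sanity-check: annual-mean insolation scales as 1/sqrt(1 - e^2), so even the largest swings in Earth's eccentricity change the total sunlight received by only a fraction of a percent. The eccentricity values below are standard approximate figures, not taken from the linked data.

```python
import math

def annual_insolation_factor(eccentricity):
    """Relative annual-mean insolation vs. a circular orbit: 1/sqrt(1 - e^2)."""
    return 1.0 / math.sqrt(1.0 - eccentricity ** 2)

e_now = 0.0167   # approximate present-day eccentricity
e_max = 0.058    # approximate maximum over the Milankovitch cycles

print(annual_insolation_factor(e_now))  # ~1.0001
print(annual_insolation_factor(e_max))  # ~1.0017
```

This is why the obliquity and precession terms, which redistribute sunlight rather than change its total, tend to matter more for the reconstruction than eccentricity alone.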

The temperature data is one measurement per 20 years. As a result, I should avoid using decay times much shorter than 20 years. In addition, the Earth reaches Earth System Sensitivity equilibrium in roughly 2000 years (this is what I gather from various papers I have read), so I should avoid decay times much longer than 1000 years. So, following the approach of Van Hateren, I'll go with decay times of 16, 64, 256 and 1024 years for the impulse response function.

I should avoid the post-industrial era, as other factors explaining temperature become relevant, and I should avoid overlap with my monthly data that starts in the 1870s, so I'll use 1860 as my last data point. The solar data begins 9400 years ago and I should leave at least 1024 years between the beginning of the solar data and the first data point I use in my estimation of climate sensitivity, since 1024 years is my longest decay time. Therefore, I'll use -6120 as my first data point. Year -6120 is during the Holocene optimum, so it is arguably roughly in equilibrium, and this gives me 8000 years of data in total. I'll use an earlier point in time for the start of the forcing data (the interpolated solar forcing data starts in -7440 and I'll start the GHG & Milankovitch forcings in -9340).

My model is:

T(Y) = Sum(s = 1 to 4; γs * [GHGs(Y) + β1*Ss(Y) + β2*M1s(Y) + β3*M2s(Y) + β4*M3s(Y)]) + β0 + errorY

where, for each forcing F (GHG, solar S, and the three Milankovitch factors M1, M2, M3), Fs(Y) is that forcing accumulated over 20-year intervals from the start of its forcing data up to year Y-1 and passed through the impulse-response component with decay time s (the GHG and Milankovitch forcings start in -9340, the solar forcing in -7440).

It's basically the same model I had before, except using longer 20 year time intervals, omitting all variability indices, and using 3 Milankovitch 'forcings' (M1, M2, and M3) and no Aerosol forcings.

Note that the estimate of climate sensitivity I get from the above model should correspond to the Earth System Sensitivity, which is larger than the Equilibrium Climate Sensitivity.
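A sketch of that impulse-response structure (not the actual estimation code): each forcing is convolved with normalised exponentials of the four decay times on the 20-year grid, producing the regressors whose weights γs are then fit. The step forcing and record length below are illustrative.

```python
import numpy as np

dt = 20.0                                      # 20-year grid
decay_times = np.array([16.0, 64.0, 256.0, 1024.0])

def response_components(forcing, dt, taus):
    """Per decay time, convolve the forcing with a normalised exp(-t/tau)."""
    n = forcing.size
    t = np.arange(5 * n) * dt                  # long support so the kernel sums to ~1
    comps = []
    for tau in taus:
        kernel = np.exp(-t / tau)
        kernel /= kernel.sum()                 # discrete normalisation
        comps.append(np.convolve(forcing, kernel)[:n])
    return np.array(comps)

# Toy forcing: a sustained step increase, as from a persistent GHG change.
forcing = np.zeros(100)                        # 2000 years of 20-year steps
forcing[25:] = 1.0

comps = response_components(forcing, dt, decay_times)
# The 16-year component has fully equilibrated by the end of the record,
# while the 1024-year component is still well short of saturation.
print(comps[0, -1], comps[3, -1])
```

This also illustrates why the longest decay time dominates the difference between transient response and Earth System Sensitivity: its component is still rising long after the others have saturated.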

Anyway, computing the model gives me a 95% confidence interval of [0.60,7.63]C with a best estimate of 2.14 C.

Edited by -1=e^ipi


Sorry - correction. Re-evaluated the formulas and ran it through my greatest resource - Deep Thought;

THE ANSWER IS ........... 42!

:lol:

Is trolling a thread allowed according to the forum rules?

Is loading up a general forum with reams of mathematical formulas much different than trolling? I don't care whether or not you get all this published or want it published and peer-reviewed. I want to know what it has to do with the real life climate model that we all have to live in? That has been the primary criticism of Richard Lindzen and the rest of the very small minority of climate researchers claiming low carbon sensitivity - why don't the new models match historic evidence?

Right now, glaciers are melting...even in the Antarctic...CO2 levels are accelerating at the same time we are being told that human carbon emissions have fallen...other negative effects of carbon like ocean acidification, aren't addressed, and rainforests and every other non-human related ecological niches are in rapid decline....hence the accelerating rates of species extinctions. So, we may have lots of environmental crises all happening together at the same time....besides rising CO2 levels. So, whether our atmosphere is higher/or lower than previous estimates on sensitivity to carbon, will it even matter that much and why?


:lol:

Is loading up a general forum with reams of mathematical formulas much different than trolling? I don't care whether or not you get all this published or want it published and peer-reviewed. I want to know what it has to do with the real life climate model that we all have to live in? That has been the primary criticism of Richard Lindzen and the rest of the very small minority of climate researchers claiming low carbon sensitivity - why don't the new models match historic evidence?

Right now, glaciers are melting...even in the Antarctic...CO2 levels are accelerating at the same time we are being told that human carbon emissions have fallen...other negative effects of carbon like ocean acidification, aren't addressed, and rainforests and every other non-human related ecological niches are in rapid decline....hence the accelerating rates of species extinctions. So, we may have lots of environmental crises all happening together at the same time....besides rising CO2 levels. So, whether our atmosphere is higher/or lower than previous estimates on sensitivity to carbon, will it even matter that much and why?

I don't know if it's trolling or not, but it certainly seems to make most people just ignore the entire thread.


Right now, glaciers are melting...even in the Antarctic...CO2 levels are accelerating at the same time we are being told that human carbon emissions have fallen...other negative effects of carbon like ocean acidification, aren't addressed, and rainforests and every other non-human related ecological niches are in rapid decline....hence the accelerating rates of species extinctions.

Yet no one can demonstrate that any of these things is a concern without using computer models, which have a horrible track record when it comes to predicting simple things like temperature change. Why should we believe that the predictions of doom are any more credible when dealing with much more complex phenomena?

Yet no one can demonstrate that any of these things is a concern without using computer models, which have a horrible track record when it comes to predicting simple things like temperature change. Why should we believe that the predictions of doom are any more credible when dealing with much more complex phenomena?

I don't think anyone needs some long-winded, nonsensical, endless ream of formulae to figure out that ice, such as in glaciers, melts only when the temperature rises above freezing. A lot of people get their water from glacier-fed rivers. As they melt faster than they can be restocked due to those warmer temps, you get flooding followed by drought. Take a look at Pakistan, for instance.


Is loading up a general forum with reams of mathematical formulas much different than trolling?

Yes.

at the same time we are being told that human carbon emissions have fallen...

CO2 emissions have not fallen.

other negative effects of carbon like ocean acidification

Ocean acidification occurs due to CO2 being taken out of the atmosphere and dissolved in the oceans. So there is a tradeoff here: the more ocean acidification, the less global warming. Ocean uptake of atmospheric CO2 has a decay time on the order of centuries. Current ocean pH is about 8.14. By 2100, it might be around 7.82 at worst. A significant change, but not the end of the world. Ocean pH has been far lower in the past, when atmospheric CO2 levels were many times greater than they are now, yet life thrived.
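As a quick check on those pH figures, using only the definition of pH (the 2100 value above is the post's stated worst case, not something computed here):

```python
import math

# pH is -log10 of hydrogen ion concentration, so a drop from 8.14 to 7.82
# corresponds to roughly a doubling of [H+] — still well on the basic side
# of neutral (pH 7.0).
ph_now, ph_2100 = 8.14, 7.82
h_ratio = 10 ** (ph_now - ph_2100)
print(round(h_ratio, 2))  # ~2.09x increase in [H+]
```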

rainforests and every other non-human related ecological niches are in rapid decline....

They aren't in decline due to fossil fuel emissions.

So, whether our atmosphere is higher/or lower than previous estimates on sensitivity to carbon, will it even matter that much and why?

Why wouldn't it matter? If climate sensitivity is, say, 30% lower than what the IPCC is claiming, then that means there will be 30% less warming, which means that the appropriate policy response can be quite different. And the IPCC's claims, and any mainstream climate science position on climate sensitivity, are still orders of magnitude lower than what idiot alarmists like Obama think it is.

"The planet will boil over"

Let's take a very high climate sensitivity (say 4.5C, the upper range of the IPCC's confidence interval). 'Pre-industrial' atmospheric CO2 is roughly 278 ppm (and the global 'pre-industrial' temperature is roughly 14C). So in order to reach a global average temperature of 100 C you would need 2^((100-14)/4.5)*278 = 1.57*10^8 ppm. In other words, you would need 157 atmospheres of CO2 (not 157 times current CO2 levels, but 157 Earth atmospheres consisting entirely of CO2) to reach the boiling temperature of water. But this wouldn't be enough to boil water because as atmospheric pressure increases, the temperature at which water boils increases. So to actually 'boil' the Earth, you would need to reach a temperature of ~374 C, which would require 2^((374-14)/4.5)*278 = 3.36*10^26 ppm, or 3.36 * 10^20 atmospheres of CO2.
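The arithmetic above is easy to reproduce: with sensitivity S, each doubling of CO2 adds S degrees, so reaching temperature T from the 14 C / 278 ppm baseline needs 278 · 2^((T - 14)/S) ppm. A small sketch reproducing the two figures:

```python
# Back-of-the-envelope CO2 required to reach a target temperature, given an
# assumed sensitivity per doubling (baseline values as stated above).
def co2_needed_ppm(target_temp_c, sensitivity_c=4.5, base_temp_c=14.0,
                   base_co2_ppm=278.0):
    return base_co2_ppm * 2.0 ** ((target_temp_c - base_temp_c) / sensitivity_c)

print(f"{co2_needed_ppm(100.0):.2e}")  # ~1.57e+08 ppm, i.e. ~157 atmospheres of CO2
print(f"{co2_needed_ppm(374.0):.2e}")  # ~3.36e+26 ppm
```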

Realistically, you can't go beyond 21% atmospheric CO2 because only 21% of our atmosphere is O2. And you couldn't even get close to 21% because there simply are not enough fossil fuel reserves. My understanding is that there might be enough fossil fuel reserves to maybe reach 2000 ppm (or 0.2% CO2), which is significant, and would have devastating consequences for the environment, but is orders of magnitude different from what some 'environmentalists' would have you believe.


A lot of people get their water from glacial fed rivers. As they melt faster than they can be restocked due to those warmer temps., you get flooding followed by drought. Take a look at Pakistan for instance.

Even if what you claim is true (it is a gross misrepresentation of the issue), this is an example where an obvious technology exists (build dams to collect water). This means this issue does not equate to a "prediction of doom". Edited by TimG

Even if what you claim is true (it is a gross misrepresentation of the issue), this is an example where an obvious technology exists (build dams to collect water). This means this issue does not equate to a "prediction of doom".

Sounds good. Oh, except we look pretty stupid sitting there by our dam after the glacier has disappeared and there is no more water to be caught by our dam.


Sounds good. Oh, except we look pretty stupid sitting there by our dam after the glacier has disappeared and there is no more water to be caught by our dam.

Glaciers don't provide fossilized water (if they did, it would be a non-renewable resource). They store water during the rainy season as ice and release it during the dry season. I.e., they do exactly what a dam could do.

Glaciers don't provide fossilized water (if they did, it would be a non-renewable resource). They store water during the rainy season as ice and release it during the dry season. I.e., they do exactly what a dam could do.

They store water on top when there is precipitation and cold enough temps to freeze it. They emit water from the bottom when it's warm enough to melt. When you unbalance this process and have more time of the latter and less of the former, eventually it's bye-bye glacier. Which is already occurring and increasing sea levels.

