What is the correct value of Climate Sensitivity?


I was thinking about how I should incorporate variations in the length of day into the model. The problem with length of day variations is that they affect climate through a very different mechanism than most of the other factors I am considering, so there is little reason to believe that climate should respond to length of day variation on the same time scale as to the other factors. The length of day fluctuations result from a transfer of angular momentum between the Earth's core+mantle and the Earth's crust (the tidal effects of the moon are also extending the length of the day by 1.4 ms per century, but that is more or less a constant effect). Whereas the other major factors that could cause climate change (changes in CO2, changes in solar irradiance, volcanic aerosols, etc.) act by affecting the radiative balance of the Earth, and are therefore expected to have similar response time scales.

So I thought I would try to see if the Earth's temperatures respond to length of day variation, and if so, what the time scale of the response is (and whether it is comparable to the time scale of the response to changes in the other factors).

To do this, I first have to detrend the temperature data for the other factors. I decided to do this using a relatively simple model:

dT(t)/dt = A*dln(CO2(t))/dt + B*dTSI(t)/dt + C*dAOD(t)/dt

+ k*(A*dln(CO2(t-1))/dt + B*dTSI(t-1)/dt + C*dAOD(t-1)/dt)
+ k^2*(A*dln(CO2(t-2))/dt + B*dTSI(t-2)/dt + C*dAOD(t-2)/dt)
+ k^3*(A*dln(CO2(t-3))/dt + B*dTSI(t-3)/dt + C*dAOD(t-3)/dt)

+ ... all the way to infinity

where T is the temperature, CO2 is the atmospheric CO2 concentration, TSI is the total solar irradiance, and AOD is the aerosol optical depth at 550 nm (basically a proxy for volcanic aerosols). The idea of this model is that temperature has a simple exponential impulse response to a change in forcing (with decay time -1/ln(k)). Now I know that an exponential response is unrealistic, as I explained earlier on this page. But I am just trying to perform a simple detrending so that I can see whether the response time to length of day variations is similar. The above equation can be simplified to:

dT(t)/dt = A*dln(CO2(t))/dt + B*dTSI(t)/dt + C*dAOD(t)/dt + k*dT(t-1)/dt

This is very similar to the model of post #90, but might be a bit more intuitive.
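A minimal sketch of how this simplified regression could be run (assuming annual numpy arrays temp, co2, tsi and aod are already loaded; the names and layout here are placeholders, not the code actually used):

```python
import numpy as np

def fit_simple_model(temp, co2, tsi, aod):
    """Regress dT(t) on dln(CO2), dTSI, dAOD and the lagged dT(t-1)."""
    dT = np.diff(temp)              # dT(t)/dt with unit (annual) time steps
    dlnCO2 = np.diff(np.log(co2))
    dTSI = np.diff(tsi)
    dAOD = np.diff(aod)

    # Align so row t uses dT(t-1) as a regressor: drop the first difference.
    y = dT[1:]
    X = np.column_stack([dlnCO2[1:], dTSI[1:], dAOD[1:], dT[:-1]])

    # Ordinary least squares; coefs = [A, B, C, k] in the notation above.
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coefs
    return coefs, residual
```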

Anyway, I tried to use the data from 1850-2008 to estimate the parameters (though I am mostly interested in the residual).

My 95% confidence intervals for A and k are 1.21 +/- 0.40 and 0.56 +/- 0.12 respectively.

I can get a rough estimate of equilibrium climate sensitivity from this, since the ECS should be A/(1-k)*ln(2) in this model. This gives a climate sensitivity of (1.91 +/- 0.82) C. Of course, one of the reasons for such a low ECS is that the assumption of an exponential response gives an unrealistically short decay time (~1.72 years).
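As a rough check of that arithmetic, the point estimate and interval can be reproduced by Monte Carlo, under the (questionable) assumption that the estimates of A and k are independent Gaussians; this is only a sketch of the error propagation, not the original calculation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
A = rng.normal(1.21, 0.40 / 1.96, n)   # 95% half-width -> standard deviation
k = rng.normal(0.56, 0.12 / 1.96, n)

ecs = A / (1.0 - k) * np.log(2.0)      # ECS = A/(1-k)*ln(2)
print(np.median(ecs), np.percentile(ecs, [2.5, 97.5]))
# Point estimate: 1.21/(1 - 0.56)*ln(2) ~ 1.91 C, as above.
```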

In any case, after the above regression, the residual is the change in temperature with the other factors removed. Next I detrend the length of day variation data so that it has zero average slope over the 1850-2008 period. I can then perform the regression:

dR(t)/dt = D*dLOD(t)/dt + h*dR(t-1)/dt

where R is the residual of the first regression, and LOD is the variation in the length of day. I want to test whether h is similar to k (because if it is, then I can treat the length of day similarly to how I treat solar irradiance, CO2 or volcanic aerosols). I get a 95% confidence interval for h of -0.10 +/- 0.15. So not only is h different from k, but h might even be negative. A negative h indicates that one cannot treat the climate response to length of day variation as an exponential response.

Some scientists have suggested that changes in temperature lag changes in LOD by 6-7 years, and that changes in temperature move in the opposite direction to changes in the LOD. Perhaps the reason is this: an increase in the LOD means the angular momentum of the Earth's crust is reduced, which means that the rotational energy of the Earth's crust is reduced. By conservation of energy, perhaps this 'missing' energy is turned into heat, which travels slowly from the bottom of the Earth's crust to the surface. This could take a long time, perhaps even 6-7 years (I don't know though).

If this is the case, then perhaps I should simply lag LOD by 6 years.

As a final test, I compared how well LOD fits the residual for 5, 6 and 7 year lags. The 6 year lag gives a better fit than 5 or 7.
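Something like the following could run that comparison (a sketch only: resid and lod are placeholder annual arrays, and the AR term in h is dropped so the lags can be compared on raw fit):

```python
import numpy as np

def lag_fit_r2(resid, lod, lag):
    """R^2 of resid(t) ~ D*lod(t - lag) + const over the overlapping years."""
    y = resid[lag:]
    x = lod[:len(lod) - lag]
    X = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ coef) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# for lag in (5, 6, 7):
#     print(lag, lag_fit_r2(resid, lod, lag))
```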

Edited by -1=e^ipi


I know what you mean. I have first hand experience with corruption in science.

Yes, but where is the documented evidence of the apparently globe-spanning corruption that has gripped climate science? If you've seen it first hand, shouldn't you be reporting it? Would you, if it had some sort of tangible impact on your life other than an annoyance? Take fishermen for example. Like many, I've suffered actual economic consequences because of corrupt science. I've even had opportunity stripped away when DFO cited climate change concerns which, for all I know, were false and fraudulently presented as a justification to corruptly reallocate opportunity towards more powerful, influential fishing sectors.

Given how widespread the corruption of climate science is said to be, I'm left wondering why we are not seeing climate scientists being sentenced like other corrupt scientists on a daily basis. Take this charmer for example - Former DFO employee charged with fraud to be sentenced in November.

Scruton is the former head of science for the Department of Fisheries and Oceans in St. John's. He has in excess of 30 years of experience as a research scientist and research manager working with federal and provincial governments and as an environmental consultant. He led and managed a number of large national and international programs on fish habitat research and conducted collaborative research with the utility sector, pulp and paper companies, fishing industry, and transportation sector.

This guy is probably single handedly responsible for killing more fish than any number of fishermen that might have relied on those fish for their livelihoods.

So how many other sciences are contaminated in damagingly similar fashion, like the science of economics or political science for example?

Edited by eyeball

@ TimG -

Are you fine with the 1959-2014 time period? Data should be fairly reliable during this time period and less subject to 'adjustments'. I'm trying to get enough monthly data sets to estimate climate sensitivity using a time series approach that combines the Van Hateren, CSALT and my earlier approaches. I think I have most of the data sets:

temperature:

http://www.cru.uea.ac.uk/cru/data/temperature/HadCRUT4-gl.dat

CO2: ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt

Volcanic Aerosols: http://data.giss.nasa.gov/modelforce/strataer/tau.line_2012.12.txt

Atmospheric Angular Momentum: http://www.esrl.noaa.gov/psd/data/correlation/glaam.data.scaled

Southern Oscillation Index: http://www.ncdc.noaa.gov/teleconnections/enso/indicators/soi/data.csv

Pacific Decadal Oscillation Index: http://www.ncdc.noaa.gov/teleconnections/pdo/data.csv

North Atlantic Oscillation (NAO) index: http://www.esrl.noaa.gov/psd/data/correlation/nao.data

Though I am stuck on a few things. I don't think monthly length of day data going back to 1959 is available (I can find monthly data starting in 1990 at the earliest). Monthly instrumental solar irradiance data starts in 1978, though there are lots of reconstructions and land-based measurements. Monthly solar flux data is available through http://www.esrl.noaa.gov/psd/data/correlation/solar.data. I'm also a bit unsure what to do about nitrous oxide and methane, as they are also important greenhouse gases; I might have to track down some data sets...

http://www.epa.gov/climatechange/science/indicators/ghg/ghg-concentrations.html. Also, the volcanic aerosol data I have ends in September 2012.


I guess I should use CH4 and N2O data as well. Using the IPCC's radiative forcing formulas, I calculate that only about 78% of the change in radiative forcing from greenhouse gases over 1959-2014 was due to CO2 (excluding water vapour, of course).

Edited by -1=e^ipi

As has been stated over and over again... climate and weather are separate things.

It's pretty clear to me that people are reposting bad information on purpose now. There's no way that that message could not have sunk in yet.

I will make my response clear. In populated areas of the world there has been no perceptible change. The arguments about "climate change" thus focus on the very high latitudes where very few people who write and keep records live. Proxy data, prone to just about every form of manipulation and distortion known to man, substitutes for both measurements and human observation. Weather observations are a check on hypotheses concerning climate.

Okay, so I've computed some greenhouse gas forcing time series data from 1954-2013 to use in a future regression. I am only considering CO2, CH4 and N2O. Here is my methodology:

For methane, I use instrumental data from Cape Grim, Tasmania, Australia for 1985-2013 (ftp://ftp.cmdl.noaa.gov/data/trace_gases/ch4/flask/surface/ch4_cgo_surface-flask_1_ccgg_month.txt) and, for 1959-1984, ice core data from Antarctica and Greenland (http://cdiac.ornl.gov/ftp/trends/atm_meth/EthCH498B.txt). There is a slight overlap in the data sets from 1985-1992. I take the difference of the two data sets over 1985-1992 and perform a linear regression. The results of the linear regression suggest that in 1985, the ice core data is ~24.12 ppb greater than the Cape Grim data. To make the data sets comparable, I lower all the ice core data by 24.12 ppb.

Then, I take all of the annual data (monthly methane data is available for 1985-2013, but I want the data to be consistent over both periods and I wish to remove seasonal effects, so I ignore it) and perform cubic-spline interpolation to get estimates of seasonally detrended monthly atmospheric methane.

For N2O, I use global instrumental data from 2000-2014 (ftp://ftp.cmdl.noaa.gov/hats/n2o/insituGCs/CATS/global/insitu_global_N2O.txt) and Antarctic snowpack data from 1958-2004 (ftp://daac.ornl.gov/data/global_climate/global_N_cycle/data/global_N_perturbations.txt). Unfortunately, the data prior to 2000 is very inaccurate and has missing years, so I perform a quadratic regression over the snowpack data to get estimates of atmospheric N2O during those years. As I did with methane, I look at the overlapping years (2000-2004) and perform a linear regression of the difference in the estimates. The linear regression suggests that in 2000, the snowpack data is 0.0165 ppb greater than the global instrumental data. To make the data sets comparable, I therefore decrease the snowpack estimates by 0.0165 ppb. Finally, as with the methane data, I take these annual estimates and perform cubic-spline interpolation to get estimates of monthly seasonally detrended atmospheric N2O.
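A sketch of that splice-and-interpolate step (the function and variable names are placeholders; record A is the older proxy series, record B the newer instrumental one):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def splice_and_monthly(years_a, vals_a, years_b, vals_b):
    """Shift record A onto record B using their overlap, then spline monthly."""
    overlap = np.intersect1d(years_a, years_b)
    diff = (vals_a[np.isin(years_a, overlap)]
            - vals_b[np.isin(years_b, overlap)])
    # Linear trend of the offset over the overlap; evaluate it at the first
    # overlap year to get the additive correction (as described above).
    slope, intercept = np.polyfit(overlap, diff, 1)
    vals_a = vals_a - (slope * overlap[0] + intercept)

    # Merge: use A before the overlap, B from the overlap onward.
    keep = years_a < overlap[0]
    years = np.concatenate([years_a[keep], years_b])
    vals = np.concatenate([vals_a[keep], vals_b])

    # Cubic spline through the annual values, sampled monthly.
    spline = CubicSpline(years, vals)
    monthly_t = np.arange(years[0], years[-1], 1.0 / 12.0)
    return monthly_t, spline(monthly_t)
```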

Next, I take the monthly detrended N2O and methane data plus the monthly detrended atmospheric CO2 data (ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt) and compute the combined estimated change in GHG forcing relative to 1750 for these 3 gases using the IPCC's 2001 formula (http://www.esrl.noaa.gov/gmd/aggi/aggi.html).
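For reference, the IPCC (2001) simplified expressions behind that NOAA AGGI page look like the following (CO2 in ppm, CH4 and N2O in ppb, forcing in W/m^2; the 1750 baseline values used here are approximate):

```python
import numpy as np

C0, M0, N0 = 278.0, 722.0, 270.0   # approx. 1750 CO2 (ppm), CH4, N2O (ppb)

def band_overlap(M, N):
    """CH4/N2O band-overlap term from the IPCC 2001 formulas."""
    return 0.47 * np.log(1 + 2.01e-5 * (M * N) ** 0.75
                         + 5.31e-15 * M * (M * N) ** 1.52)

def ghg_forcing(C, M, N):
    """Combined CO2 + CH4 + N2O forcing change relative to 1750 (W/m^2)."""
    f_co2 = 5.35 * np.log(C / C0)
    f_ch4 = (0.036 * (np.sqrt(M) - np.sqrt(M0))
             - (band_overlap(M, N0) - band_overlap(M0, N0)))
    f_n2o = (0.12 * (np.sqrt(N) - np.sqrt(N0))
             - (band_overlap(M0, N) - band_overlap(M0, N0)))
    return f_co2 + f_ch4 + f_n2o
```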

Edited by -1=e^ipi

Update on the data compilation:

For total solar irradiance, I take the annual total solar irradiance for 1958-2012 (http://lasp.colorado.edu/lisird/tss/historical_tsi.csv?&time%3E=1958-06-29&time%3C=2013-06-29). I then interpolate to get estimates of monthly total solar irradiance. In addition, I use monthly solar flux data (http://www.esrl.noaa.gov/psd/data/correlation/solar.data) as well, since solar flux might be more strongly correlated with cosmic rays (also, this data set is monthly, so it might catch things that the annual data will miss).

For annual length of day data, I detrend it by 1.7 ms per century since this is the long run trend (http://en.wikipedia.org/wiki/Tidal_acceleration#Quantitative_description_of_the_Earth.E2.80.93Moon_case). I then lag the data by 6 years for reasons explained earlier. Finally, I interpolate to get monthly data.
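That preprocessing is simple enough to sketch (assuming annual placeholder arrays years and lod_ms; the trend removal is linear in time, and the lag convention is an assumption on my part):

```python
import numpy as np

def prep_lod(years, lod_ms, lag_years=6):
    """Remove the ~1.7 ms/century secular trend, then lag the series 6 years."""
    trend = 1.7 * (years - years[0]) / 100.0   # 1.7 ms per century
    detrended = lod_ms - trend
    # Lagging: the LOD value for year y is used to explain temperature at y + lag.
    return years + lag_years, detrended
```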

For AAM, I detrend any seasonal effects. I did the same with the SOI. Now I could use the MEI (Multivariate ENSO Index, http://www.esrl.noaa.gov/psd/data/correlation/mei.data) instead of the SOI. The MEI supposedly better represents ENSO. The problem with the MEI is that it uses surface temperature data in its index, so it might be correlated with global warming. Looking at the data set, there does seem to be a clear positive trend. I have detrended the MEI (both for seasonal variation and for the long term, although the MEI claims to be seasonally detrended already), but it is probably better to use the SOI, since it does not use temperature data and has no long-term trend.

I have also seasonally detrended the PDO index. Like the MEI, the PDO does use temperature data, so there is some concern that it might be correlated with global warming. However, if I look at the trend between similar phases of the Pacific decadal oscillation (say from 1954-2013), there does not seem to be a long-term trend in the PDO, so using the seasonally detrended data is probably fine. Lastly, I have seasonally detrended the AMO index (which, like the SOI, has no long-term trend).
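The seasonal detrending itself can be as simple as subtracting each calendar month's long-run mean; a sketch (placeholder names, assuming a monthly series that starts in January):

```python
import numpy as np

def deseasonalize(monthly, start_month=1):
    """Remove the mean annual cycle from a monthly series (1 = January)."""
    monthly = np.asarray(monthly, dtype=float)
    months = (np.arange(len(monthly)) + start_month - 1) % 12
    out = monthly.copy()
    for m in range(12):
        out[months == m] -= monthly[months == m].mean()
    return out
```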

I also recalculated the adjusted HadCRUT4 temperature data for 1959-2014 (accounting for the linear trend). The volcanic aerosol data doesn't need to be detrended.

Anyway, this means I have sufficient monthly data from January 1959 – September 2012 to perform calculations.


I will make my response clear. In populated areas of the world there has been no perceptible change. The arguments about "climate change" thus focus on the very high latitudes where very few people who write and keep records live. Proxy data, prone to just about every form of manipulation and distortion known to man, substitutes for both measurements and human observation. Weather observations are a check on hypotheses concerning climate.

1. Temperatures at all points of the globe need to be measured, not just populated areas. This is very obvious so the point is just noise.

2. Proxy data is 'prone' to manipulation - yes - but no one is seriously suggesting that that has happened. It's a conspiracy theory, so this is also noise.

3. Weather observations are not "a check" any more than anecdotal evidence is a check on real data. Again, this is obvious to anybody who understands science so it's noise.

There are actually serious questions about the economic response to climate change - why don't you focus on that instead of generating noise?


2. Proxy data is 'prone' to manipulation - yes - but no one is seriously suggesting that that has happened. It's a conspiracy theory, so this is also noise.

People who are extremely skilled with statistics have looked at some papers using proxies and have shown they are dishonest junk. The fact that you refuse to acknowledge the evidence does not make it a conspiracy theory (it says more about you than about the people pointing out the problems). The fact that the scientific establishment defends papers which are clearly junk undermines their credibility when it comes to other questions (i.e. if they are too stupid/ideological to acknowledge obvious junk, then why should their judgement be trusted when it comes to more complex questions like climate models?)

Edited by TimG

1. Temperatures at all points of the globe need to be measured, not just populated areas. This is very obvious so the point is just noise.

It's impossible to measure temperatures at all of the infinitely many points on the globe. You just need sufficient global coverage (which we have); then you can interpolate to get an estimate. Satellite data helps too.


I’ll try to go over the basic idea of the model. Suppose initially the climate is in equilibrium, that there is a small change in forcing dF at time t = 0, and that over time the climate decays exponentially towards the new equilibrium with a decay time of τ. Then the temperature as a function of time after t = 0 will be

T(t) = T(0) + (γdF)(1 – exp(-t/τ)), where γ is a constant that gives the equilibrium change in temperature when multiplied with the change in forcing dF.

For a month that occurs after time t = 0, one can calculate the average temperature of that month. If the start of month i is start_i and the end of the month is end_i, then the average temperature of this month is:

Integral(t = start_i to end_i; [T(0) + (γdF)(1 – exp(-t/τ))] dt)/(end_i - start_i)
= T(0) + (γdF)(1 + τ/(end_i - start_i)*(exp(-end_i/τ) – exp(-start_i/τ)))

For simplicity, let’s call μ(start_i, end_i, dF) the average temperature of month i given the forcing dF at t = 0.

Let dμ(start_i, end_i, end_{i+1}, dF) be the change in average temperature between two consecutive months i and i+1. If start_i occurs after t = 0, then dμ will equal μ(end_i, end_{i+1}, dF) - μ(start_i, end_i, dF). If end_{i+1} occurs before t = 0, then dμ = 0. And if end_i occurs at t = 0, then dμ = μ(end_i, end_{i+1}, dF) - T(0). Note that dμ is proportional to both γ and dF. Therefore, we can write dμ(start_i, end_i, end_{i+1}, dF) = γ*dF*η(start_i, end_i, end_{i+1}), where η(start_i, end_i, end_{i+1}) depends on the month i.

To generalize things a bit, let’s suppose that the change in forcing dF doesn’t happen at t = 0 but at the end of month j. Then the change in temperature between two consecutive months i and i+1 due to a change in forcing that occurred between months j and j+1 is γ*dF*η(start_i – end_j, end_i – end_j, end_{i+1} – end_j). To make the notation a bit simpler, let ρ(i,j) = ρ(start_i, end_i, end_{i+1}, end_j) = η(start_i – end_j, end_i – end_j, end_{i+1} – end_j).

Now let’s make things more realistic. Suppose that there is still only one feedback response to a change in forcing and that this feedback response is exponential with decay time τ. Suppose that for every month the forcing is constant, but that between consecutive months, say month j and month j+1, the forcing changes by dF_j. Suppose that we know dF_j for all integers j and we wish to calculate the change in temperature from month i to month i+1. Then the change in temperature from month i to month i+1 will be γ*(dF_i*ρ(i,i) + dF_{i-1}*ρ(i,i-1) + dF_{i-2}*ρ(i,i-2) + ...). That is, the change in temperature from month i to month i+1 will depend on the forcing change from month i to month i+1 as well as all earlier forcing changes.
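With equal-length months, ρ(i,j) depends only on the lag i - j, so the sum above is a causal convolution. A sketch of the kernel and the implied temperature changes (illustrative numpy code under that equal-month assumption; times in years, month length Δ = 1/12):

```python
import numpy as np

def mean_response(a, b, tau):
    """Mean of 1 - exp(-t/tau) over [a, b]: the month-average above, per unit γ*dF."""
    return 1.0 + tau / (b - a) * (np.exp(-b / tau) - np.exp(-a / tau))

def rho_kernel(tau, n_months, delta=1.0 / 12.0):
    """rho(lag) for lag = 0 .. n_months-1, forcing at the end of month j."""
    rho = np.empty(n_months)
    rho[0] = mean_response(0.0, delta, tau)          # the end_i-at-t=0 case
    for L in range(1, n_months):
        rho[L] = (mean_response(L * delta, (L + 1) * delta, tau)
                  - mean_response((L - 1) * delta, L * delta, tau))
    return rho

def predicted_dT(dF, tau, gamma, delta=1.0 / 12.0):
    """Monthly temperature changes implied by forcing increments dF."""
    rho = rho_kernel(tau, len(dF), delta)
    return gamma * np.convolve(dF, rho)[: len(dF)]   # causal convolution
```

As a sanity check, the kernel telescopes: the cumulative sum of rho tends to 1, so a unit step in forcing eventually produces a total temperature change of γ.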

Now let’s make this more realistic. We do not have data going all the way back to infinity. Suppose we have data that only goes back as far as month m. Note that for all temperature changes after month m, since the response decays exponentially towards equilibrium, for all practical purposes we can represent all the forcing changes that occurred before month m as an unknown characteristic forcing dG that occurs at the end of month m-1. This gives that the change in temperature from month i to month i+1 is γ*(dF_i*ρ(i,i) + ... + dF_m*ρ(i,m) + dG*ρ(i,m-1)).

Now let’s make things even more realistic. Realistically, there is not just a single exponential response with a single decay time, but many exponential response times with different decay times (maybe even a continuum of decay times). Following the suggestion of Van Hateren, it might be reasonable to approximate the true impulse response function with a finite number of exponential response functions.

The fastest decay time of a response is approximately half a year, so let τ1 = 0.5 years be the first decay time. In addition, Van Hateren suggested that a factor of 4 between consecutive decay times might be reasonable. So let τ2 = 2 years, τ3 = 8 years, τ4 = 32 years and τ5 = 128 years be the next 4 decay times. Finally, since 1959-2012 only covers a span of 54 years, it probably isn’t a good idea to go beyond a decay time of 128 years, since the data won’t be long enough to distinguish between longer decay times; so let’s just stick with the above 5 decay times. This coverage of decay times from 0.5 years to 128 years should be sufficient to give a reasonable approximation of the equilibrium climate sensitivity. It is also worth pointing out that the decay time of ocean uptake of additional atmospheric CO2 is on the order of 100 years, which suggests that a 128 year decay time should be enough to estimate the equilibrium climate sensitivity.

Let γ_s be the γ associated with each decay time τ_s, ρ_s be the ρ associated with decay time τ_s, and dG_s be the characteristic change in forcing at the end of month m-1 for decay time τ_s. Then the change in temperature from month i to month i+1 becomes:

Sum(s = 1 to 5; γ_s*(dF_i*ρ_s(i,i) + ... + dF_m*ρ_s(i,m) + dG_s*ρ_s(i,m-1)))

Now let’s make this more realistic. In reality, there is not just one factor that can cause a change in radiative forcing for Earth, but many. Furthermore, different types of radiative forcings may influence the Earth by different amounts. For example, an increase in solar forcing has a stronger effect in equatorial regions than polar regions, whereas an increase in CO2 forcing has a more even effect across the globe. Also, solar irradiance may have a strong negative correlation with cosmic rays. Therefore, one may want to allow for the possibility that different types of forcings have different magnitudes of impact on global temperatures.

I’ll assume that there are 4 types of factors that can change radiative forcing over the 1959-2012 time period: greenhouse gases, solar irradiance, cosmic rays and volcanic aerosols. Therefore, I have to introduce 3 unknown constants (call them Solar, Cosmic and Volcano) to allow for the possibility that global temperatures may respond differently to these 4 types of forcings. Let dGHG_j be the dF_j due to greenhouse gases, dS_j be the dF_j due to changes in solar irradiance, dC_j be the dF_j due to changes in cosmic rays, and dV_j be the dF_j due to changes in volcanic aerosols. Then the change in temperature from month i to month i+1 becomes:

Sum(s = 1 to 5; γ_s*(dGHG_i*ρ_s(i,i) + ... + dGHG_m*ρ_s(i,m)
+ Solar*(dS_i*ρ_s(i,i) + ... + dS_m*ρ_s(i,m))
+ Cosmic*(dC_i*ρ_s(i,i) + ... + dC_m*ρ_s(i,m))
+ Volcano*(dV_i*ρ_s(i,i) + ... + dV_m*ρ_s(i,m))
+ dG_s*ρ_s(i,m-1)))

Now let’s try to account for natural variation to make things more realistic. Changes in radiative forcing are not the only reason why global temperatures may change; global temperatures may also change due to natural variation. To try to account for natural variation, I will use indices such as the variation in the length of day (LOD), atmospheric angular momentum (AAM), the Southern Oscillation Index (SOI), the Pacific Decadal Oscillation Index (PDO) and the North Atlantic Oscillation Index (NAO). If one adds these factors, then the change in temperature from month i to month i+1 becomes:

Sum(s = 1 to 5; γ_s*(dGHG_i*ρ_s(i,i) + ... + dGHG_m*ρ_s(i,m)
+ Solar*(dS_i*ρ_s(i,i) + ... + dS_m*ρ_s(i,m))
+ Cosmic*(dC_i*ρ_s(i,i) + ... + dC_m*ρ_s(i,m))
+ Volcano*(dV_i*ρ_s(i,i) + ... + dV_m*ρ_s(i,m))
+ dG_s*ρ_s(i,m-1)))
+ β_1*dLOD_i + β_2*dAAM_i + β_3*dSOI_i + β_4*dPDO_i + β_5*dNAO_i

where β_1, β_2, β_3, β_4, and β_5 are unknown constants, dLOD_i is the change in the length of day from month i to month i+1, dAAM_i the change in atmospheric angular momentum, dSOI_i the change in the Southern Oscillation Index, dPDO_i the change in the Pacific Decadal Oscillation Index, and dNAO_i the change in the North Atlantic Oscillation Index (all from month i to month i+1).

This equation has 18 unknowns (distributed over 30 terms). One can try to estimate the 18 unknowns by turning this equation into a regression equation and performing a non-linear regression.
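To make the fitting step concrete, here is a sketch of how the 18-parameter model could be handed to a generic nonlinear least-squares routine (it reuses rho_kernel from the earlier sketch; all array names are placeholders, and the GHG weight is fixed at 1):

```python
import numpy as np
from scipy.optimize import least_squares

TAUS = [0.5, 2.0, 8.0, 32.0, 128.0]   # decay times tau_1..tau_5 in years

def model_dT(params, dGHG, dS, dC, dV, nat):
    """Predicted month-to-month temperature changes for the 18-parameter model."""
    gammas = params[0:5]
    solar, cosmic, volcano = params[5:8]
    dG = params[8:13]
    betas = params[13:18]
    n = len(dGHG)
    # Forcing increments seen by every response channel (GHG weight fixed at 1).
    dF = dGHG + solar * dS + cosmic * dC + volcano * dV
    out = np.zeros(n)
    for s, tau in enumerate(TAUS):
        # Prepend dG_s: the characteristic forcing at the end of month m-1,
        # i.e. one month before the sample starts.
        dF_ext = np.concatenate([[dG[s]], dF])
        rho = rho_kernel(tau, n + 1)               # from the earlier sketch
        out += gammas[s] * np.convolve(dF_ext, rho)[1 : n + 1]
    return out + nat @ betas                        # nat: n x 5 matrix (dLOD..dNAO)

def residuals(params, dT_obs, *series):
    return dT_obs - model_dT(params, *series)

# x0 = initial guess (e.g. from a linear regression), then:
# fit = least_squares(residuals, x0, args=(dT_obs, dGHG, dS, dC, dV, nat))
```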

Edited by -1=e^ipi

Okay, I tried the above regression and got nonsense.

I tried simplifying it by removing the flux data (since it is strongly correlated with irradiance), but that didn't help much.

Since I'm using the Gauss-Newton method to estimate the model, maybe the problem is that my initial guess is too far off (currently I first perform a linear regression to get an initial guess).


A few thoughts:

1. Why use months as a unit? They are so inconvenient... for pretty much everything. They are of non-constant length, the number of them in a year is neither a power of 2 nor a power of 10, they do not correspond to a cyclical timescale that is relevant for climate, etc. Just start with years.

2.

The fastest decay time of a response is approximately half a month, so let τ1 = 0.5 years be the first decay time.

Why did you go from half a month to half a year?

3.

In addition, Van Hateren suggested that a factor of 4 between consecutive decay times might be reasonable enough.

This still seems entirely too arbitrary, with no physical or mathematical justification. I've never seen this kind of approach used to approximate any other kind of function, and I've seen a LOT of this kind of stuff. Either use decay time constants that are based on real times associated with known physical phenomena, or use a linear combination of (the first n terms of) a full set of orthogonal functions that can describe any function in that space.

Okay, I tried the above regression and got nonsense.

Consider a typical cyclical climate phenomenon that has a 20 year period... you will never get a good fit through said cycle by summing exponentials of a few different time constants (0.5 years, 2 years, 8 years, 32 years), no matter what, because that set of exponentials does not fill the space you are working in. And climate data has a whole range of cycles over varying timescales. So of course trying to fit the data with a couple of exponentials, which can't really describe periodic phenomena, will result in nonsense. Nor would your chosen set of functions be a good fit for anything with a different decay timescale: consider something with a 20 year decay timescale... you will never get a decent fit through it with the above set of 5 exponentials.

The set of functions you should probably start from are the trigonometric functions (sin and cos), which nicely describe periodic phenomena, and the hyperbolics (sinh and cosh), which nicely describe decay and runaway phenomena. As for the periods/time constants: start with 1 year and fill the entire function space, i.e. all integer values of years from 1 to n, where n is the longest timescale you want to consider. This of course gives you too many parameters to fit, but at least there would be some mathematical basis to expect a result that isn't nonsense.

Edited by Bonam

Don't you guys ever get bored regurgitating this nonsense!

I haven't, but when I peruse this thread now I certainly see a one track mind. Remind you of Nero at all...

I suppose they think somebody is actually interested in these Xs and Os games. Thankfully real scientists are actually working on the problem.

It's 42.

The bozos will still be playing Xs and Os as the ship sails off the edge of the world. Thankfully there are real scientists working to head that off.

Thanks for your contribution. I'm sure the real scientists appreciate it. You can now go back to your regularly scheduled bullying of nerds in the schoolyard. Or maybe go back to the pipes thread (or better, a physics textbook) and figure out energy conservation before trying your hand at climate?

Edited by Bonam

Thanks for your contribution. I'm sure the real scientists appreciate it. You can now go back to your regularly scheduled bullying of nerds in the schoolyard. Or maybe go back to the pipes thread (or better, a physics textbook) and figure out energy conservation before trying your hand at climate?

I've already read the odd physics textbook back in college, thanks.


People who are extremely skilled with statistics have looked at some papers using proxies and have shown they are dishonest junk.

You're overstating the concerns. As of our last exchange on this matter, it was one piece of data in one paper on proxies that was objected to, and responded to - although it was dismissed. Even if I agreed with you on that objection, it's not enough for someone with more casual interest (like jbg) to say that the data is 'prone to manipulation' - in other words, to imply that the data is not to be trusted.


You're overstating the concerns. As of our last exchange on this matter, it was one piece of data in one paper on proxies that was objected to, and responded to - although it was dismissed.

I picked one paper among 10+ to discuss because the issues were so blatant that someone with a basic level of statistics should have been able to understand why it was junk. You choose to minimize the issues with the paper for reasons I will not speculate on; however, despite your denials, the paper is objectively junk and its conclusions are not supported by the data in the paper. It is not a case where reasonable people can choose to disagree. Numerous other papers have much more esoteric issues that are just as bad. The bottom line is proxies ARE prone to manipulation and cannot really be trusted as long as the scientific establishment lets political considerations trump science.

Some of these issues are covered in much more depth in this book: http://www.amazon.ca/The-Hockey-Stick-Illusion-Climategate/dp/1906768358

However, I suspect you will refuse to look at the book and instead rely on reviews by people who never read it to convince yourself that the issues are "minor" and not evidence that the field is rife with influential scientists who decide what conclusions they want and manipulate the data until it gives them those conclusions.

Edited by TimG

I picked one paper among 10+ to discuss because the issues were so blatant that someone with a basic level of statistics should have been able to understand why it was junk.

"junk" is a complete overstatement. One point on one paper, which was arguable I admit but doesn't turn over an entire science built on millions of data points.

The bottom line is proxies ARE prone to manipulation ...

That sentence isn't important unless you add context. It's like saying bank accounts can be hacked... are they being hacked or not...

However, I suspect you will refuse to look at the book ...

A book named after a thoroughly discredited conspiracy theory? No thanks.

Again - it's very easy for JBG to wave his hands around and run in circles saying "data can be manipulated", but the danger is that other ridiculous people who vote may be taken in by ridiculous assertions.


A book named after a thoroughly discredited conspiracy theory? No thanks.

All that comment does is show that you are a blind ideologue who has no interest in understanding the complexities of the debate. That makes your opinion on JBG's comments quite worthless.

Here is what people with an open mind have to say on it:

Many reviews have praised the book for its content, writing style and accessibility. Climatologist Judith Curry called The Hockey Stick Illusion "a well documented and well written book on the subject of the 'hockey [stick] wars.' It is required reading for anyone wanting to understand the blogosphere climate skeptics and particularly the climate auditors," such as Steve McIntyre and Ross McKitrick. She wrote that the book "presents a well reasoned and well documented argument".[12] Among those also praising the book was S. Fred Singer, who called it "probably the best book about the Hockey Stick."[13] Ross McKitrick wrote that "The best place to start when learning about the hockey stick is Andrew Montford’s superb book The Hockey Stick Illusion." [14] A number of other newspaper and magazine articles have praised the book, including reviews in Geoscientist,[15] Quadrant,[16] The Telegraph,[3][17] The Spectator,[1] Prospect magazine,[18] The Courier,[19] and the National Post.[20]

If you really worry about "what people think when they read JBG comments" you should start by trying to understand the issues instead of dismissing anyone who questions the quality of climate research as a conspiracy theorist.

Edited by TimG

On the contrary. It shows that you will stand with fringe science if it scores political points.

There is no "fringe" science. There is good science and bad science. I care about the difference and judge people by the science supporting their claims. You, OTOH, judge people by their credentials and do not care if the science is good or bad.

Thank you for your input Bonam.

1. Why use months as a unit? They are so inconvenient... for pretty much everything.

A few reasons.

Firstly, I want enough data to be able to estimate the model, and good data starts in 1959. The Mauna Loa CO2 data starts around this time (if I want CO2 data before this, I would have to use Law Dome ice core data, which is a lot more questionable). The atmospheric angular momentum, Southern Oscillation Index, solar magnetic flux and North Atlantic Oscillation index data sets start in the 50's. In some cases there are reconstructions, but those reconstructions are more questionable. Also, before the 50's, temperature data becomes a lot more questionable. The temperature data during the world wars isn't very good, and the global temperature data from this time period has been adjusted many times.

Secondly, if I want to get a good estimate of the magnitude of a response with a half year decay time, then I probably want data with a higher temporal resolution than half a year. This is especially true if there are events like Pinatubo. Lastly, most of the data sets are given by month, so monthly data makes sense. Of course one has to take into account the unequal number of days, but that isn't impossible.

Though I might want to try the model using annual data from, say, 1876-2012. One thing I was thinking of doing is to use multiple time series data sets (say monthly from 1959-2012, annual from 1876-1958, annual from 1650-1876, and the entire Pleistocene record over the past 400,000 years) and estimate the parameters for each time period. The idea is that the later time periods would be better at estimating the responses with short decay times, whereas the earlier time periods would be better at estimating the responses with longer decay times. Provided the data sets are independent, one could combine the results over all time periods to get a decent estimate of the impulse response function. Of course, it is better to resolve the current issues I am having before complicating things.

Why did you go from half a month to half a year?

Sorry, that was a typo.

This still seems entirely too arbitrarily, with no physical or mathematical justification. I've never seen this kind of approach used to approximate any other kind of function, and I've seen a LOT of this kind of stuff. Either use decay time constants that are based on real times associated with known physical phenomena, or use a linear combination of (the first n terms of) a full set of orthogonal functions that can describe any function in that space.

I wouldn't say it has no physical justification. Certainly the 0.5 year decay time has a physical justification. And since many responses to a small change in forcing should be roughly exponential (this would follow if the rate of change of the response is proportional to the magnitude of the displacement from equilibrium), it is probably reasonable to expect the impulse response function to be a sum of exponentials. The impulse response function should be Integral(τ = 0 to infinity; f(τ)*(1-exp(-t/τ)) dτ), where f(τ) is some unknown function of the decay time. The problem is that f(τ) is unknown.

With respect to using decay time constants that are based on real times associated with known physical phenomena, I would like to do that. I tried to get a characteristic decay time of the Earth's heat sink towards equilibrium, for example. But even that proved difficult. In the case of the decay time of the Earth's heat sink, different heat sinks will have different decay times. A shallow ocean will have a faster response than a deeper ocean. And the surface of the ocean may have a faster response than the bottom of the ocean. As a result, rather than a single decay time, one gets a continuum of decay times with a large spread. So f(τ), rather than consisting only of a finite number of Dirac deltas, is probably continuous.

With respect to using a full orthonormal set to describe any function in that space, there are infinitely many such functions (1-exp(-t/τ), where τ is any real number between 0 and infinity), so you cannot estimate them all. And no clear first n orthonormal terms exist. So the next best thing you can do is use a finite set that is distributed over all of the relevant values of τ (say from 0.5 years to 128 years).

The other thing is that you probably want a higher density of these exponential decay functions where you think there is a higher density of f(τ). In our case, the smaller values of τ are generally going to be more relevant. For example, if one doubles CO2, then one should expect that the 0.5 year decay time term results in an increase of global temperatures of about 1.15 K; the next fastest term might be the water vapour feedback, which adds an increase of say 1 K; then one has the snow-albedo feedback, which might have a response time of about a decade; then the ocean heat sink response, which has a response time of about 6 decades and brings the total change in temperature to the ECS of about 3 K; then the millennia-long responses in vegetation and ice sheets, which bring the total change in temperature up to the Earth system sensitivity of about 4.2 K; etc. This justifies having a greater representation of small values of τ compared to larger values of τ (so having the finite choice of τ geometrically spaced over the relevant values of τ might be okay).

So the Van Hateren approach is just a way to numerically determine the impulse response function from the data if you don't have much more a priori information about f(τ). Although a factor of 4 between consecutive τ might be too large. If I added terms with decay times of 1 year, 4 years, 16 years and 64 years, that would reduce the difference between consecutive τ to a factor of 2, which might be better. Given that my remaining degrees of freedom are over 600, reducing the degrees of freedom by 4 would not be a big deal. Of course, this would make the individual estimates of the magnitudes of each exponential decay function less significant. But since I am interested primarily in the resulting impulse response function (and all the exponential response functions are strongly correlated with each other), this shouldn't result in any significant increase in uncertainty about the estimated impulse response function.

Consider a typical cyclical climate phenomenon that has a 20 year period... you will never get a good fit through said cycle by summing exponentials of a few different time constants (0.5 years, 2 years, 8 years, 32 years), no matter what, because that set of exponentials does not fill the space you are working in. And, climate data has a whole range of cycles over varying timescales.

I am not trying to use the exponential response functions to describe cyclical variability. I am directly using indices such as LOD, AAM, SOI, PDO and NAO to account for cyclical variability. The exponential response functions are supposed to describe how the climate moves towards a new equilibrium given a change in radiative forcing (be it more greenhouse gases, more solar irradiance, or more volcanic aerosols).

The set of functions you should probably start from are the trigonometrics (sin and cos) which will nicely describe periodic phenomena and the hyperbolics (sinh and cosh) which will nicely describe decay and runaway phenomena. And the periods/time constants you should use would be to start with 1 year and fill the entire function space: so all integer values of years from 1 to n, where n is the longest timescale you want to consider. This of course gives you too many parameters to fit, but at least there would be some mathematical basis to expect a result that isn't nonsense.

I agree that sinusoidal functions would be a good way to represent cyclical climate variability. However, if I already have data on factors associated with climate variability (LOD, AAM, SOI, PDO and NAO), then why not use them directly? As an aside, I think that events like El Niño are too chaotic to describe with sinusoidal functions, and one might need to use Mathieu functions to represent them (if I were going to do that).

Hyperbolics would describe runaway phenomena, but the problem is that runaway phenomena are completely unrealistic for the Earth's climate response. Runaway global warming for Earth is basically impossible until the Earth reaches a temperature of 647 K. http://climatephys.org/2012/07/31/the-water-vapor-feedback-and-runaway-greenhouse/


Also, I thought of another possible explanation for my nonsense results. The change in temperature is my dependent variable. Even if I model the change in temperature very well, once I integrate to get temperature, the resulting temperature will be subject to a 'random walk' away from the true temperature. That is the case even if the assumptions about the error terms in the regression model are correct (independence, no heteroskedasticity, etc.). In reality, the behaviour of the error varies over time (for example, the quality of the data in the 60's is not as good as the quality of the data in the 90's). When I take the residual of my results and integrate, it is clear that what I get is subject to this random walk behaviour, which could explain my nonsense results.

Maybe I should just integrate my current regression equation to get a new regression equation that has temperature as the dependent variable rather than change in temperature?

Another thing I noticed is that rather than using the Gauss-Newton method, I could also estimate the non-linear model by performing a sequence of linear regressions in which I alternate between holding the impulse response function constant and holding the relative strengths of the different types of radiative forcings constant. This might give more robust results that are less dependent on an initial guess.
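A skeleton of that alternating scheme (illustrative only: the model is bilinear in the γ_s and the forcing-type weights, and the two design-matrix builders are placeholders that would assemble the convolution columns from the earlier sketches):

```python
import numpy as np

def als_fit(dT_obs, build_theta_design, build_weight_design, n_iter=20):
    """Alternate two OLS solves instead of one joint nonlinear fit."""
    weights = np.ones(3)   # Solar, Cosmic, Volcano starting guess
    for _ in range(n_iter):
        # Step 1: weights fixed -> the model is linear in
        # (gamma_1..gamma_5, dG_1..dG_5, beta_1..beta_5).
        X1 = build_theta_design(weights)
        theta, *_ = np.linalg.lstsq(X1, dT_obs, rcond=None)
        # Step 2: theta fixed -> the model is linear in the three weights;
        # `fixed` is the weight-independent part of the prediction
        # (the GHG channel, dG terms and beta terms).
        X2, fixed = build_weight_design(theta)
        weights, *_ = np.linalg.lstsq(X2, dT_obs - fixed, rcond=None)
    return theta, weights
```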

Edited by -1=e^ipi
