
Back from Georgia Tech


First, let me thank Judy Curry for inviting me to make a presentation at the department’s seminar series and for both spending so much time and energy showing me around the department and hosting me so hospitably. I was the guest at many interesting presentations by able young scientists and at splendid lunches and dinners on Thursday and Friday. I also wish to thank Julien Emile-Geay for his role in initiating the invitation.

Readers of this blog should realize that Judy Curry has been (undeservedly) criticized within the climate science community for inviting me to Georgia Tech. Given the relatively dry nature of my formal interests and presentation (linear algebra, statistics, tree rings, etc.) and the fact that I’ve been invited to present by a panel of the National Academy of Sciences, it seems strange that such a presentation to scientists should provoke controversy, but it did. Readers here should recognize the existence of such a controversy before making ungracious remarks to my hosts. I must say that I was disappointed by many comments on the thread in which I announced that I was going to Georgia Tech (many of which broke blog rules during a period when I was either too busy or too tired to moderate, and which have now been deleted).

I wish to assure critics of the invitation that neither Julien nor Judy ever explicitly or implicitly agreed with anything that I said and that I do not interpret a failure to rebut any particular point or claim as acquiescence. Quite the opposite. However, any climate scientists who stridently criticized Judy Curry for the invitation should also consider the possibility that she was one chess move ahead of them in what she was trying to do and how my visit was organized.

Right now I have two related but functionally distinct “hats” in the climate debate.

One role is that of conventional (or in my case, slightly unconventional) scientific author, with a few articles and conference presentations on millennial reconstructions. This role is, of course, made livelier both by my unconventional route to writing these articles and by the interesting events that followed them, not least of which was consideration by the NAS and Wegman panels and an appearance before a House of Representatives subcommittee.

The other role is that of proprietor of a climate blog with a big, lively and vociferous audience, arguably a distinct role by now. The emergence of blogs is a media phenomenon in itself, but, in the climate community, blogs are uniquely active. (This is interesting in itself and deserves a little reflection.) Within that community and even within the larger blog community, Climate Audit has established both a noticeable presence and unique voice. I don’t want this post to turn into a reflection on Climate Audit (we can reflect on that on another occasion), but there was little doubt in my mind that scientists at Georgia Tech were far more familiar with Climate Audit than with MM 2005 (GRL) etc.

I’m pretty sure that Judy Curry perceived this: because so much of my personal exposure to climate scientists has been through the dross and bile of the Hockey Team, the representation and perception of third party climate scientists at a popular blog has suffered, and it would be beneficial to the portrayal of climate scientists in this new media form for me to meet sane non-Hockey Team climate scientists doing valid and interesting work. I’m sure that other presenters to the EAS Friday afternoon seminar are also treated hospitably, but I suspect that most of them don’t get to spend two days meeting such a wide variety of Georgia Tech climate scientists in small meetings, nor were their meetings quite like mine.

On Thursday, I spent most of the day seeing interesting and substantive work in areas unrelated to anything that I’d written about – things like establishing metrics for aerosols using Köhler Theory or laboratory procedures for speleothems. And whatever other criticisms people may have of me, I don’t think anyone has ever criticized me for not finding interest in details and methods. On Friday, I heard an extremely interesting exposition on the physical basis of hurricanes and their role in the overall balance of nature. An interesting context here (and one that I was previously unaware of) is Peter Webster’s interest in monsoons and Bangladesh.

On Thursday, I was also guest at a seminar on climate and the media (including blogs); on Friday early afternoon prior to my EAS seminar, there was a short Q and A session with the Hockey Stick class. At 3.30 Friday afternoon, I presented to the EAS seminar. I didn’t count the crowd, but it looked like there were about 100 people there, including a couple of (non-GA Tech) CA readers from Atlanta. There was a short question period after the presentation and then a beer-and-wine reception.

Readers who were worried about protests and fireworks at the EAS presentation can disabuse themselves of such fevered imaginations. On the one hand, the audience was polite. On the other hand, it would be hard for a student or uninvolved faculty to think up a technical question that hasn’t been raised previously. So there were no fireworks at the seminar, or for that matter, about the Hockey Stick on any occasion. I’ll review the questions below, but I really wasn’t asked very much at any of the public sessions about statistics or proxies. I’m not going to report or discuss any one-to-one sessions since the line between private scientist and blog reporter was not clearly discussed at the meetings; I am therefore treating them as private, even if they were scheduled on-campus meetings – other than to say that there was relatively little specific discussion of the statistics and proxy issues that directly concern me. Not that there wasn’t much lively discussion – just not about partial least squares, spurious regression, bristlecones, data mining, etc. If any of the parties wishes to put any views on such matters on the record here (or elsewhere), they are welcome to do so. Below I’ll limit my discussion to matters raised at the public seminar or in a classroom setting.

Not everything was sweetness and light. There were a couple of rough patches, not about my analysis of MBH or proxies, but about some incidents here at climateaudit. I’ll discuss blog manners and perceptions on another occasion and mention only one point right now. I regularly discourage people from being angry in their posts for a couple of reasons – even if you feel that the angry outburst is justified, it never convinces anyone of anything; and it gives people an excuse to ignore non-angry posts. Regular readers tend to filter out the angry posts and pay attention to the more substantive posts. However, consider the possibility that visitors have the reverse filter – they tend to pay attention to the angry posts and ignore the substantive ones. As people know, I’ve modified my attitudes towards comments over time and now try to delete angry posts when I notice them (and these angry posts are 99% of the time condemning climate scientists and the horse that they rode in on, rather than this blog). It places an unreasonable burden on me to weed out these angry posts and I reiterate one more time my request that readers refrain from making angry posts, as they are entirely counter-productive.

After that long preamble, I’ll review my presentation to the EAS seminar (which I’ve now put online) and questions arising at the seminar or in the classroom.

EAS Seminar Presentation
I haven’t made a long presentation in nearly 18 months and I’ve only made a couple altogether. I actually haven’t spent a lot of time on HS matters during the past year and I used my preparation as an opportunity to pull together some lines of thought that I’ve presented from time to time on the blog, but not previously pulled together.

In addition to the usual things that a speaker deals with, in my case, it was necessary to provide a little personal history, something that most speakers can skip. (I could give a pretty lively talk consisting only of such stories.) While it’s an interesting segment, it quickly eats into time allotments. Combined with the fact that there are more things that I want to say than I can practically cover and that I haven’t had previous audience feedback on what works and what doesn’t work, I tend to end up being rushed. (By contrast, Ross McKitrick has a nice easy way about him when he does this sort of presentation.) There were some style defects in the PPT (remedied in the online version), e.g. some missing y-axis labels and short-form citations that should have identified the journal (not just Yule 1926 but Yule 1926, J Roy Stat Soc).

I also prefaced my talk with a few disclaimers e.g. that I did not argue that anything in the talk disproved global warming; that, if I had a big policy job, in my capacity as an office holder, I would be guided by the reports of institutions such as IPCC rather than any personal views (a point I’ve made on a number of occasions); and that I believed that policy decisions could be made without requiring “statistical significance” (such decisions are made in business all the time, and, in all my years in business, I never heard the words “statistical significance” pass anyone’s lips as a preamble to a business decision).

I then attempted to place temperature reconstruction recipes in a broader statistical context – first by showing, in relation to MBH, that all the MBH operations were linear; and that the MBH reconstruction (like other reconstructions) was thus a linear combination of underlying proxies. I showed a graphic (previously shown at CA, like most of the material) in which AD1400 MBH weights were represented by dot area on a world map, showing the tremendous influence of bristlecones. I posited that it should be possible to calculate weights for the RegEM of Mann et al 2007 and that its weights would look pretty similar to MBH weights – with a very high bristlecone weighting. I noted, but did not dwell on, the curious PC error in MBH98. While this particular MBH error has attracted much attention, it is only one of a number of problems and I spent 99% of my time on issues that had nothing to do with principal components.
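As an online illustration (added for this post, using toy numbers rather than the actual MBH matrices), the point that a chain of linear operations collapses into a single weight per proxy can be verified in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 79, 22                       # hypothetical sizes only
X = rng.standard_normal((n_years, n_proxies))     # proxy matrix (years x proxies)

# Two successive linear steps acting on the proxy dimension:
S = np.diag(1.0 / X.std(axis=0))                  # e.g. standardization
w = rng.standard_normal(n_proxies)                # e.g. a regression/weighting vector

recon = X @ S @ w                                 # reconstruction via the chain of steps
net_weights = S @ w                               # the same chain collapsed to one vector
assert np.allclose(recon, X @ net_weights)        # recon is a linear combination of proxies
```

Whatever the intermediate steps, the net weight per proxy is what can be plotted as dot area on a map.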

I showed that, for the one-PC case of the MBH AD1400 and AD1000 steps, the proxies were correlation-weighted; that correlation weighting was equivalent to a technique actually known in the broader statistical community (one-step Partial Least Squares regression); that PLS coefficients were a rotation of OLS coefficients; that, in a situation (such as MBH) where there was little multicollinearity between the proxies, the rotation matrix was “near”-orthogonal. Given that it’s trivially easy to picture overfitting from a multiple “inverse” OLS regression of temperature (or temperature PC1) onto 65-90 non-collinear proxies in a period of only 79 years, it therefore follows (from the near-orthogonality of the rotation) that overfitting will occur in a PLS regression where there is little multicollinearity in the underlying proxies. In such cases, of course, you’re going to get a good fit in the calibration period, but confidence intervals calculated from such calibration residuals have no scientific meaning – a simple point that seems to have eluded far too many. I argued that the “no-PC” MBH98 variant that Wahl and Ammann put forward in an effort to salvage MBH falls prey to these overfitting problems (among others) and merely goes from the frying pan into the fire.
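As another online illustration (synthetic data only, not real proxies), one can check both claims: that one-step PLS is just correlation weighting, and that fitting roughly 70 near-orthogonal “proxies” over 79 years produces an impressive calibration fit even from pure noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cal, n_proxies = 79, 70
X = rng.standard_normal((n_cal, n_proxies))       # noise "proxies", little multicollinearity
y = rng.standard_normal(n_cal)                    # "temperature" with no real relationship

Xc, yc = X - X.mean(0), y - y.mean()
pls_w = Xc.T @ yc                                 # one-step PLS weights (correlation weighting)
t = Xc @ pls_w                                    # the single PLS component
pls_fit = t * (yc @ t) / (t @ t)                  # regression of y on that component
ols_b = np.linalg.lstsq(Xc, yc, rcond=None)[0]    # inverse OLS coefficients

r2 = lambda fit: 1 - ((yc - fit) ** 2).sum() / (yc ** 2).sum()
print("OLS calibration R^2:", round(r2(Xc @ ols_b), 2))   # very high despite no real signal
print("PLS calibration R^2:", round(r2(pls_fit), 2))      # also looks respectable in calibration
```

The good-looking calibration statistics here are pure overfitting, which is exactly why confidence intervals computed from such calibration residuals mean nothing.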

I referred to Stone and Brooks 1990, which showed that there was a one-parameter “continuum” between PLS coefficients and OLS coefficients via ridge regression, showing a slightly different, but equivalent, one-parameter mixing. (I skipped over a diagram showing another interesting arrangement of methods derived from an approach of Magnus Borga.) Because ridge regression is in a sense “intermediate” between OLS and PLS, overfitting problems that plague both OLS and PLS (such as the overfitting problem discussed above) would also affect ridge regression.
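As an online sketch of the Stone and Brooks point (again with toy data): the ridge estimator is proportional to the one-step PLS weights when the ridge parameter is large and equals OLS when it is zero, so it traces a one-parameter path between the two.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((79, 20))
y = rng.standard_normal(79)
Xc, yc = X - X.mean(0), y - y.mean()

pls_w = Xc.T @ yc                                  # one-step PLS direction
ols_b = np.linalg.lstsq(Xc, yc, rcond=None)[0]     # OLS solution
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

for k in [1e4, 1e2, 1.0, 1e-2, 0.0]:
    ridge = np.linalg.solve(Xc.T @ Xc + k * np.eye(20), Xc.T @ yc)
    print(f"k={k:g}: cos(ridge, PLS)={cos(ridge, pls_w):.3f}, cos(ridge, OLS)={cos(ridge, ols_b):.3f}")
# Large k: ridge is nearly proportional to the PLS weights; k = 0: ridge equals OLS exactly.
```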

I showed a graphic from my 2006 CA post on VZ pseudoproxies showing what happened to the coefficients in an overfit network of very “tame” pseudoproxies. I’m convinced that this diagram is the most cogent explanation of the loss of low-frequency variance which was at the root of earlier (still unresolved) controversy. As an online editorial aside, the coefficients from ridge regression would gradually go from PLS-coefficients to OLS coefficients along a one-parameter path in coefficient space. Their ability to preserve low-frequency variance will therefore be intermediate between PLS and OLS and deteriorate as they approach OLS, something that one would expect to affect Rutherford et al 2005, which combined ridge regression with RegEM. (As an online editorial comment, Smerdon et al observed that, properly calculated, there was a substantial low-frequency variance loss in Rutherford et al 2005, which one might well expect from the above diagnosis.)
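As a further online illustration (a toy pseudoproxy setup of my own, not the VZ simulations themselves), the attenuation can be reproduced by building “proxies” as a slow signal plus heavy noise, overfitting an inverse regression in a short calibration window and comparing the low-frequency variance of the reconstruction against the truth outside calibration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, n_cal = 1000, 50, 79
signal = np.cumsum(rng.standard_normal(n)) * 0.05           # slowly varying "temperature"
X = signal[:, None] + 2.0 * rng.standard_normal((n, p))     # noisy "pseudoproxies"

cal = slice(n - n_cal, n)                                    # short calibration window
Xc, yc = X[cal] - X[cal].mean(0), signal[cal] - signal[cal].mean()
b = np.linalg.lstsq(Xc, yc, rcond=None)[0]                   # overfit inverse regression
recon = (X - X[cal].mean(0)) @ b + signal[cal].mean()

smooth = lambda s: np.convolve(s, np.ones(51) / 51, mode="valid")  # crude low-pass filter
pre = slice(0, n - n_cal)
print("low-freq variance, truth:", round(smooth(signal[pre]).var(), 3))
print("low-freq variance, recon:", round(smooth(recon[pre]).var(), 3))  # typically well below truth
```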

I didn’t comment on truncated total least squares as proposed in Mann et al 2007. It’s never a whole lot of fun wading through Mannian methodology, but, now that I’ve picked the file up again, I’m going to spend a little time in the next few weeks trying to work through this method and figure out what it does in the MBH98 proxy network recycled in Mann et al 2007. As another online editorial comment, it looks to me like there’s a need to disentangle the relative impacts of changing from (1) partial least squares (MBH) to ridge regression (R05) to truncated total least squares (M07); (2) temperature principal components to gridcell matrices; (3) stepwise splicing to EM (Reg or otherwise). On another occasion, Tapio Schneider confirmed my surmise that, at the end of the day, RegEM would yield coefficients, although the coefficients in Mann et al 2007 were not reported. Because the Mann et al 2007 recon looks so much like the MBH98 reconstruction, I surmise that bristlecones and Gaspe are heavily weighted in the early segments as they were in MBH98, but this surmise needs to be demonstrated.

My only objective in discussing the linear algebra was to de-mystify the recon process by showing that the recon methods (including MBH) could be fit into a statistical framework. I didn’t expect that people would necessarily accept this merely by flashing a few PPT slides; my objective was merely to put this on the table, so that third party scientists might at least draw a breath whenever they heard phraseology like “overdetermined relationship between the optimal weights” and to be cautious in relying on results from some novel and poorly understood statistical methodology.

In my opinion, it’s long past time to move away from such esoterica as “overdetermined relationship between the optimal weights” and strained signal processing metaphors in general and time to re-formulate the proxy debate in the form of standard statistical questions e.g. is there a valid linear relationship between bristlecone ring widths and temperature such that this can actually be used to estimate past temperatures? If I do another presentation along these lines, I’ll try to express this even more forcefully.

Even for seemingly simple statistical questions like a relationship between temperature (or temperature PC1) and bristlecone ring widths – I should have mentioned “teleconnections” here – I tried to show the audience that these could not always be easily resolved on purely statistical grounds (e.g. using simple statistics such as correlation or RE). While we touched on this topic in MM2005 (GRL), where “spurious significance” is used in the title, and included some good references there, I’m now in a position to frame the issues more precisely. In my PPT, I mentioned Yule 1926; Keynes 1940; Granger and Newbold 1974; Hendry 1980; Phillips 1986, 1998; Ferson et al 2003; Greene 2000 – all of which have been previously discussed at CA (most of which I’ve placed online as well). This is a literature that was unfamiliar to the audience, although the autocorrelation problems that plague proxy studies have also been faced in econometrics – which is a small branch of statistics, but one which may well have tools that transport better into the proxy world.

Briefly, econometricians have pondered for many years why important and widely used statistics (correlation, t-statistics) can be “significant” and even “strongly significant” for “nonsense” (or “spurious”) relationships. Sometimes a “nonsense” relationship can have a Pearson correlation (r) that is “99.999% significant” (in the strange sense of Juckes et al 2007), such as the examples of mortality versus proportion of Church of England marriages (Yule 1926) or cumulative rainfall versus inflation (Hendry 1980). Is there any statistical way – i.e. some quantitative calculation – by which “spurious”/“nonsense” relationships can be culled from valid relationships?
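As an online illustration of how easily this happens (simulated series, nothing to do with the Yule or Hendry data sets themselves), two independent random walks routinely produce correlations that naive significance tables would call overwhelming:

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_sims = 100, 1000
count = 0
for _ in range(n_sims):
    x = np.cumsum(rng.standard_normal(n))         # independent random walk #1
    y = np.cumsum(rng.standard_normal(n))         # independent random walk #2
    r = np.corrcoef(x, y)[0, 1]
    count += abs(r) > 0.4                          # "strongly significant" by naive standards
print(f"share of unrelated pairs with |r| > 0.4: {count / n_sims:.2f}")
# Under the naive assumption of i.i.d. errors, |r| > 0.4 with n = 100 would be "significant"
# far beyond the 99% level; for random walks it happens in a substantial fraction of draws.
```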

The Juckes et al 2007 approach obviously accomplishes nothing in this respect. It is understood that nonsense relationships can have very high r values.

A statistic to which the multiproxy community has seemingly attached strong magical properties in this respect is the RE test, which, in the hands of Mann and his associates, becomes virtually a talisman. However, the RE test has limited “power” (using the word as statisticians do) to reject nonsense regressions; I observed that the RE test is unable to detect either the Yule 1926 or Hendry 1980 nonsense relationships, both of which pass the RE test with flying colors with standard splits of calibration and verification periods. In passing, as another online comment, the RE statistic is mentioned under another name in econometric literature in the 1970s prior to its adoption by dendros (Theil mentions the test – see Granger and Newbold 1973, not the more famous 1974), but it never really caught on in econometrics for some reason.
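As another online comment, the RE statistic itself takes only a few lines to compute: RE = 1 − Σ(obs − pred)² / Σ(obs − calibration mean)², summed over the verification period. Readers can check for themselves how readily trending nonsense series pass it (toy series below, not the actual Yule or Hendry data):

```python
import numpy as np

def re_statistic(obs_ver, pred_ver, cal_mean):
    """Reduction of Error: 1 - SSE / SS about the calibration-period mean."""
    sse = ((obs_ver - pred_ver) ** 2).sum()
    ss_ref = ((obs_ver - cal_mean) ** 2).sum()
    return 1.0 - sse / ss_ref

# Toy check with two independent trending series:
rng = np.random.default_rng(5)
n = 100
t = np.arange(n)
obs = 0.02 * t + np.cumsum(rng.standard_normal(n)) * 0.1    # "temperature"-like series
prx = 0.03 * t + np.cumsum(rng.standard_normal(n)) * 0.1    # unrelated trending "proxy"

cal, ver = slice(50, 100), slice(0, 50)                      # calibrate late, verify early
slope, icept = np.polyfit(prx[cal], obs[cal], 1)
pred = slope * prx[ver] + icept
print("RE =", round(re_statistic(obs[ver], pred, obs[cal].mean()), 2))
# With trending series like these, RE typically comes out comfortably positive ("passing"),
# even though the relationship is nonsense by construction.
```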

Another statistic that has been proposed for the identification of nonsense regressions is the Durbin-Watson test (which quantifies first-order autocorrelation in residuals). Granger and Newbold 1974 argued that this could be used to detect one form of nonsense regression – regressions between random walks, where impressive correlations and t-statistics frequently occur but the residuals fail the Durbin-Watson test. Phillips 1986 explained this phenomenon in a remarkable and seminal article.
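The Durbin-Watson statistic is equally simple to compute from regression residuals: DW = Σ(e_t − e_{t−1})² / Σ e_t², with values near 2 indicating little first-order autocorrelation and values well below 2 flagging trouble. A toy check along Granger-Newbold lines:

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared successive differences of residuals / sum of squared residuals."""
    return (np.diff(resid) ** 2).sum() / (resid ** 2).sum()

# Regress one independent random walk on another and inspect the residuals.
rng = np.random.default_rng(6)
n = 100
x = np.cumsum(rng.standard_normal(n))
y = np.cumsum(rng.standard_normal(n))
slope, icept = np.polyfit(x, y, 1)
resid = y - (slope * x + icept)
print("DW =", round(durbin_watson(resid), 2))
# For nonsense regressions between random walks, DW typically comes out far below 2
# (heavily autocorrelated residuals), which is exactly the Granger-Newbold warning sign.
```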

Other econometric tests have been proposed. Nonsense regressions between random walks are hardly the only way in which spurious significance can rear its ugly head, but it’s a model that is tractable mathematically and has yielded insight into at least one class of problems.

At this point, it’s fair to say that there is no talisman that can be relied upon to separate “nonsense” from valid relationships (definitely not the RE statistic). Passing any individual statistical test does not guarantee that a relationship is not spurious, but failing any test (including verification r2) should raise red flags all over the place (see the NAS panel report for a specific comment on the impact of such failures on the ability to calculate confidence intervals). As I observed at AGU in 2006 and repeated in this talk, virtually all the canonical multiproxy reconstructions fail the Durbin-Watson test and verification r2 test, something that would raise alarm bells for any reader familiar with econometric literature.

As another online comment, I note that it’s not that climate scientists are inattentive to the phenomenon of spurious correlation – any attempt to link temperature to indices of solar activity typically instigates prompt statistical investigation by climate scientists. One Georgia Tech scientist criticized me for not applying myself to this particular topic, arguing, as others have, that my failure to do so made the blog one-sided. My response to him, as to others, was that it’s impossible for me to do everything, that I’m already overcommitted and that my priority is to deal with mainstream papers that are relied on by IPCC and that, other than our work, no comparable effort seems to have been made on the canonical multiproxy reconstructions and their key components such as bristlecone ring widths. Having said that, I said at that meeting (and again here) that it’s the sort of analysis that appears within my scope and I’ll try to organize the data and analysis on some future occasion.

Continuing on with spurious correlations: in the econometrics literature, it has also been observed (1) that where data mining has taken place (either in an individual study or cumulatively in a discipline), the risks of spurious correlation increase, and (2) that these risks are exacerbated when series are highly autocorrelated (as with series in the recons). A particular problem for the canonical multiproxy studies is that many of the multiproxy studies said to be “independent” actually use many of the same proxies over and over (bristlecones, Tornetrask, Polar Urals), so that problems affecting a repetitively-used proxy (e.g. spurious correlation) will affect multiple multiproxy studies – a point that I illustrated in a slide.
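As an online illustration of the interaction between data mining and autocorrelation (simulated series only), screening a large pool of autocorrelated series against a target and keeping the best correlate manufactures an impressive-looking relationship out of nothing:

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_candidates = 120, 500
target = np.cumsum(rng.standard_normal(n)) * 0.1      # autocorrelated "temperature" target

best_r = 0.0
for _ in range(n_candidates):
    cand = np.cumsum(rng.standard_normal(n)) * 0.1    # independent autocorrelated "proxy"
    r = np.corrcoef(target, cand)[0, 1]
    if abs(r) > abs(best_r):
        best_r = r
print(f"best |r| after screening {n_candidates} unrelated series: {abs(best_r):.2f}")
# Standard significance tables no longer apply to the survivor of such a screen
# (Greene's point about prior mining/snooping), and autocorrelation makes the
# inflation much worse than it would be for white-noise candidates.
```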

Greene (2000) observed that standard statistical distributions ceased to apply when a data set had been subject to prior mining or snooping. Since studies like Osborn and Briffa 2006, Hegerl et al 2007, Juckes et al 2007 overtly re-cycle data, any purported results from these studies are compromised by the inapplicability of standard distributions to data mined networks.

Greene (2000) offered an interesting suggestion on a way to work around data mining concerns, which also deals in a straightforward way with concerns about whether bristlecone ring widths are a reliable proxy (through teleconnection) for world temperature. Greene observed that one effective way of checking econometric relationships for which data mining was a concern was, for data sets ending in 1980, simply to wait 30 years, update the data and see if the proposed relationship still held up. By sheer coincidence, the date used in Greene’s example, 1980, is the termination date of some of the key multiproxy reconstructions – which makes the segue especially apt. One of the most obvious questions for me when I first encountered proxy reconstructions was why authors had not updated the standard proxy series into the 2000s to verify that they responded to the warm 1990s and 2000s. This issue was very much on Kurt Cuffey’s mind in the oral NAS panel hearings, although they didn’t face up to it very squarely in the written report.

Instead of arguing back and forth about whether a relationship between bristlecone ring widths and temperature in the period leading up to 1980 could be projected to apply to warm periods, why not simply update the records and find out? In econometrics, such an update would be viewed as a test of the model – if the relationship failed to hold up, then the model (e.g. that there is a linear relationship between temperature and bristlecone ring widths or PC1s or whatever) would be rejected. And, as readers of CA well know, looming over such a discussion is the “Divergence Problem”.
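As an online editorial comment, Greene’s suggestion amounts to a simple out-of-sample test. A hedged sketch with hypothetical series and split date (the helper name is mine, purely for illustration):

```python
import numpy as np

def out_of_sample_check(proxy, temp, split):
    """Fit the proposed proxy-temperature relationship on data up to `split`
    (e.g. the "1980" cutoff) and score it on the updated, later observations."""
    slope, icept = np.polyfit(proxy[:split], temp[:split], 1)   # pre-update calibration
    pred = slope * proxy[split:] + icept
    resid = temp[split:] - pred
    # RE-style skill of the out-of-sample predictions relative to the calibration mean;
    # a strongly negative score is evidence against the proposed linear relationship.
    return 1.0 - (resid ** 2).sum() / ((temp[split:] - temp[:split].mean()) ** 2).sum()
```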

I did a very quick survey of recent work at 4 sites of particular interest: 1) our own update to 2007 of the Graybill Almagre bristlecones; 2) the Ababneh (2006, 2007) update of the Graybill Sheep Mt bristlecones (the most important site in MBH and Mann and Jones 2003); 3) the Grudd (2006, 2008) update of Tornetrask; and 4) the unreported Esper/Schweingruber update of Polar Urals.

There are, in effect, not one but two Divergence Problems (I didn’t make that distinction in my talk, but it’s logical).

The First Divergence Problem is the decline in ring widths (and MXD) in the 2nd half of the 20th century despite rising temperatures. For an econometrician, this “divergence” would be viewed as a contradiction of the hypothesis that RW (or MXD) can be used in linear temperature reconstructions and evidence that many supposed relationships were spurious or data mined. The Divergence issue was mentioned – but ineffectively handled – by both the NAS Panel and IPCC AR4. Both argued that the Divergence Problem was limited to high latitudes – IPCC making a more extreme and less accurate statement than the NAS Panel. However, results at Almagre and at Sheep Mountain (and elsewhere e.g. Woodhouse’s limber pines) provide evidence that divergence is not limited to high latitudes (see also the post on young dendros at AGU 2007).

In addition, there is a Second Divergence Problem – the “divergence” between recent chronology updates at Sheep Mt, Tornetrask and Polar Urals and earlier chronologies used in the canonical multiproxy studies. In each case, the more recent chronologies have resulted in substantial increases of medieval relative to modern values, with, in some cases, medieval proxy values outstripping modern values. I observed (as I have on many occasions here) that canonical multiproxy studies are typically not robust even to the version selected for key proxies – for example, the use of Polar Urals Update instead of Yamal reverses the medieval-modern relationship in Briffa 2000; the use of the Ababneh Sheep Mt version eliminates the HS in the Mann and Jones 2003 PC1; etc. Many such variations have been reported at CA.

The substantial changes from one version to another in these important series should, in my opinion, be very troubling to third party scientists. Until there is a thorough and comprehensive reconciliation of different chronologies – Ababneh versus Graybill, Grudd versus Briffa, Polar Urals Update versus Yamal – I don’t see how any version of these chronologies can be used in a reconstruction, which places multiproxy authors in an untenable position and makes advances in the field extremely difficult and perhaps impossible.

As an online editorial comment, I don’t say this to mean that people should throw their hands up in the air. It seems to me that there should be some way of making climate reconstructions. But at present, third party scientists presented with reconstructions that purport to establish temperature in AD1000 to within 0.2 deg C or even 0.5 deg C should take such calculations with a grain of salt until the various “divergence” problems are resolved.

I’ve posted up a ppt (9 MB) of my presentation here, very slightly edited to add any y-axis descriptions (in italics) that had been left out and to improve the referencing.

Questions and Criticisms

I really didn’t get much, if any, feedback on any statistical or proxy interpretations either in the public forum at the presentation or in smaller groups or in private. Mostly people asked about “big picture” questions or matters that were not raised in my presentation. I don’t interpret that as meaning that people necessarily accepted my views on these matters, only that the audience had their own specialties and that my topics were sufficiently technical that it was hard for someone unfamiliar with the detailed terrain to really find a foothold.

One person asked me about borehole reconstructions, which are not really material to my analysis. The borehole recon in the IPCC AR4 spaghetti graph was referred to, but I observed that it does not go back to the MWP and so doesn’t really shed much light on the medieval-modern issue. One long borehole recon – the Dahl-Jensen recon – has a very elevated MWP in Greenland. I’m not sold on borehole recons and said so, but it’s not a topic that I’ve evaluated in depth. Thus this discussion was really just chitchat.

A second person asked me about (aboriginal) oral traditions in the high Arctic. I’m personally doubtful that any aboriginal oral traditions would really shed much light on things for a variety of reasons (and the existence of Viking traditions would also have to be weighed in the balance). Again nothing to do with my presentation and the discussion while interesting was just chitchat.

One person asked me about the supposed “Inquisition” in which Mann’s financial records and bank accounts had been subpoenaed. I replied that Mann’s financial records had not been subpoenaed; he had been asked to list federal and private financial support for his work. It was my understanding at the time that such questions were relatively pro forma for the committee. I had to provide information about federal support when I testified – which took me only one line to answer. Judy Curry volunteered that she had to provide similar information as well when she testified before a different committee.

I don’t remember much about what the hockey stick class asked me. I do recall that my assertion regarding the dependence of the MBH98 hockey stick on bristlecones was challenged but I don’t recall whether the questioner provided any reasoning for the challenge. A graphic in my PPT (the one that shows the contributions of bristlecones relative to the almost white noise of other proxy classes) would have been useful in answering this question, but this question was asked prior to my EAS presentation.

Other than that, I’m drawing a blank in terms of remembering any questions about statistics, regression, principal components, bristlecones, tree ring chronologies, verification r2, or multivariate methodologies. Maybe someone will remind me and I’ll amend this note accordingly.

I was asked in one session about dealing with the media and how climate scientists could better get their message across. It was interesting to chat about this, but it’s not something about which I claim particular knowledge or expertise.

My emphasis on archiving data was endorsed at the presentation and my highlighting of problems in the field was acknowledged as a contribution. I had made a point of noting, both privately and in my public presentation, that the speleothem data for Partin, Cobb et al 2007 (both at Georgia Tech) was placed online at WDCP concurrent with journal publication – an excellent example of best practices. One scientist was surprised at archiving problems in paleoclimate, as apparently NSF Ocean and Polar programs have very strict compliance enforcement – data had to be archived prior to application for the next grant. So it might be worthwhile to spend less attention on the general principles of archiving and more on the ineffective administration at the NSF unit that deals with paleoclimate.

On a number of occasions, I was asked (in different ways) whether I endorsed IPCC findings. I’ve said on many occasions (including the preamble to my talk at Georgia Tech), that, if I had a senior policy making job, I would be guided by the views of major scientific institutions like IPCC and that, in such a capacity, I would not be influenced by any personal views that I might hold on any scientific issue. Many people seemed to want me to make a stronger statement, but I’m unwilling to do so. In the area that I know best – millennial climate reconstructions – I do not believe that IPCC AR4 represents a balanced or even correct exposition of the present state of knowledge. I don’t extrapolate from this to the conclusion that other areas are plagued by similar problems.

My presentation could undoubtedly have been improved (and many such improvements would occur even to me on a 2nd occasion). One person who had privately given me a particularly hard time about climateaudit came up to me at the reception and said that he found the presentation “compelling”; another said that he followed the linear algebra and complimented me on the approach. I’m sure that there were some, perhaps even many, who didn’t like my presentation, but were too polite to tell me.

I presume that most people in the audience have derived their perspective on these disputes from realclimate, from which they would believe that we had made elementary and even stupid errors and that any minor points on which we had been accidentally correct didn’t “matter”. I would like to think that such a person, even if they were not overwhelmed by my argument or presentation, would have left with the impression that I was not merely trifling, that I had certainly not got everything completely “wrong” in a trivial way and that they should not necessarily accept Hockey Team assertions as the last word on the topic.

There are some other issues that I plan to re-visit on another occasion, not least of which will be posting rules for Climate Audit. Again, I appreciated both the invitation and the hospitality.


