Friday, April 18, 2014

[jules' pics] Sheep

Sheep being a signpost
Hornby Road


Sheep having a special feast
Hornby Road


Sheep laughing at the stupid mountain tandemmers
Hornby Road


Sheep being Zen rocks
Kentmere sheep rocks


These sheep look a bit funny to me. Lots of them appeared in the fields just this week. Maybe they are a special Easter variety.
Mooo


--
Posted By Blogger to jules' pics at 4/18/2014 05:43:00 PM

Coverage

Or, why Nic Lewis is wrong.

Long time no post, but I've been thinking recently about climate sensitivity (about which more soon) and was provoked into writing something by this post, in which Nic Lewis sings the praises of so-called "objective Bayesian" methods.

Firstly, I'd like to acknowledge that Nic has made a significant contribution to research on climate sensitivity, both through identifying a number of errors in the work of others (eg here, here and most recently here) and through his own contributions in the literature and elsewhere. Nevertheless, I think that what he writes about so-called "objective" priors and Bayesian methods is deeply misleading. No prior can encapsulate no knowledge, and underneath these bold claims there is always a much more mealy-mouthed explanation in terms of a prior having "minimal" influence, and then you need to have a look at what "minimal" really means, and so on. Such a prior may or may not be a good thing, but it is certainly not what I understand "no information" to mean. I suggest that "automatic" is a less emotive term than "objective", and would be less likely to mislead people as to what is really going on: Nic is suggesting ways of automatically choosing a prior, which may or may not have useful properties.
[As a somewhat unrelated aside, it seems strange to me that the authors of the corrigendum here, concerning a detail of the method, do not also correct their erroneous claims concerning "ignorant" priors. It's one thing to let errors lie in earlier work - no-one goes back and corrects minor details routinely - but it is unfortunate that when actually writing a correction about something they state does not substantially affect their results, they didn't take the opportunity to also correct a horrible error that has seriously misled much of the climate science community and which continues to undermine much work in this area. I'm left with the uncomfortable conclusion that they still don't accept that this aspect of the work was actually in error, despite my paper, which they are apparently trying to ignore rather than respond to. But I'm digressing.]

All this stuff about "objective priors" is just rhetoric - the term simply does not mean what a lay-person might expect (including a climate scientist not well-versed in statistical methodology). The posterior P(S|O) is equal to the (normalised) product of prior and likelihood - it makes no more sense to speak of a prior not influencing the posterior than it does to talk of the width of a rectangle not influencing its area (= width x height). Attempts to get round this by then footnoting a vaguer "minimal effect, relative to the data" are just shifting the pea around under the thimble.
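To make the rectangle analogy concrete, here is a small numerical sketch (my own illustration, not from Nic's post or anyone's paper): computing the posterior on a grid as the normalised product of prior and likelihood shows directly that swapping the prior moves the answer.

```python
import numpy as np

# Illustration (my own): posterior = normalised(prior * likelihood), so the
# prior always shapes the answer, just as width always shapes area.

S_grid = np.linspace(-10, 10, 2001)   # candidate values of the parameter S
dS = S_grid[1] - S_grid[0]
obs = 1.5                             # one observation O = S + e, with e ~ N(0,1)

def posterior(prior_pdf):
    like = np.exp(-0.5 * (obs - S_grid) ** 2)   # Gaussian likelihood, sd 1
    post = prior_pdf * like
    return post / (post.sum() * dS)             # normalise to integrate to 1

flat = posterior(np.ones_like(S_grid))          # "uniform" prior
tight = posterior(np.exp(-0.5 * S_grid ** 2))   # N(0,1) prior

mean_flat = (S_grid * flat).sum() * dS     # posterior mean follows the data
mean_tight = (S_grid * tight).sum() * dS   # posterior mean shrunk towards 0
```

With the uniform prior the posterior mean sits at the observation; with the N(0,1) prior it is shrunk halfway towards zero, precisely because the prior is a factor in the product.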

In his blog post, Nic also extols the virtue of probabilistic coverage as a way of evaluating methods. This initially sounds very attractive - the idea being that your 95% intervals should include reality, 95% of the time (and similarly for other intervals). There is, however, a devil in the detail here, because such a probabilistic evaluation implies some sort of (infinitely) repeated sampling, and it's critical to consider what is being sampled, and how. If you consider only a perfect repetition in which both the unknown parameter(s) and the uncertain observational error(s) take precisely the same values, then any deterministic algorithm will return the same answer, so the coverage in this case will be either 100% or 0%! Instead of this, Nic considers repetition in which the parameter is fixed and the uncertain observations are repeated. Perfect coverage in this case sounds attractive, but it's trivial to think of examples where it is simply wrong, as I'll now present.

Let's assume Alice picks a parameter S (we'll consider her sampling distribution in a minute) and conceals it from Bob. Alice also samples an "error" e from the simple Gaussian N(0,1). Alice provides the sum O=S+e to Bob, who knows the sampling distribution for e. What should Bob infer about S? Frequentists have a simple answer that does not depend on any prior belief about S - their 95% confidence interval will be (O-2,O+2) (yes, I'm approximating negligibly throughout the post). This has probabilistically perfect coverage if S is held fixed and e is repeatedly sampled. Note that even this approach, which basically every scientist and statistician in the world will agree is the correct answer to the situation as stated, does not have perfect coverage if instead e is held fixed and S is repeatedly sampled! In this case, coverage will be 100% or 0%, regardless of the sampling distribution of S. But never mind about that.
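The frequentist coverage claim is easy to check by simulation; this is my own quick sketch, not anything from Nic's post:

```python
import numpy as np

# Hold S fixed, repeatedly resample the error e ~ N(0,1), and count how often
# the frequentist interval (O-2, O+2) contains the true S.

rng = np.random.default_rng(0)
S = 2.0                               # fixed true parameter (any value works)
e = rng.standard_normal(100_000)      # repeated draws of the error
O = S + e                             # the repeated observations
covered = (O - 2 < S) & (S < O + 2)   # interval contains S exactly when |e| < 2
freq_coverage = covered.mean()        # ~0.95, as advertised
```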

As for Bayesians, well they need a prior on S. One obvious choice is a uniform prior and this will basically give the same answer as the frequentist approach. But now let's consider the case that Alice picks S from the standard Normal N(0,1), and tells Bob that she is doing so. The frequentist interval still works here (i.e., ignoring this prior information about S), but Bayesian Bob can do "better", in the sense of generating a shorter interval. Using the prior N(0,1) - which I assert is the only prior anyone could reasonably use - his Bayesian posterior estimate for S is the Normal N(O/2,0.7), giving a 95% probability interval of (O/2-1.4,O/2+1.4). It is easy to see that for a fixed S, and repeated observational errors e, Bob will systematically shrink his central estimates towards the prior mean 0, relative to the true value of S. Let's say S=2, then (over a set of repeated observations) Bob's posterior estimates will be centred on 1 (since the mean of all the samples of e is 0) and far more than 5% of his 95% intervals (including the full 21% of cases where e is more negative than -0.8) will fail to include the true value of S. Conversely, if S=0, then far too many of Bob's 95% intervals will include S. In particular, all cases where e lies in (-2.8,2.8) - which is about 99.5% of them - will generate posteriors that include 0. So coverage - or probability matching, as Nic calls it - varies from far too generous, when S is close to 0, to far too rare, for extreme values of S.
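A short simulation of my own makes Bob's situation concrete. Given the N(0,1) prior and O = S + e, his posterior is Normal with mean O/2 and standard deviation sqrt(1/2) ≈ 0.7, so his 95% interval is roughly (O/2 - 1.4, O/2 + 1.4):

```python
import numpy as np

rng = np.random.default_rng(1)

def bob_coverage(S, n=200_000):
    """Fraction of Bob's 95% credible intervals containing a fixed true S."""
    O = S + rng.standard_normal(n)       # repeated observations of fixed S
    lo, hi = O / 2 - 1.4, O / 2 + 1.4    # Bob's 95% credible interval
    return np.mean((lo < S) & (S < hi))

cov_far = bob_coverage(2.0)    # ~0.79: over 20% of intervals miss S = 2
cov_zero = bob_coverage(0.0)   # ~0.995: nearly every interval includes S = 0
```

So conditional on a fixed S, Bob's intervals are far too generous near the prior mean and miss far too often for extreme S, exactly as described above.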

I don't think that any rational Bayesian could possibly disagree with Bob's analysis here. I challenge Nic to present any other approach, based on "objective" priors or anything else, and defend it as a plausible alternative to the above. Or else, I hope he will accept that probability matching is simply not (always) a valid measure of performance. These Bayesian intervals are unambiguously and indisputably the correct answer in the situation as described, and yet they do not provide the correct coverage conditional on a fixed value for S.

Just to be absolutely clear in summarising this - I believe Bayesian Bob is providing the only acceptable answer given the information as provided in this situation. No rational person could support a different belief about S, and therefore any alternative algorithm or answer is simply wrong. Bob's method does not provide matching probabilities, for a fixed S and repeated observations. Nothing in this paragraph is open to debate.

Therefore, I conclude that matching probabilities (in this sense, i.e. repeated sampling of obs for a fixed parameter) is not an appropriate test or desirable condition in general. There may be cases where it's a good thing, but this would have to be argued for explicitly.
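For completeness, one final sketch (again my own): if each repetition also redraws S from Alice's actual N(0,1) sampling distribution - i.e., we average over the prior rather than conditioning on a fixed S - then Bob's intervals recover essentially exact 95% coverage, which is the sense in which Bayesian intervals are calibrated.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
S = rng.standard_normal(n)           # Alice draws S ~ N(0,1) on each repetition
O = S + rng.standard_normal(n)       # ...and reveals O = S + e
lo, hi = O / 2 - 1.4, O / 2 + 1.4    # Bob's 95% credible intervals
marginal_coverage = np.mean((lo < S) & (S < hi))   # ~0.95
```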

Tuesday, April 08, 2014

BlueSkiesResearch.org.uk: Paleo-clouds


Posted: 07 Apr 2014 05:12 AM PDT
There is a weird divide between paleoclimate modellers and other model developers, that I have never understood. My impression is that the paleoclimate group at JAMSTEC was one of the larger and more integrated, but even there paleoclimate runs were always done, without much interest from others, using the "spare CPU" left after the main CMIP runs were completed. However, in 2006, when we published a paper looking at different constraints on climate sensitivity, the Last Glacial Maximum seemed like it should provide a strong constraint on the high end. Since then I have always been of the opinion that the LGM should be a mandatory CMIP run. If a model has not been shown to reasonably reproduce a climate with a different atmospheric CO2 level (among other forcings), then of what use is it for projecting future climate change?

But no one in any position of power seems to share this opinion, and I was getting used to boring myself with always asking people to do paleoclimate runs if they wanted to increase confidence in their ensembles, or to check the importance of some correlation between present and future climate that they had found in the ensemble. For the first time in CMIP5, the paleoclimate runs were officially included (although as non-essential runs). Although not everyone has done the runs yet, at least this time they are being completed with the same model versions as those used for the other CMIP5 runs. Because of this, I agreed a year ago, at the last PMIP meeting, to start the Past to Future working group to support work on using paleoclimate information to directly improve predictions. For this project to remain well focussed, I think it needs some input from the futurists (those who want to know about the future).

I was delighted, therefore, when I started getting some emails from CFMIP, the Cloudy Futurists MIP. I wrote some sufficiently stupid things in my replies to them that I gained an invitation to the WCRP Grand Challenge workshop on Clouds, Circulation and Climate Sensitivity, previously blogged as the cloudy lock-in. There is certainly no doubt that finding out something useful about clouds from paleo is a very grand challenge (although Sandy Harrison did make a brave attempt at promoting a cloudiness proxy). This is because, compared to information gained today, paleoclimate information is much more sparse, more uncertain, and more indirect (i.e. you are usually measuring things like isotope ratios or amounts of pollen). Despite this gulf, and my complete ignorance about clouds or atmospheric circulation, it was the most inspiring meeting I have attended for several years. They came up with a focus for the Grand Challenge, which is changing patterns in future climate, with four sub-questions related to particular changes. There also seems to be quite a strong impetus to increase connections between CFMIP and PMIP, which should be a thoroughly good thing. Maybe the barrier between the paleo and future scientists can be dissolved.

As the four questions were being presented on the last day I realised that I was not, actually, in a room of futurists, but of atmospheric physicists. Rather than asking questions about how the climate will change, the questions were written in terms of whether changes in X are important for climate change, where X is a thing of interest to atmospheric scientists (storm tracks, ITCZ, convective aggregation and convective mixing respectively). Using paleoclimate data to help answer these questions is what they call a "cross-cutting" theme, which means they hope it will help all of them. But this could be confusing to people who know about paleoclimate, because, although paleoclimate changes may be partly caused by clouds and circulation changes, they are much more likely to be whole "earth system" phenomena (including vegetation feedbacks, ocean circulation and carbon cycle changes, sea-level changes etc.). So, now I am thinking that it is me and James who are in fact the futurists, who actually want to use all the information we have available to predict future climate. I hope we are not the only ones!

I do not think I am very good at taking photos of people. I think this is partly because I am a bit strange and so people tend to look anguished when they look at me.  However, this isn’t the pickturs blog, and some people seemed to be begging to be blogged… so here goes…

A breakout group – my least favourite thing at meetings

Working in splendid surroundings.
Official fun was a walk to the lake during which we were supposed to discuss science

Official fun – walk to lake – many took the instruction to talk about science very seriously – all the real plans were hatched that afternoon.

Collaboration continued well into the night

Isaac trying to make it on to the blog.

last breakout group, al fresco – it was actually too sunny!

caffeine fueled – with people from Japan, Australia and California there was plenty of jetlag to go round

getting a bit geeky over dinner

At last – a few moments of unofficial fun!

Thursday, April 03, 2014

[jules' pics] great spotted peanut-eater

woodpecker
Otherwise known as a great spotted woodpecker... I suppose that she is probably sparrowhawk proof.

--
Posted By Blogger to jules' pics at 4/03/2014 08:24:00 PM

Wednesday, April 02, 2014

Another journal editor resigns!

Regular readers will have noticed that I follow the goings-on at EGU journals with some interest. So in that vein I'd like to point out there have been some recent changes at GMD. Perhaps most notably, our Dear Leader Dan Lunt has stepped down from the position of Chief Executive Editor, which he has held since the journal's inception about 6 years ago. Jules is the incoming chief. (Chief doesn't actually have any extra powers that I'm aware of, but is expected and trusted to take the lead on many decisions, with or without discussion.) Bob Marsh has been added to the list of execs - this happened last year actually - having been a topical editor for some time. And...drum roll...I am no longer on the list of execs, though I'll remain a topical editor. All the execs feel that the journal (indeed all EGU journals) should be regarded as community assets rather than personal fiefdoms. So although it made sense to stick with a core team who shared a clear vision through the early years, we realised some time ago that it was time to bring in new ideas and let things evolve a bit. This feeling has been informally formalised through a rough plan to swap execs off the board every two years - Bob's induction was the start of this, staggered with my resignation to allow a bit of settling-in time - and also to rotate the chief exec position among the board members. I'm happy to leave the journal in the capable hands of the new board.

Incidentally, it is rumoured that the new Impact Factor for the journal will be approaching 6, up from 5 last year. That should put us even closer to the top of the list for journals in the geosciences! I'm sure that GMD, and all the other EGU journals, will continue to go from strength to strength as the open access movement continues to gain momentum.

Monday, March 31, 2014

Bunny bunny bunny murder mystery

There are bunnies everywhere. Many are alive. But quite a lot of them are dead from myxy. Others are dead from cars. But today we found a mystery one, just inside the front door of a house we viewed.
It wasn't a ferret what done it, but thanks to the Stoat for alerting me to the excellent bad estate agents photos blog. I suppose my photo does not count as it was not taken by an estate agent, but the dead-bunny-at-the-front-door approach is surely an excellent way of making sure the wrong sort of people do not buy the property.

We suspect it died of thirst after getting trapped inside the empty house. Does that mean the house has holes in it large enough to let in a bunny?! Should we go and live there so we can find out? Might that be fun...?

Tuesday, March 25, 2014

[jules' pics] Feeding the birds

We have a bird feeder each, set up in trees outside our "office" windows. James' healthy seed mix was quite popular, until I got ... a peanut feeder. It seems that like Westerners, birds prefer fatty food to healthy food. This is a coal tit getting its beak full.
coal tit
But today we had our perceptions realigned, when we realised that the peanuts are not the food after all. This is a Sparrowhawk, trying to look nonchalant after failing to catch a long tailed tit on the peanut feeder.
sparrowhawk


--
Posted By Blogger to jules' pics at 3/23/2014 07:30:00 PM

BlueSkiesResearch.org.uk: Cloudy lock-in


Posted: 23 Mar 2014 03:20 PM PDT
Some scientific meetings, like the EGU or AGU, are attended by thousands and open to anyone prepared to pay the registration fee. Then there is the other sort of meeting: workshops. My fertile imagination likes to call them "lock-ins", as it sometimes seems like the plan is to get 30 or so scientists together, lock them in a castle, and only unlock the door when they have achieved something.
In this case, my imagination is closer than usual to being literally true.
Here’s the castle,
My bedroom is on the 3rd floor of this tower!

It is also snowing, which is quite fitting, as it not only adds to the feeling of isolation (some attendees arrived un-Germanly late after braving the elements) but also means that all this cloud fits in very well with the theme of the workshop – clouds, circulation and climate sensitivity.

[jules' pics] Scottish flowers

Unlike England, Scotland has flowers other than snowdrops!! And they are colourful!
camellia!!
camelia
gorse
gorse


Yes - that top one really is camellia!! Here is a close up to prove it. Two months too late, but still...
camelia


--
Posted By Blogger to jules' pics at 3/22/2014 02:58:00 PM

Saturday, March 22, 2014

Sawyer's "remarkably accurate" forecast

Spotted this in the Guardian - not sure what provoked it, but never mind. The claim is that a paper published in 1972 made a remarkably accurate prediction of 0.6C of warming, and that this proves we've understood the climate system really well for a long time.

Well.

For starters, it should be pretty obvious that if people published enough random papers, some would (with hindsight) turn out to be close to correct. That in itself would hardly prove prescience, although it might be hard to refute a claim by the lucky one. See also investment analysts who claim to be able to "beat the market"...

But I'm not going to go looking for all the silly forecasts that were wide of the mark, which there surely were - people are still churning them out; remember Lovelock's few remaining breeding pairs of humans, or Bryden's AMOC shutdown, or Keenlyside's cooling? - but merely evaluate the Sawyer paper on its merits. Found courtesy of wmconnolley.org.uk, the paper doesn't really seem to have much original research, but repackages other work in what looks more like a commentary. He uses Manabe and Wetherald's climate sensitivity estimate of 2.4C, and a predicted increase in CO2 of 25%, to get a warming of 0.25*2.4 = 0.6C by the end of the (20th) century.

While the final number ended up pretty close, there are a number of assumptions/approximations/errors (take your pick) in that calculation. Firstly, there are other forcings! The IPCC AR5 lists other factors which in total magnitude exceed the CO2 effect, though the positives and negatives broadly cancel. But Sawyer doesn't consider them at all. Secondly, the logarithmic effect of CO2 means that a 25% increase should equate to 32% of the effect of a doubling, which would work out at 0.8C... not a huge difference in forecast, but a big difference in level of understanding! Lastly (perhaps) there is also the small issue of equilibrium versus transient response - the thermal inertia of the ocean means there's a chunk more warming in the pipeline, probably about a third as much again. All these values have substantial uncertainty even now, of course - and although I'd say the 2.4C sensitivity value still looks pretty good, others disagree, and at best it was a lucky guess to get it right back then.
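For the record, here is the arithmetic redone with the numbers quoted above (my own back-of-envelope script):

```python
from math import log2

# Sawyer's numbers, as quoted in the post:
sensitivity = 2.4    # Manabe & Wetherald's estimate, C per CO2 doubling
co2_rise = 0.25      # predicted 25% rise in CO2 by the end of the century

linear = co2_rise * sensitivity         # Sawyer's linear scaling: 0.6 C
fraction = log2(1 + co2_rise)           # a 25% rise is ~32% of a doubling
logarithmic = fraction * sensitivity    # ~0.77 C, i.e. roughly 0.8 C
```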

So all in all, it looks like he made a number of significant errors which end up cancelling out, thus resulting in a forecast that hit the bullseye much more closely than can have been reasonably expected.