Back to precip, it seems the variability is too low. This points to a problem with the percentage anomaly routines. See earlier

escapades - will the Curse of Tim never be lifted?

A reminder. I started off using a 'conventional' calculation:

absgrid(ilon(i),ilat(i)) = nint(normals(i,imo) +

* anoms(ilon(i),ilat(i)) * normals(i,imo) / 100)

which is: V = N + AN/100

This was shown to be delivering unrealistic values, so I went back to anomdtb to see how the anomalies were constructed in the

first place, and found this:

DataA(XAYear,XMonth,XAStn) = nint(1000.0*((real(DataA(XAYear,XMonth,XAStn)) / &

real(NormMean(XMonth,XAStn)))-1.0))

which is: A = 1000((V/N)-1)

So, I reverse engineered that to get this: V = N(A+1000)/1000
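Just to pin down how the two conversions relate: a quick sketch (in Python, standing in for the Fortran; the 712/47.5 numbers are illustrative) shows they only agree if the anomaly fed to the reverse-engineered formula is in tenths of a percent, i.e. per mille.

```python
# Python stand-in for the two Fortran conversions (sample numbers are illustrative).
def v_conventional(n, a_pct):
    # V = N + A*N/100, with A a true percentage anomaly
    return n + a_pct * n / 100.0

def v_reversed(n, a):
    # V = N*(A+1000)/1000, reverse-engineered from anomdtb
    return n * (a + 1000.0) / 1000.0

n = 712.0
a_pct = 47.5                        # +47.5% of normal
print(v_conventional(n, a_pct))     # ~1050.2
print(v_reversed(n, a_pct))         # too low: treats 47.5 as per mille
print(v_reversed(n, 10.0 * a_pct))  # matches once A is per mille (475)
```

So V = N(A+1000)/1000 is only the same calculation as V = N + AN/100 when A is ten times the percentage anomaly.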

And that is apparently also delivering incorrect values. Bwaaaahh!!

Modified anomdtb to dump the precip anomaly calculation. It seems to be working with raw values, eg:

DataA: 1050, NormMean: 712.00, Anom: 475

DataA: 270, NormMean: 712.00, Anom: -621

DataA: 710, NormMean: 712.00, Anom: -3

DataA: 430, NormMean: 712.00, Anom: -396

DataA: 280, NormMean: 712.00, Anom: -607

DataA: 830, NormMean: 712.00, Anom: 166

DataA: 0, NormMean: 712.00, Anom: -1000

DataA: 270, NormMean: 712.00, Anom: -621

DataA: 280, NormMean: 712.00, Anom: -607

DataA: 450, NormMean: 712.00, Anom: -368

DataA: 180, NormMean: 712.00, Anom: -747

DataA: 1380, NormMean: 712.00, Anom: 938
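Those dumped values reproduce exactly from the anomdtb line above (a Python sketch of the Fortran; note Fortran's nint rounds halves away from zero where Python's round banker's-rounds, but none of these cases land on a half):

```python
def anom(v, n):
    # A = nint(1000.0*((V/N)-1.0)), the per-mille anomaly from anomdtb
    return round(1000.0 * (v / n - 1.0))

norm = 712.0
for v in (1050, 270, 710, 0, 830, 1380):
    print(v, anom(v, norm))
# e.g. 1050 -> 475, 0 -> -1000, 710 -> -3, matching the dump
```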

However, that -1000 is interesting for zero precip. It looks as though the anomalies are in mm*10, like

the precip database raw values.. but no! These are dumped from within the program; the output .txt files

have -100 instead of -1000. That's because of the CheckVariSuffix routine, which returns a factor based

on the parameter code, including:

else if (Suffix.EQ.".pre") then

Variable="precipitation (mm)"

Factor = 0.1

Factor is then used when the anomaly files are written:

do XAStn = 1, NAStn

if (DataA(XAYear,XMonth,XAStn).NE.-9999) write (9,"(2f8.2,f8.1,f13.5,i7)"), &

ALat(XAStn),ALon(XAStn),AElv(XAStn),real(DataA(XAYear,XMonth,XAStn))*Factor,AStn(XAStn)

end do

So, the anomalies are in real units (presumably mm/day).
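That scaling step, in miniature (Python stand-in for the Fortran write; the 0.1 comes from CheckVariSuffix for ".pre"):

```python
FACTOR = 0.1  # returned by CheckVariSuffix for the ".pre" suffix

def written_anomaly(internal):
    # the value as it lands in the output .txt (internal per-mille * Factor)
    return internal * FACTOR

print(written_anomaly(-1000))  # ~ -100: the zero-precip case from the dump
print(written_anomaly(475))    # ~ 47.5
```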

So.. we grid these values. The resulting anomalies (for Jan 1980) look like this:

max = 1612.4

min = -100

These should be applied to the climatology (normals). I think they can be applied to either unscaled 'real'

value normals or to normals which are mm*10. Results will be scaled accordingly. So.. let's look at glo2abs.

Again.

The current formula to convert to real values is:

absgrid(ilon(i),ilat(i)) = nint((rnormals(i,imo)*

* (anoms(ilon(i),ilat(i))+1000)/1000)/rmult)

V = N(A+1000)/(1000*rmult)

Not happy with that anyway. The rmult factor.. should it be there at all?

Now, if these are 'genuine' percentage anomalies - ie, they represent the percentage change from the mean,

then the formula to convert them using unscaled normals would be:

V = N + N(A/100)

For instance, -100 would give V = 0, and 100 would give V = 2N. Now if the normals are *10, surely the

results will be *10 too, since each term has N as a multiplicative factor anyway. This makes

me wonder about the prevailing theory that the anomalies need to be *10. They are

fractions of the normal, so it shouldn't matter whether the normal is real or *10. The

variability should be the same (ie, the variability of the anomalies).
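That scale-freeness is easy to check (a Python sketch; the -37.5 anomaly is an arbitrary example): applying the same percentage anomaly to N and to 10N just gives results that differ by the same factor of 10.

```python
def apply_pct(n, a_pct):
    # V = N + N*(A/100): a true percentage anomaly applied to a normal
    return n + n * a_pct / 100.0

a = -37.5            # illustrative percentage anomaly
real_n = 71.2        # normal in mm
scaled_n = 712.0     # same normal in mm*10
print(apply_pct(real_n, a))    # ~44.5
print(apply_pct(scaled_n, a))  # ~445: ten times the first, A untouched
print(apply_pct(712.0, -100.0))  # 0.0: the -100 = zero-precip case
```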

So, a run (just for 1980) of glo2abs, then, using the following formula:

absgrid(ilon(i),ilat(i)) = nint(rnormals(i,imo) +

* (anoms(ilon(i),ilat(i)) * rnormals(i,imo)) / 100)

V = N + A*N/100

This should give integer values of mm*10 (because the normals are mm*10 and uncorrected).

So, working backwards, what *should* the original anomalising routine in anomdtb.f90 look

like? It should look like this:

A = 100*(V-N)/N, or: A = 100(V/N) - 100, or: A = 100((V/N)-1)

The last version is REMARKABLY similar to the original (A = 1000((V/N)-1)), in fact I

think we can call equivalence if V and N are in mm? Complicated. The '100' is not a

scaling factor; it's the number that determines a percentage calculation. If we use

1000, what are we saying? The percentage anomalies we would have got are now 10x higher.

Where V=0, A will be -1000, a meaningless percentage anomaly for precip. That's where

the CheckVariSuffix factor pops up, multiplying by 0.1 and getting us back where we

started.
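The claimed equivalence checks out numerically (Python sketch): the per-mille anomaly times the 0.1 factor matches the straight percentage form to within the half-per-mille rounding of nint.

```python
def a_permille_scaled(v, n):
    # anomdtb's A = nint(1000*((V/N)-1)), then the 0.1 write-out factor
    return round(1000.0 * (v / n - 1.0)) * 0.1

def a_percent(v, n):
    # the 'obvious' percentage form: A = 100*((V/N)-1)
    return 100.0 * (v / n - 1.0)

for v in (0, 270, 1050, 1380):
    print(v, a_permille_scaled(v, 712.0), a_percent(v, 712.0))
# the pairs agree to within 0.05% - i.e. Tim's original is right
```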

So.. Tim's original looks right, once you understand the correction factor applied later.

In which case, my original algorithm should have worked!!

A slightly Heath-Robinson attempt to verify.. extracted the cell for Wick from the 1980

output. Wick is 58.45N, 3.08W - I reckon that's (297,353). I get:

106 69 91 22 16 74 79 80 129 156 177 151

The station says:

1980 763 582 718 153 95 557 587 658 987 1162 1346 1113

**sigh** Yes, they do follow a similar shape.. it's just that the original station data

are much higher. In terms of Up/Down (following direction):

Station DUDDUUUUUUD

Gridded DUDDUUUUUUD
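The Up/Down signatures above can be generated mechanically (a small Python sketch over the two series quoted above):

```python
def updown(values):
    # 'U' if this month is higher than the previous one, else 'D'
    return "".join("U" if b > a else "D" for a, b in zip(values, values[1:]))

station = [763, 582, 718, 153, 95, 557, 587, 658, 987, 1162, 1346, 1113]
gridded = [106, 69, 91, 22, 16, 74, 79, 80, 129, 156, 177, 151]
print(updown(station))  # DUDDUUUUUUD
print(updown(gridded))  # DUDDUUUUUUD - same shape, very different magnitudes
```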

So, once again I don't understand statistics. Quelle surprise, given that I haven't had any

training in stats in my entire life, unless you count A-level maths.

Normals for the same cell:

Grid-ref= 353, 297

1060 740 870 600 610 670 700 910 990 1070 1200 1110

Now, these look like mm*10, the same units as the station data and what I expected. If I

apply the anomalies to these normals, it looks like I'll get what I'm after.. the trouble

is that glo2abs.for deliberately works with real values and so applies the factor in the

clim header before using the values. I just don't think that works with percentages.. hmm.

Actually, it does. I ran through the algorithms and because the normal is multiplicative,

you can do the scaling before or after. In other words, if V' is V produced with scaled

normals (N*0.1) then we do end up with V = 10V'. So I just need to include the factor in

the final equation:

V = (N + A*N/100)/F
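And the scale-before-or-after point is verifiable (Python sketch; the -37.5 anomaly is an arbitrary example, the 1060 is the Wick January normal in mm*10): because N is purely multiplicative, scaling the normal by F first and dividing the result by F at the end lands on the same V.

```python
FACTOR = 0.1  # the clim-header factor applied by glo2abs

def v_unscaled(n, a_pct):
    # apply the percentage anomaly straight to the raw (mm*10) normal
    return n + a_pct * n / 100.0

def v_final(n, a_pct):
    # V = (N' + A*N'/100)/F with N' = N*F: scale first, divide out after
    ns = n * FACTOR
    return (ns + a_pct * ns / 100.0) / FACTOR

n, a = 1060.0, -37.5
print(v_unscaled(n, a), v_final(n, a))  # equal up to float rounding
```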

Ran it, and the results were good. So - as it's the only change - I won't have to regrid

precip after all! Just re-run from glo onwards.. did so, then used the old (but working)

version of makegrids to produce the gridded ASCII and NetCDF files.

So.. comparisons. Well I want to compare with both 2.0 and 2.1, because they do differ. So

I will need to convert 2.0 to regular-gridded, as I did with 2.1. If I could only remember

the program I wrote to do it!!
