Jeane Dixon and predicting Irma's landfall


    Many of you older folks will remember Jeane Dixon, the woman who made a lot of "predictions" ("prophecies" in her scam) and used the ones she got right to con money out of people who weren't aware of all the ones she got wrong, which was the majority of them.

    I've been watching the reports of Irma's predicted track on most media outlets. Some mention the numerous models used to predict the track, but most just show what is really the average track from all those models. I thought it would be interesting to post a pic showing the "ensemble models". As you can see, the models are all over the map. Even one or two days out from landfall in Florida, the models are carpet bombing Cuba, all of Florida, and most of the southeastern United States.

    As far as weather goes, a hurricane is about as orderly as a storm can get: a very large circular pattern rotating counterclockwise, with well-measured properties, monitored by satellites and by hurricane hunter aircraft flying through it and dropping sondes to gather data at all levels. Yet one model gives a track that takes it to just below Newfoundland, and another has it tracking well above 50N latitude, into and beyond the entrance to the Northwest Passage! If I were living anywhere south of a line from New Orleans to central Illinois to Kitty Hawk, I'd be looking up relatives to visit for a week or two in the northwestern part of the country.

    And, just like Jeane Dixon, they casually ignore their many wrong predictions while letting others exclaim how accurate they were after the fact. And some of them predict global temperatures 50 years from now.


    And, an ensemble model of Irma from August 31st.

    Last edited by GreyGeek; Sep 08, 2017, 07:07 PM.
    "A nation that is afraid to let its people judge the truth and falsehood in an open market is a nation that is afraid of its people.”
    – John F. Kennedy, February 26, 1962.

    #2
    I've always had a thing about weather forecasters and economists. They are paid very well to make assessments (calculations) and predictions (forecasts). Then they are paid very well to explain why those predictions were wrong. You don't hear about many who are fired.
    "An intellectual says a simple thing in a hard way. An artist says a hard thing in a simple way." – Charles Bukowski



      #3
      Wouldn't you know it. They edited that 12-day-old ensemble (spaghetti) model to show just the current location and predicted path, which is only a two-day prediction. The original ensemble showed that an accurate prediction of a storm ten days in advance is still well beyond the power of their models. That speaks volumes about their 50-year models.





        #4


        So because hurricane projections are inaccurate, all other projections are inaccurate as well? That is not very logical.



          #5
          Originally posted by whatthefunk:
          ...

          So because hurricane projections are inaccurate, all other projections are inaccurate as well? That is not very logical.
          Sharp weatherman. Tells it like it is.

          Projections are based on mathematical models, each with its own assumptions, inclusions, and exclusions. That's why you get a spaghetti track, or ensemble, with each track representing a single run of a mathematical model. The Canadians posted their own ensemble, and so did the EU. All three were significantly different, both in the number of tracks and in their projections.

          Dr. Edward Lorenz, the MIT professor who first attempted computer modeling of weather back in 1963, found that small variations in input data produce large variations in prediction output. He discovered chaos, made the first plot of the butterfly graph of weather chaos, and postulated the limits of weather predictability because of it. In his last interview, in 2007, he said he had been hoping for an accurate two-week projection and was surprised (back then) that meteorologists had reached 10 days.
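Lorenz's rounding accident is easy to reproduce on any desktop today. The sketch below is my own toy code, not anyone's forecast model: it integrates his classic three-variable 1963 system twice, with initial states that differ by one part in a billion, and prints how far apart the two runs end up.

```python
# Two runs of the Lorenz-63 system, differing only by a tiny nudge in
# the initial state. Parameters are Lorenz's classic choices
# (sigma = 10, rho = 28, beta = 8/3); forward-Euler with a small step.

def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations by one forward-Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

def run(x0, y0, z0, steps=40000):
    """Integrate from (x0, y0, z0) for `steps` Euler steps (40 time units)."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

a = run(1.0, 1.0, 1.0)          # original run
b = run(1.0, 1.0, 1.0 + 1e-9)   # same run, one-billionth difference in z

sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(f"final separation between runs that started 1e-9 apart: {sep:.3f}")
```

The two trajectories shadow each other for a while and then decorrelate completely, which is exactly the effect that caps the useful forecast range.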
          More than 50 years ago, Massachusetts Institute of Technology (MIT) Professor Edward Lorenz conducted some numerical experiments with a simple 12-variable system representing convective processes. He had begun work on a statistical forecasting project, but disagreed with some of the thinking at the time, in particular that the primarily linear statistical methods could duplicate what the nonlinear methods achieved. He proposed to demonstrate this by performing numerical time integrations of his simple model with his newly acquired desktop computer. On one occasion he wanted to reexamine the results from an earlier simulation. Rather than rerun the simulation from the initial state, he decided to pick up the computations partway into the original run by using the printout from the earlier run as the starting point. To his astonishment the new simulation diverged significantly from the original. Eventually he realized that the initial values he used for the second simulation were rounded off from the initial run, so that the initial values of his second run were slightly different. The minor differences at initialization were magnified later in the run and led to very different end states. Lorenz (1963) concluded that if the real atmosphere evolved similarly to his numerical simulation, then very long-range prediction would not be possible. If Lorenz's work was valid, then this could significantly alter the course of long-range prediction history. What would Lorenz have to say about that? I requested an interview for the primary purpose of eliciting his views.

          ...

          R.R.-Did you realize the implications of your work at that point?
          E.L.-I never really expected them to spread to so many other fields. I think I realized the implications for meteorology, and some meteorologists didn't quite agree with what I had to say, but fortunately Charney did. And he was in a very influential position then. This was at the beginning of the Global Atmospheric Research Program (GARP), and one of the original aims of GARP had been to make two-week forecasts, and this suggested that they might be proved impossible before we even got started. So we were able to change the aim to investigate the feasibility of two-week forecasts, not promising that they would be possible. Now it begins to look as if the upper limit may be somewhere around two weeks, and I get the feeling that in another 20 years or so we may actually be making useful day-to-day forecasts up to the two-week range, though I don't think we are doing it now. But we got up to one week, which I didn't really expect at the time.

          ...

          R.R.-So you have at least been thinking about that problem?
          E.L.-I think the meteorological community accepted the idea of limitations to the forecasts. Of course, the idea wasn't new then. You can find it quite strongly expressed in some of the earlier papers, particularly one of the papers by Eady around 1950, where he points out that any forecast given is just one member of a large ensemble of possible forecasts and we have no real reason for selecting among those.

          R.R.-And he was saying that in 1950?

          A study published a year ago by NCAR/UCAR pointed out the variability of weather by comparing 50 model runs, each varying by one-trillionth of a degree (no typo) from the previous.
          One one-trillionth of a degree in the initial global atmospheric temperature input value – an amount so small as to be literally undetectable by modern instruments used to measure air temperatures. Running the simulations for just 50 years – from a starting time of 1963 to 2012 – gives results entirely in keeping with Lorenz's findings: "Two states differing by imperceptible amounts may eventually evolve into two considerably different states ... If, then, there is any error whatever in observing the present state—and in any real system such errors seem inevitable—an acceptable prediction of an instantaneous state in the distant future may well be impossible.... In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent."

          What is the import of Lorenz?

          Literally ALL of our collective data on historic "global atmospheric temperature" are known to be inaccurate to at least +/- 0.1 degrees C. No matter what initial value the dedicated people at NCAR/UCAR enter into the CESM for global atmospheric temperature, it will differ from reality (from actuality, the number that would be correct if it were possible to produce such a number) by many orders of magnitude more than the one-trillionth of a degree difference used to initialize these 30 runs in the CESM Large Ensemble.
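The trillionth-of-a-degree experiment can be mimicked with any chaotic recurrence. The sketch below is purely illustrative: the logistic map is not a climate model, but it has the same sensitive dependence on initial conditions, so 30 "members" started one part in a trillion apart still scatter widely.

```python
# Toy stand-in for a perturbed-initial-condition ensemble: iterate a
# chaotic map from 30 starting values that differ by only 1e-12, then
# look at how far apart the end states land.

def run_member(x0, steps=200, r=3.9):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# 30 ensemble members, initial values one part in a trillion apart.
members = [run_member(0.3 + i * 1e-12) for i in range(30)]
spread = max(members) - min(members)
print(f"end-state spread across 30 members: {spread:.3f}")
```

After a couple hundred iterations the 1e-12 differences have been amplified to order one. The point is the amplification, not the particular map: any system with this property makes the choice of an unmeasurably precise initial value decisive for the long-range outcome.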
          What is beyond dispute is that the AGW crowd has been steadily revisiting historical records and "adjusting" them to fit their models. This came after throwing out all the raw temperature data from 1960 and replacing it with "synthetic data".

          If you really want to know what condition the data behind Mann's 1998 Hockey Stick graph was in, take a look at HARRY_READ_ME.txt in the CRU zip file the whistleblower released in 2009. Just search for the usual profanity.

