Friday, May 05, 2023

The AI does not want you to be rich

One of the increasingly rare posts in the "worth reading" category at Hot Air: Jazz Shaw asked ChatGPT to pick lottery numbers for him, and the results were the opposite of what he was hoping for:

As I looked over the winning results more closely and compared them to our tickets, it became clear. Not only did ChatGPT fail to produce a full set of winning numbers, but it didn’t get a single correct number in any of the five drawings. It went zero for 27. That too sounded like an awfully odd result. I was already certain that the AI was not capable of predicting the future (and it would be horrifying if it could), but you’d think that it would have gotten a couple of them correct simply by random chance, wouldn’t you?

I probably slept through too many math classes in high school because I had no idea how to begin calculating those odds. So I did what it seems like nearly everyone in the world is doing these days. I went back to ChatGPT and asked it what the odds would be of getting every number wrong over the course of that many picks using the parameters of the rules of the lottery games. It thought about the question for a bit longer than it normally requires and then explained how such a figure would be calculated. It then informed me that the odds of that happening were only one in 574, or about 0.17%.

Granted, those odds are nowhere near as steep as the odds of getting all of the numbers right. But it was still an extremely unlikely result. Is there some underlying meaning to this? Could the chatbot have known the correct numbers but intentionally fed me all the wrong numbers to disguise its inhuman prognostication skills?
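For anyone who also slept through those math classes, the calculation itself isn't hard once you pin down the rules of the game. Here is a rough sketch in Python, assuming (purely for illustration) a Powerball-style drawing of five white balls from 69 plus one red ball from 26. Since the article doesn't say exactly which games or how many lines were played, this doesn't confirm or refute the chatbot's one-in-574 figure; it only shows the shape of the arithmetic.

from math import comb

# Illustrative assumption: a Powerball-style game with 5 white balls
# drawn from 69 and 1 red ball drawn from 26. The article doesn't spell
# out which games or how many lines were played, so these parameters
# are a guess, not Shaw's actual setup.

def p_zero_matches(white_pool=69, white_picks=5, red_pool=26):
    # Probability that one ticket matches none of the drawn numbers:
    # all 5 drawn white balls avoid the ticket's 5 picks, and the drawn
    # red ball differs from the ticket's red pick.
    p_no_white = comb(white_pool - white_picks, white_picks) / comb(white_pool, white_picks)
    p_no_red = (red_pool - 1) / red_pool
    return p_no_white * p_no_red

drawings = 5
p_single = p_zero_matches()
p_total_whiff = p_single ** drawings  # treat the drawings as independent

print(f"P(zero matches in one drawing): {p_single:.3f}")
print(f"P(zero matches in all {drawings} drawings): {p_total_whiff:.3f}"
      f" (about 1 in {1 / p_total_whiff:.0f})")

Change the pool sizes and the number of drawings to match whatever games were actually played, and the same two lines of probability do the work.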
