Reader discretion is advised!
A good Sus-Saturday to all. In this article, I drag you down and drown you in my paranoia about AI. Let the mass psychosis begin!
What are AI hallucinations?
AI hallucinations are incorrect or misleading results that AI models generate.
Story Time
I was looking for "colour quantization algorithms" for a project (I put these words together in my brain and it was an actual thing!). As the name suggests, I wanted to reduce the number of distinct colours in an image. Anyway, I googled it and all the answers were about the K-means clustering algorithm.
Now, K-means is quite slow for my use case, so I went to ChatGPT instead. It spat out 5 different algorithms; out of them I selected the "Median Cut" algorithm and asked it to write the code in Python. It returned code using the Pillow library. I then asked it to rewrite the code using OpenCV and numpy, and it returned another piece of code.
These two pieces of code had slightly different implementations. Neither ran on the first go, but after some refactoring, they both yielded results. Different results!
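For context, here is roughly what the Median Cut idea looks like: keep splitting the bucket of pixels with the widest colour spread at its median until you have the number of colours you want, then average each bucket. Below is a minimal numpy sketch of my understanding of the algorithm, not either of the pieces of code ChatGPT gave me; the function name and parameters are just placeholders.

```python
import numpy as np

def median_cut(image, n_colours=16):
    """Rough sketch of Median Cut colour quantization (illustrative only).
    Assumes an H x W x 3 uint8 image with at least n_colours distinct colours."""
    pixels = image.reshape(-1, 3).astype(np.int64)
    buckets = [np.arange(len(pixels))]            # start with every pixel in one bucket

    while len(buckets) < n_colours:
        # Pick the bucket whose widest colour channel spans the largest range
        ranges = [np.ptp(pixels[idx], axis=0).max() for idx in buckets]
        widest = buckets.pop(int(np.argmax(ranges)))

        # Sort that bucket along its widest channel and split it at the median
        channel = int(np.ptp(pixels[widest], axis=0).argmax())
        order = np.argsort(pixels[widest][:, channel])
        mid = len(order) // 2
        buckets += [widest[order[:mid]], widest[order[mid:]]]

    # Replace every pixel with the average colour of its bucket
    out = np.empty_like(pixels)
    for idx in buckets:
        out[idx] = pixels[idx].mean(axis=0)
    return out.reshape(image.shape).astype(np.uint8)
```

In the OpenCV/numpy version I'd expect to feed it the array from cv2.imread() directly; the channel order (BGR vs RGB) doesn't matter for the quantization itself.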
Okay, I already had the algorithm's name, so I googled "median cut algorithm python" and found a GitHub link as the second result. That code had yet another different implementation! Fine, I can easily find papers and read articles about this algorithm, and my issue will be resolved.
That's where the paranoia began!
What if this code is AI-generated? It's five years old, so it probably isn't. But five years down the line, GitHub will be full of mediocre code and misunderstood concepts. I see new junior engineers using ChatGPT daily to write small components of code. If there is some dumb code, PR reviewers will catch it. But in seven years, those juniors will be the managers and PR reviewers. What then?
Today, every undergrad uses ChatGPT for their thesis. Heck, if I'd had access to it, I would have used it too. When these undergrads do their PhDs, will they use ChatGPT? Yes. From generating topics to writing abstracts, conclusions, even bibliographies, GPT can do it all.
What about checking these papers? One can certainly build an AI to check them. Even if there were a law requiring human review of every scientific report or article, a lazy-smart person would try to automate it. Guess what they would use: AI or maths?
We can only hope that any hallucinated or out-of-context/sarcastic reference that a student's AI interprets as useful gets caught by the teacher's AI.
BTW, this is what ChatGPT 3.5 has to say about it:
> AI Hallucinations will lead to a mediocre future
While AI hallucinations can be intriguing from a research perspective, they don't necessarily imply a "mediocre future." In fact, AI holds immense potential to revolutionize various fields, from healthcare to transportation, and from education to entertainment.
AI being biased about AI is probably not ideal. I want my AI to be an insecure nerd (like me), not a confident politician/cult leader.
Conclusion
With the invention of the pen, we gained so much, but we lost the power of memorization. With AI, rapid development is imminent, but I feel the loss will be far too great.
We lost our muscles and survival instincts when we left the caves. Heck, without a flint stone, it would take me ages to light a fire at camp. With mediocre code and misunderstood whitepapers, it seems the time has come for our brains to regress, or halt.
What Next?
What next? Nothing, mate! Even with all this said, AI will not stop; Skynet is inevitable. Let me give you a taste of what I feel. Ask yourself a simple question: who wrote this article?