Most of us think of AI, especially Large Language Models (LLMs) like ChatGPT, as machines that can think and reason like humans. But is that really true?
Apparently not. Even AI experts are still scratching their heads, trying to figure out how these models actually work.
The Mystery Behind How AI Works
At first glance, LLMs look like next-word prediction tools: they take a string of words and predict what comes next. It seems simple. And yet their output can sometimes make it look like they're reasoning just as we do.
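At its core, that prediction step can be pictured like this. Here's a toy sketch in Python (the vocabulary and probabilities are invented for illustration; a real LLM computes them with a neural network over tens of thousands of tokens):

```python
# Toy next-word prediction: given the prompt "The cat sat on the ...",
# suppose the model assigns these (made-up) probabilities to candidates.
next_word_probs = {
    "mat": 0.55,
    "sofa": 0.25,
    "roof": 0.15,
    "moon": 0.05,
}

# Greedy decoding: pick the single most probable next word.
prediction = max(next_word_probs, key=next_word_probs.get)
print(prediction)  # -> mat
```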
However, the reality is a bit different.
Take this example: when asked to multiply a number by 1.8 and then add 32, GPT-4, a famous LLM, gets it right about half the time. Why those numbers? Because that's the formula for converting Celsius to Fahrenheit, a calculation it has presumably seen countless times in its training data. But change those numbers a bit, and it's left clueless.
Strange, right? Especially when even kids can easily adjust to such tweaks.
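For comparison, here is the entire calculation in a few lines of Python (a minimal sketch; the function name and sample values are ours, not drawn from any GPT-4 evaluation):

```python
def celsius_to_fahrenheit(c: float) -> float:
    # Multiply by 1.8 (i.e., 9/5), then add 32: the standard conversion.
    return c * 1.8 + 32

print(f"{celsius_to_fahrenheit(100):.1f}")  # 212.0, water's boiling point
print(f"{celsius_to_fahrenheit(37):.1f}")   # 98.6, normal body temperature
```

Change 1.8 and 32 to any other constants and the function still works. That trivial generalization is exactly where the model stumbles.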
Millions of folks use ChatGPT daily without knowing anything about how AI works. One wonders how many of them realize these tools have limits.
Generative AI is great for tasks the LLMs have seen before. But throw them a curveball, and they might strike out.
And while this fact is not hidden, not everyone reads the fine print. OpenAI's website does warn that "ChatGPT might get facts wrong." Some suggest that admonition is too vague, among them New York lawyer Steven A. Schwartz, who learned its limits the hard way after filing a brief built on cases ChatGPT had invented.
The Mind Games of AI
Now, let’s dive into another layer of how AI works on our minds.
Have you ever been told something and then found it hard to think any other way? That's roughly how it works with AI, too. Our beliefs, and what we're told beforehand, can shape how we perceive and trust AI tools; at least, that's what recent research shows.
Research on How AI Works
Participants were asked to chat with a mental health chatbot. Before they started, they were primed with different descriptions: some were told the bot was caring, others heard it was out to trick them, and a few were told nothing about the bot's goals. The twist? All of them chatted with the exact same bot.
How AI Works Can Fool Us
Most of those told the bot was caring came away feeling it cared. Those told it had no goals largely took it at face value. And here's the kicker: only a few of the people warned that the bot would try to trick them actually felt it was being sneaky.
What does this tell us? People like to make up their own minds; even with a heads-up, they want to see for themselves. But it also shows how easily our views on AI can be shaped.
A simple hint can change how we see and trust these tools.
Conclusion
AI is complex, and we're all still learning about it, even those at the top of the game.
But one thing’s clear: our minds can shape our AI experiences in ways we might not even realize.
And if you’re a lawyer, you might want to check this out.
Discover how to transform your law practice with the 5 Tech Pillars.