My task for our AI overlords was simple: help me crack the New York Times Spelling Bee.

I had spent a large chunk of a Saturday evening trying to shape the letters G, Y, A, L, P, O and N into as many words as possible. But three hours, 141 points and 37 words — including “nonapology”, “lagoon” and “analogy” — later, I had hit a wall. A few more words were all I needed to propel myself into Spelling Bee’s “genius” echelon, the title reserved for those who unscramble 70 percent of all possible words using the given letters, and the point at which the puzzle considers itself, effectively, solved. (100 percent solvers are bestowed the rank of “queen bee,” but even I have limits.)

My human mind was clearly struggling, but this task seemed like child’s play for AI, so I fired up ChatGPT, told it I was trying to win the Spelling Bee, gave it my letters and laid out the rules. “Give me a list of every word that can be formed with them,” I typed. “You’re allowed to repeat letters, but each word must contain ‘N’.”

When I hit enter, this is what ChatGPT spat back:

Here is a list of words formed with the letters “G Y A L P O N” that must contain the letter “N”:
GLNPAYO
YPNL
PGNOYL
ONAPYLG
LGNAPYO
ANLY
NAYGOPL
GYPON
AOGPLNY
PNAOGLY
This is just a sample of the list due to the extensive number of permutations possible.

Not a great start.

“The words need to be in the dictionary,” I typed back. “Sure,” came the response. “I’ll provide a list of valid English words based on the given letters.” ANGLY, GALON, LANG.

Clearly, this wasn’t working, so I turned to Microsoft’s Copilot (YANG, PLAYING, PLANNING, ANNOYINGLY), Google’s Gemini (GAPON, GON, GIAN), and Anthropic’s Claude (MANGO, ONGOING, LAWN17.LAY). Meta AI helpfully told me that it made sure to only include words that are recognized by dictionaries in a list that contained NALYP and NAGY, while Perplexity — a chatbot with ambitions of killing Google Search — simply wrote GAL hundreds of times before freezing abruptly.

Perplexity, a chatbot with ambitions of killing Google Search, went to pieces when asked to form words from a set of letters. (Screenshot by Pranav Dixit / Engadget)

AI can now create images, video and audio as fast as you can type in descriptions of what you want. It can write poetry, essays and term papers. It can also be a pale imitation of your girlfriend, your therapist and your personal assistant. And lots of people think it’s poised to automate humans out of jobs and transform the world in ways we can scarcely begin to imagine. So why does it suck so hard at solving a simple word puzzle?

The answer lies in how large language models, the underlying technology that powers our modern AI craze, function. Computer programming is traditionally logical and rules-based; you type out commands that a computer follows according to a set of instructions, and it provides a valid output. But machine learning, of which generative AI is a subset, is different.
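To see how different the two approaches are, consider that the Spelling Bee yields to a rules-based program in a few lines: filter a dictionary against the puzzle's constraints. Here's a minimal sketch in Python — the word list is a tiny made-up stand-in for illustration, not the puzzle's real dictionary:

```python
# A conventional, rules-based Spelling Bee solver: keep only words of
# four or more letters, built solely from the puzzle's letters, that
# contain the required letter.
LETTERS = set("gyalpon")
REQUIRED = "n"

def solve(words):
    """Return the puzzle-valid words from a candidate word list."""
    return sorted(
        w for w in words
        if len(w) >= 4 and REQUIRED in w and set(w) <= LETTERS
    )

# A tiny stand-in word list; a real run would load a full dictionary file.
sample = ["lagoon", "analogy", "nonapology", "gallop", "plaza", "only"]
print(solve(sample))  # → ['analogy', 'lagoon', 'nonapology', 'only']
```

Every step here is an explicit rule the computer follows deterministically — exactly the kind of logic a statistical next-word predictor never executes.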

“It’s purely statistical,” Noah Giansiracusa, a professor of mathematics and data science at Bentley University, told me. “It’s really about extracting patterns from data and then pushing out new data that largely fits those patterns.”

OpenAI did not respond on the record, but a company spokesperson told me that this type of “feedback” helps OpenAI improve the model’s comprehension and responses to problems. “Things like word structures and anagrams aren’t a common use case for Perplexity, so our model isn’t optimized for it,” Perplexity spokesperson Sara Platnick told me. “As a daily Wordle/Connections/Mini Crossword player, I’m excited to see how we do!” Microsoft and Meta declined to comment, while Google and Anthropic did not respond by publication time.

At the heart of large language models are “transformers,” a technical breakthrough made by researchers at Google in 2017. Once you type in a prompt, a large language model breaks down words or fractions of those words into mathematical units called “tokens.” Transformers are capable of analyzing each token in the context of the larger dataset that a model is trained on to see how they’re connected to each other. Once a transformer understands these relationships, it is able to respond to your prompt by guessing the next likely token in a sequence. The Financial Times has a terrific animated explainer that breaks this all down if you’re interested.
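In very rough terms, that pipeline can be sketched as follows. The toy vocabulary and probabilities below are invented purely for illustration — real models use vocabularies of tens of thousands of tokens, and the probabilities come from the transformer itself:

```python
# Toy illustration of next-token prediction: the model operates on token
# IDs, not letters, and picks the statistically likeliest continuation.
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
inv_vocab = {i: w for w, i in vocab.items()}

def tokenize(text):
    """Map each word to its token ID (real tokenizers also split words)."""
    return [vocab[w] for w in text.split()]

# Pretend the model assigns these probabilities to the token that follows
# the prompt "the cat" -- in reality a transformer computes these.
next_token_probs = {0: 0.05, 1: 0.02, 2: 0.90, 3: 0.03}

prompt_ids = tokenize("the cat")  # [0, 1]
next_id = max(next_token_probs, key=next_token_probs.get)
print(inv_vocab[next_id])  # → sat
```

Note what's missing: nothing in this process ever inspects the individual letters inside a word, which is precisely what a Spelling Bee demands.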

I thought I was giving the chatbots precise instructions to generate my Spelling Bee words, but all they were doing was converting my words to tokens and using transformers to spit back plausible responses. “It’s not the same as computer programming or typing a command into a DOS prompt,” said Giansiracusa. “Your words got translated to numbers and they were then processed statistically.” A purely logic-based query, it seems, was just about the worst application for AI’s skills – akin to trying to turn a screw with a resource-intensive hammer.

The success of an AI model also depends on the data it’s trained on. This is why AI companies are feverishly striking deals with news publishers right now — the fresher the training data, the better the responses. Generative AI, for instance, sucks at suggesting chess moves, but it is at least marginally better at that task than at solving word puzzles. Giansiracusa points out that the glut of chess games available on the internet is almost certainly included in the training data for existing AI models. “I would suspect that there just are not as many annotated Spelling Bee games online for AI to train on as there are chess games,” he said.

“If your chatbot seems more confused by a word game than a cat with a Rubik’s cube, that’s because it wasn’t especially trained to play complex word games,” said Sandi Besen, an artificial intelligence researcher at Neudesic, an AI company owned by IBM. “Word games have specific rules and constraints that a model would struggle to abide by unless specifically instructed to during training, fine tuning or prompting.”

None of this has stopped the world’s leading AI companies from marketing the technology as a panacea, often grossly exaggerating claims about its capabilities. In April, both OpenAI and Meta boasted that their new AI models would be capable of “reasoning” and “planning.” In an interview, OpenAI’s chief operating officer Brad Lightcap told the Financial Times that the next generation of GPT, the AI model that powers ChatGPT, would show progress on solving “hard problems” such as reasoning. Joelle Pineau, Meta’s vice president of AI research, told the publication that the company was “hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan…to have memory.”

My repeated attempts to get GPT-4o and Llama 3 to crack the Spelling Bee failed spectacularly. When I told ChatGPT that GALON, LANG and ANGLY weren’t in the dictionary, the chatbot said that it agreed with me and suggested GALVANOPY instead. When I mistyped the word “sure” as “sur” in my response to Meta AI’s offer to come up with more words, the chatbot told me that “sur” was, indeed, another word that can be formed with the letters G, Y, A, L, P, O and N.

Clearly, we’re still a long way away from Artificial General Intelligence, the nebulous concept describing the moment when machines are capable of doing most tasks as well as or better than human beings. Some experts, like Yann LeCun, Meta’s chief AI scientist, have been outspoken about the limitations of large language models, claiming that they will never reach human-level intelligence since they don’t really use logic. At an event in London last year, LeCun said that the current generation of AI models “just do not understand how the world works. They’re not capable of planning. They’re not capable of real reasoning,” he said. “We do not have completely autonomous, self-driving cars that can train themselves to drive in about 20 hours of practice, something a 17-year-old can do.”

Giansiracusa, however, strikes a more cautious tone. “We don’t really know how humans reason, right? We don’t know what intelligence actually is. I don’t know if my brain is just a big statistical calculator, kind of like a more efficient version of a large language model.”

Perhaps the key to living with generative AI without succumbing to either hype or anxiety is to simply understand its inherent limitations. “These tools are not actually designed for a lot of things that people are using them for,” said Chirag Shah, a professor of AI and machine learning at the University of Washington, who co-wrote a high-profile research paper in 2022 critiquing the use of large language models in search engines. Tech companies, Shah thinks, could do a much better job of being transparent about what AI can and can’t do before foisting it on us. That ship may have already sailed, however. Over the last few months, the world’s largest tech companies – Microsoft, Meta, Samsung, Apple and Google – have pledged to weave AI tightly into their products, services and operating systems.

“The bots suck because they weren’t designed for this,” Shah said of my word game conundrum. Whether they suck at all the other problems tech companies are throwing them at remains to be seen.

How else have AI chatbots failed you? Email me at [email protected] and let me know!

Update, June 13 2024, 4:19 PM ET: This story has been updated to include a statement from Perplexity.
