For technology adopters looking for the next big thing, "agentic AI" is the future. At least, that's what the marketing pitches and tech industry T-shirts say.
What makes an artificial intelligence product "agentic" depends on who's selling it. But the promise is usually that it's a step beyond today's generative AI chatbots.
Chatbots, however useful, are all talk and no action. They can answer questions, retrieve and summarize information, write papers and generate images, music, video and lines of code. AI agents, by contrast, are supposed to be able to take actions autonomously on a person's behalf.
If you're confused, you're not alone. Google searches for "agentic" skyrocketed from near obscurity a year ago to a peak this fall. Merriam-Webster hasn't added it to the dictionary but lists "agentic" as a slang or trending term defined as: "Able to accomplish results with autonomy, used especially in reference to artificial intelligence."
A new report Tuesday by researchers at the Massachusetts Institute of Technology and the Boston Consulting Group, who surveyed more than 2,000 business executives around the world, describes agentic AI as a "new class of systems" that "can plan, act, and learn on their own."
"They are not just tools to be operated or assistants waiting for instructions," says the MIT Sloan Management Review report. "Increasingly, they behave like autonomous teammates, capable of executing multistep processes and adapting as they go."
How to know if it's an AI agent or just a fancy chatbot
AI chatbots — such as the original ChatGPT that debuted three years ago this month — rely on systems called large language models that predict the next word in a sentence based on the huge trove of human writings they've been trained on. They can sound remarkably human, especially when given a voice, but are effectively performing a kind of word completion.
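The word-completion idea can be illustrated with a deliberately tiny sketch. This is not how a real large language model works — actual models use neural networks trained on billions of words — but a toy bigram counter shows the basic principle of predicting the next word from patterns in training text. All names and the sample text here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word follows which in a tiny
# "training" text, then predict the most common follower.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

# Map each word to a tally of the words seen immediately after it.
bigrams = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — seen after "the" more than any other word
```

A real model does something far richer, weighing entire passages of context rather than a single preceding word, but the underlying task — choose a likely next word — is the same.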