Microsoft researchers have developed the Algorithm of Thoughts (AoT), a paradigm shift in how AI solves problems.
The AoT was developed to make LLMs think more like humans do and to become more efficient at problem-solving. Microsoft claims that its new approach combines the “nuances of human reasoning and the disciplined precision of algorithmic methodologies.”
The current Chain of Thought process that LLMs like ChatGPT use relies on statistical patterns to go from prompt to output. It's a very linear progression from problem to solution, with the LLM breaking the problem down into smaller steps.
The problem with this approach is that the training data isn't always sufficient, so some steps can be missing. When this happens, the LLM gets creative and hallucinates to fill the gaps, producing an incorrect response.
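To make the linear nature of this process concrete, here is a minimal sketch of a Chain of Thought-style decomposition. The "reasoning" is hand-coded arithmetic on a toy word problem, standing in for the step-by-step text a real LLM would generate; each step feeds directly into the next, with no branching and no way back.

```python
# Toy illustration of Chain of Thought: a strictly linear chain of steps,
# each building on the result of the previous one. The arithmetic here is a
# hand-coded stand-in for an LLM's generated reasoning text.

def chain_of_thought(apples_start: int, eaten: int, bought: int) -> list[str]:
    steps = []
    after_eating = apples_start - eaten                        # step 1
    steps.append(f"Step 1: {apples_start} - {eaten} = {after_eating}")
    after_buying = after_eating + bought                       # step 2 uses step 1
    steps.append(f"Step 2: {after_eating} + {bought} = {after_buying}")
    steps.append(f"Answer: {after_buying}")                    # final answer
    return steps

for line in chain_of_thought(10, 3, 5):
    print(line)
```

Because each step depends only on the one before it, a single wrong or missing intermediate result propagates straight to the final answer, which is exactly where hallucinated gap-filling does its damage.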
A more advanced technique that some LLMs use is to find a solution using the Tree of Thought approach. The LLM follows multiple linear paths from problem to solution and stops when it hits an unviable solution.
But this involves a lot of queries and is very memory and computing resource-hungry.
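A toy search makes the cost visible. The sketch below explores every candidate path through a small tree (here, sequences of digits that must sum to a target), abandoning a branch only once it is provably unviable. Counting each evaluated node as one stand-in "LLM query" shows how quickly the query count grows; the puzzle and the counter are illustrative assumptions, not anything from the Microsoft paper.

```python
# Toy sketch of Tree of Thought: explore many paths from problem to solution,
# stopping a branch only when it becomes unviable. Each evaluated node stands
# in for one LLM query, so the counter illustrates the resource cost.

def tree_of_thought(target: int, depth: int, digits=range(10)):
    queries = 0
    solutions = []

    def explore(path, total):
        nonlocal queries
        queries += 1                      # one "LLM query" per evaluated node
        if total > target:                # unviable branch: stop here
            return
        if len(path) == depth:
            if total == target:
                solutions.append(path)    # viable complete path
            return
        for d in digits:                  # fan out to every next step
            explore(path + [d], total + d)

    explore([], 0)
    return solutions, queries

sols, n = tree_of_thought(target=5, depth=2)
print(len(sols), "solutions found after", n, "queries")  # 6 solutions, 71 queries
```

Even on this tiny problem, finding all six answers takes 71 node evaluations; with more steps or more options per step, the tree (and the query bill) explodes.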
What makes AoT better?
With AoT, the algorithm evaluates the first steps of a potential solution and decides early on whether an approach is worth pursuing. This stops it from heading stubbornly down an obviously wrong path and then having to make something up.
Also, instead of a linear approach, AoT gives the LLM the ability to search through multiple potential solutions and even backtrack where necessary. Instead of starting at the beginning again when it hits a dead end, it can go back to the previous step and keep exploring.
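Those two ideas, judging a partial path early and backtracking one step rather than starting over, can be sketched on the same kind of toy puzzle. Here the early judgement is a hand-coded bounds check standing in for the model's own assessment of a partial solution; the search cuts hopeless branches immediately, backtracks to the previous step, and stops at the first viable answer.

```python
# Toy sketch of an AoT-style search: score the first steps of a partial
# solution cheaply, cut hopeless branches early, and backtrack one step
# instead of restarting. The `promising` heuristic is an assumed stand-in
# for the model's early judgement, not part of any real AoT implementation.

def aot_search(target: int, depth: int, digits=range(10)):
    queries = 0
    max_digit = max(digits)

    def promising(total, steps_left):
        # Early assessment: can this partial path still reach the target?
        return total <= target and total + steps_left * max_digit >= target

    def explore(path, total):
        nonlocal queries
        queries += 1                              # one "LLM query" per node
        steps_left = depth - len(path)
        if not promising(total, steps_left):      # cut the branch early...
            return None                           # ...and backtrack one step
        if steps_left == 0:
            return path                           # viable complete solution
        for d in digits:
            found = explore(path + [d], total + d)
            if found is not None:
                return found                      # stop at the first solution
        return None

    solution = explore([], 0)
    return solution, queries

sol, n = aot_search(target=5, depth=2)
print("solution:", sol, "after", n, "queries")    # finds [0, 5] in 8 queries
```

On the same puzzle, this search reaches a valid answer in 8 node evaluations rather than exhaustively enumerating every path, which is the efficiency gain the early-pruning-plus-backtracking idea is after.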
The current approach LLMs use is like driving from your home to your destination, getting lost, and then heading back home to try a different route. The AoT approach is to just head back to your last wrong turn and try a new route from that point.
This “in-context learning” approach enables the model to be a lot more structured and systematic in the way it solves problems. It's also a lot less resource-hungry and could eradicate the problem LLMs have with hallucinating.
Even with this novel approach, AI is still some way off from actually thinking and reasoning the way humans do. With AoT, it seems like a significant step has been made in that direction though.
One conclusion the researchers made from their experiments was that their “results suggest that instructing an LLM using an algorithm can lead to performance surpassing that of the algorithm itself.”
That's exactly what our brains do. We have the inherent ability to learn skills we didn't have before. Imagine if a tool like ChatGPT were able to learn through reasoning without the need for further training.
This new approach could also lead to AI being more transparent in its “thinking” process, giving us an insight into what’s actually happening behind the code.