
Chain-of-Thought Prompting

Chain-of-thought prompting is a technique where you instruct a language model to reason step by step before giving a final answer. Instead of jumping straight to a conclusion, the model works through intermediate steps, which tends to produce more accurate answers because errors can be caught and corrected mid-process. The technique works because LLMs generate text sequentially: each token is conditioned on everything written so far.

When forced to articulate intermediate reasoning, the model effectively proofreads its own logic before committing to a conclusion. Research shows chain-of-thought prompting substantially improves performance on arithmetic, logic, and multi-step reasoning tasks. Zero-shot chain-of-thought simply appends "Let's think step by step" to a prompt.
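The zero-shot variant can be sketched as a simple prompt transformation. A minimal example, where `ask_model` would be your actual LLM API call (hypothetical here):

```python
# Zero-shot chain-of-thought: append the trigger phrase to any question.
COT_TRIGGER = "Let's think step by step."

def zero_shot_cot_prompt(question: str) -> str:
    """Build a zero-shot chain-of-thought prompt from a plain question."""
    return f"{question}\n\n{COT_TRIGGER}"

prompt = zero_shot_cot_prompt(
    "A shop sells pens at 3 for $2. How much do 12 pens cost?"
)
print(prompt)
# The resulting prompt would then be sent to the model, e.g.:
# answer = ask_model(prompt)  # hypothetical API call
```

The entire technique is the trigger phrase; no examples or fine-tuning are required.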

Few-shot chain-of-thought provides example reasoning chains before the actual question. Modern reasoning models like o1 and o3 apply chain-of-thought internally before producing output, which is why they are slower but more accurate on complex problems.
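The few-shot variant can be sketched as prompt assembly: worked examples whose answers show explicit intermediate reasoning are prepended to the real question. The example question and reasoning chain below are illustrative, not from any particular benchmark:

```python
# Few-shot chain-of-thought: prepend worked Q/A pairs whose answers
# spell out intermediate reasoning, then append the actual question.
EXAMPLES = [
    (
        "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
        "How many balls does he have now?",
        "Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.",
    ),
]

def few_shot_cot_prompt(question: str) -> str:
    """Build a few-shot CoT prompt: reasoning examples, then the question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in EXAMPLES]
    parts.append(f"Q: {question}\nA:")  # trailing "A:" cues the model to answer
    return "\n\n".join(parts)

prompt = few_shot_cot_prompt(
    "A farmer has 3 fields with 12 rows of 8 plants each. "
    "How many plants are there in total?"
)
print(prompt)
```

Because the examples demonstrate the desired answer format, the model tends to imitate the step-by-step structure when completing the final `A:`.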