Navigating the Fermi Multiverse: Assessing LLMs for Complex Multi-hop Queries
Published in NL4AI, 2023
In this paper, we evaluate several advanced LLMs on Fermi problems, a reasoning challenge that requires answering complex multi-hop queries, and explore how their performance is affected by their size (i.e., the number of parameters). We also investigate how these models behave under different levels of supervision, ranging from full supporting evidence to no evidence at all. Furthermore, we compare the two primary methods of adapting these LLMs, fine-tuning and few-shot learning, using the Chain-of-Thought approach.
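To make the few-shot Chain-of-Thought setup concrete, the sketch below shows one common way such a prompt can be assembled: a worked exemplar with explicit intermediate steps is prepended to the new question. This is an illustrative assumption, not the paper's exact prompt; the exemplar, question, and function name are hypothetical.

```python
# Illustrative sketch (not the paper's exact prompt): constructing a
# few-shot Chain-of-Thought prompt for a Fermi-style multi-hop question.
# The exemplar and the test question below are hypothetical examples.

FEW_SHOT_EXEMPLAR = """Q: How many piano tuners are there in Chicago?
A: Let's think step by step.
Chicago has roughly 2.7 million people, or about 1 million households.
Assume 1 in 20 households owns a piano: about 50,000 pianos.
A piano is tuned about once a year, and a tuner services ~1,000 pianos per year.
So roughly 50,000 / 1,000 = 50 tuners.
The answer is 50."""


def build_cot_prompt(question: str) -> str:
    """Prepend a worked Chain-of-Thought exemplar to a new question."""
    return (
        f"{FEW_SHOT_EXEMPLAR}\n\n"
        f"Q: {question}\n"
        f"A: Let's think step by step.\n"
    )


prompt = build_cot_prompt("How many golf balls would fit in a school bus?")
print(prompt)
```

In the few-shot condition, a prompt like this is sent to the model as-is; in the fine-tuning condition, question and step-by-step rationale pairs of the same shape are instead used as training data.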