Large Language Models (LLMs) are a promising technology for supporting question answering on enterprise SQL data (i.e. Text-to-SQL). Knowledge Graphs are likewise a promising technology for enhancing LLM-based question answering by providing the business context that LLMs lack. However, it is not well understood to what extent Knowledge Graphs can increase the accuracy of LLM-powered question answering systems on SQL databases. Our research aims to understand and quantify this extent. First, we introduce a benchmark comprising an enterprise SQL schema in the insurance domain, a range of enterprise queries spanning reporting to metrics, and a contextual layer consisting of an ontology and mappings that define a Knowledge Graph. The experiments reveal that question answering using GPT-4, with zero-shot prompts directly on SQL databases, achieves an accuracy of 16%. Notably, this accuracy increases to 54% when questions are posed over a Knowledge Graph representation of the enterprise SQL database. Second, we present an approach that leverages the ontology of the Knowledge Graph to deterministically detect incorrect queries generated by the LLM and repair them. Experimental results show that the accuracy increases to 72.55%, including an additional 8% of "I don't know" (unknown) results; thus, the overall error rate is 20%. The conclusion is that investing in Knowledge Graphs provides higher accuracy for LLM-powered question answering.
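The ontology-based detection step can be illustrated with a minimal sketch. This is not the paper's implementation: the toy ontology, the `ins:` prefix convention, and the regex-based property extraction are all illustrative assumptions. The idea it demonstrates is that because an ontology declares exactly which properties exist, a generated query referencing an undeclared property can be flagged deterministically, without calling the LLM again.

```python
import re

# Hypothetical miniature ontology: classes mapped to their declared properties.
# The paper's ontology covers an insurance-domain schema; this is a toy stand-in.
ONTOLOGY = {
    "Policy": {"hasPolicyNumber", "hasPremium", "coversClaim"},
    "Claim": {"hasClaimAmount", "filedOnDate"},
}

def validate_query_properties(generated_query: str) -> tuple[bool, set]:
    """Deterministically check that every property referenced in an
    LLM-generated query is declared in the ontology.

    Properties are assumed to appear with an `ins:` prefix (an
    illustrative convention, not the benchmark's actual namespace).
    Returns (is_valid, set_of_undeclared_properties).
    """
    declared = set().union(*ONTOLOGY.values())
    used = set(re.findall(r"ins:(\w+)", generated_query))
    hallucinated = used - declared
    return (not hallucinated, hallucinated)

# A generated query that invents a property the ontology does not declare:
query = "SELECT ?p WHERE { ?p ins:hasPremium ?x ; ins:hasDeductible ?d . }"
ok, bad = validate_query_properties(query)
print(ok, bad)  # False {'hasDeductible'}
```

A detected undeclared property can then trigger a repair attempt (e.g. re-prompting with the valid property list) or an explicit "I don't know" answer, which is one way the reported unknown results could arise.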