Strategies to mitigate hallucinations in large language models
Abstract
For enterprise-level applications, the construction and utilisation of large language models (LLMs) is of paramount significance, and with it comes the crucial task of mitigating hallucinations. These instances of generating factually inaccurate information pose challenges both during the initial development of LLMs and during subsequent refinement through prompt engineering. This paper examines a variety of approaches aimed at alleviating these challenges, including retrieval augmented generation, advanced prompting methodologies, harnessing the power of knowledge graphs and constructing entirely new LLMs from scratch. The paper also underscores the indispensable role of human oversight and user education in addressing this evolving issue. As the field continues to develop, continuous vigilance and adaptation remain essential, with a focus on refining strategies to combat hallucinations in LLMs effectively.
The full article is available to subscribers to the journal.
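The full article sits behind the paywall, so the sketch below is not drawn from it; it is a minimal, purely illustrative rendering of the first approach the abstract names, retrieval augmented generation. The idea is to fetch the passages most relevant to a query from a trusted corpus and instruct the model to answer only from that supplied context rather than from its parametric memory. The two-document corpus, the word-overlap relevance score and the prompt template are all simplified assumptions for this sketch; a production system would use an embedding-based vector store and a real LLM call.

import re

def tokens(text: str) -> set[str]:
    """Lower-case word tokens with punctuation stripped (toy tokeniser)."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, passage: str) -> float:
    """Fraction of query tokens found in the passage (toy relevance score)."""
    q = tokens(query)
    return len(q & tokens(passage)) / len(q) if q else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: the model is told to answer only from
    the retrieved context, which is the core hallucination mitigation."""
    context = "\n".join("- " + p for p in retrieve(query, corpus))
    return ("Answer using ONLY the context below. If the answer is not "
            "in the context, say you do not know.\n"
            "Context:\n" + context + "\n\nQuestion: " + query + "\nAnswer:")

# Hypothetical corpus; a production system would use an embedding model
# and vector store, and send the assembled prompt to an LLM API.
corpus = [
    "BNY Mellon was formed in 2007 by the merger of The Bank of New York "
    "and Mellon Financial Corporation.",
    "Prompt engineering refines model behaviour without retraining.",
]
print(build_prompt("When was BNY Mellon formed?", corpus))

The instruction to admit ignorance when the retrieved context lacks the answer is the key behavioural lever in this pattern: it trades answer coverage for factual reliability.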
Author's Biography
Ranjeeta Bhattacharya is a senior data scientist within the AI Hub wing of BNY Mellon, the world's largest custodian bank. As a machine learning practitioner, she works on intensely data-driven problems, thinking through complex use cases and supporting end-to-end AI/ML solutions from inception to deployment. Her experience as a data science and technology consultant spans more than 15 years, during which she has held multi-faceted techno-functional roles, including software developer, solution designer, technical analyst, delivery manager and project manager, for Fortune 500 IT consulting companies across the globe. Ranjeeta holds an undergraduate degree in computer science and engineering and a master's degree in data science, and she has multiple certifications and publications in these domains, demonstrating her commitment to continuous learning and knowledge sharing.
Citation
Bhattacharya, Ranjeeta (2024, June 1). Strategies to mitigate hallucinations in large language models. Applied Marketing Analytics: The Peer-Reviewed Journal, Volume 10, Issue 1. https://doi.org/10.69554/NXXB8234