ABOUT THE SPEAKER:
Ahmad Pesaranghader is an Applied AI Scientist at CIBC, where he focuses on LLM safety. He holds a Ph.D. in Computer Science from Dalhousie University with a background in Machine Learning and Big Data, and has worked across academia and industry with text, image, and biomedical data.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
Large Language Models (LLMs) and Large Reasoning Models (LRMs) hold transformative potential for high-stakes domains such as finance and law, but their tendency to hallucinate poses a critical reliability risk. This talk explores detection strategies, including uncertainty estimation, reasoning-consistency checks, and factual validation, paired with mitigation approaches such as knowledge grounding, prompt engineering, and confidence calibration. What sets this discussion apart is the emphasis on root-cause awareness: by categorizing hallucination sources into model-, data-, and context-related factors, detection and mitigation strategies can be precisely matched to the underlying cause rather than applied as generic fixes.
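One of the detection strategies named in the abstract, the reasoning-consistency check, can be illustrated by sampling the same prompt several times at nonzero temperature and measuring agreement among the answers. The sketch below is purely illustrative and not from the talk itself: the function name, the sample answers, and the 0.7 flagging threshold are all assumptions.

```python
from collections import Counter

def consistency_score(answers):
    """Return the majority answer and the fraction of samples that agree with it.

    Low agreement suggests the model is uncertain about the answer,
    which is one signal of a possible hallucination.
    """
    if not answers:
        raise ValueError("need at least one sampled answer")
    majority, count = Counter(answers).most_common(1)[0]
    return majority, count / len(answers)

# Hypothetical answers sampled from an LLM asked the same factual
# question five times with temperature > 0 (illustrative data only).
samples = ["Ottawa", "Ottawa", "Toronto", "Ottawa", "Ottawa"]

answer, score = consistency_score(samples)

# A threshold of 0.7 (an assumed value) flags low-agreement answers
# for factual validation or human review.
flagged = score < 0.7
```

In practice, exact string matching is a simplification; real systems typically compare answers with semantic similarity or entailment models before scoring agreement.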
WHAT YOU’LL LEARN:
Business Leaders: C-Level Executives, Project Managers, and Product Owners will explore best practices, methodologies, and principles for achieving ROI.
Engineers, Researchers, Data Practitioners: Will gain a better understanding of the challenges, solutions, and ideas presented in breakouts and workshops on Natural Language Processing, Neural Nets, Reinforcement Learning, Generative Adversarial Networks (GANs), Evolution Strategies, AutoML, and more.
Job Seekers: Will have the opportunity to network virtually and meet 30+ top AI companies.
What is an Ignite Talk?
Ignite is an innovative and fast-paced style used to deliver a concise presentation.
During an Ignite Talk, presenters discuss their research using 20 image-centric slides which automatically advance every 15 seconds.
The result is a fun and engaging five-minute presentation.