Reproducibility & Data Version Control
for LangChain & LLM/OpenAI Models

FREE Virtual Workshop
Nov. 29th, 1PM EST

Presenter: Amit Kesarwani
Director of Solution Engineering
lakeFS by Treeverse

Event Speakers

Event Conference Chair

Suhas Pai

CTO, Hudson Labs
Talk Title: Making RAG (Retrieval Augmented Generation) Work

Keynote

Gautam Kamath

Assistant Professor/Faculty Member and Canada CIFAR AI Chair, University of Waterloo
Talk Title: Privacy Risks and Protections in Machine Learning Systems

Advanced Technical/Research

Jullian Yapeter

Machine Learning Scientist, Signal 1
Talk Title: AI for Hospitals at Scale

Estefania Barreto

ML Engineer, Recursion Pharmaceuticals
Talk Title: Industrializing ML Workflows in Drug Discovery

Amir Hossein Karimi

Assistant Professor, University of Waterloo
Talk Title: Advances in Algorithmic Recourse: Ensuring Causal Consistency, Fairness, & Robustness

Marija Stanojevic

Lead Applied Machine Learning Scientist, EudAImonia Science
Talk Title: Machine Unlearning: Addressing Bias, Privacy, and Regulation in LLMs and Multimodal Models

John Jewell

Applied Machine Learning, Vector Institute
Talk Title: FL4Health: Private and Personal Clinical Modeling

Ehsan Amjadian

Head of AI Acceleration, RBC
Talk Title: Arcane, An Internal RAG System to Pinpoint Investment Policies

Jekaterina Novikova

Science Lead, AI Risk and Vulnerability Alliance
Talk Title: The Dual Nature of Consistency in Foundation Models: Challenges and Opportunities

Madhav Singhal

AI Engineer and Researcher, Replit
Talk Title: Transitioning from LLMs to Autonomous Agents in Programming and Software Engineering

Andrew Ling

VP, Compiler Software, Groq
Talk Title: Extending PyTorch for Custom Compiler Targets

Bowen Yang

Member of Technical Staff, Cohere
Talk Title: Unraveling Long Context: Existing Methods, Challenges, and Future Directions

En-Shiun Annie Lee

Assistant Professor, Ontario Tech University
Talk Title: ProxyLM: Predicting Language Model Performance on Multilingual Tasks via Proxy Models

Kosei Uemura

Undergraduate, Ontario Tech University
Talk Title: ProxyLM: Predicting Language Model Performance on Multilingual Tasks via Proxy Models

David Anugraha

Undergraduate, Ontario Tech University
Talk Title: ProxyLM: Predicting Language Model Performance on Multilingual Tasks via Proxy Models

Jeremy Bradbury

Professor, Ontario Tech University
Talk Title: ProxyLM: Predicting Language Model Performance on Multilingual Tasks via Proxy Models

Shagun Sodhani

Tech Lead, Meta
Talk Title: Torch.func: Functional Transforms in PyTorch

Ankit Pat

Lead Machine Learning Applied Scientist, Genesys
Talk Title: Exploring the Frontier of Graph Neural Networks: Key Concepts, Architectures, and Trends

Arash Taheri-Dezfouli

Compiler Engineer, Groq
Talk Title: Extending PyTorch for Custom Compiler Targets

Business Strategy

Angela Xu

Director, Risk Control and Fraud Analytics, CIBC
Talk Title: Revolutionizing Fraud Prevention: Harnessing AI and ML to Safeguard Banking from Fraud

Kemi Borisade

Senior Fraud Data Analyst, CIBC
Talk Title: Revolutionizing Fraud Prevention: Harnessing AI and ML to Safeguard Banking from Fraud

Emerson Taymor

SVP, Design, InfoBeans
Talk Title: GenAI: A New Renaissance in Product Development

Patrick Tammer

Senior Investment Director, Scale AI
Talk Title: Successfully Integrating AI in Your Strategy and Business Operations – Lessons Learnt from Investing

Jaime Tatis

VP-Chief Insights Architect, TELUS
Talk Title: How Is GenAI Reshaping the Business?

Sasha Luccioni

AI and Climate Leader, Hugging Face
Talk Title: Connecting the Dots Between AI Ethics and Sustainability

Monish Gandhi

Founder, Gradient Ascent Inc
Talk Title: Connecting the Dots Between AI Ethics and Sustainability

Deval Pandya

Vice President of AI Engineering, Vector Institute
Talk Title: Connecting the Dots Between AI Ethics and Sustainability

Mandy Wu

Senior Software Development Manager, Wealthsimple
Talk Title: GenAI for Productivity?

Arthur Vitui

Senior Data Scientist Specialist Solutions Architect, Red Hat Canada
Talk Title: Deploying LLMs on Kubernetes Environments

Reem Al-Halimi

AI Enterprise Architect, Navblue, An Airbus Company
Talk Title: Unlocking the Potential of Data in the Aviation Industry

Patricia Arocena

Senior Director and Head, Generative AI Innovation Labs, RBC
Talk Title: Generative AI for Financial Services

John Bolton

Director of Engineering, Generative AI Innovation Labs, RBC
Talk Title: Generative AI for Financial Services

Margo Wu

Lead Investor, Georgian
Talk Title: GenAI Investing in 2024

Case Study

Nassim Tayari

Watsonx Canada Leader, IBM Canada
Talk Title: AI Governance: Accelerate Responsible, Transparent, and Explainable AI Workflows

Iddo Avneri

VP Customer Success, lakeFS
Workshop: Building Reproducible ML Processes with an Open Source Stack

Patrick Halina

Machine Learning Scientist, Pinterest
Talk Title: Web Extraction With LLMs

Yannick Lallement

Chief AI Officer, Scotiabank
Talk Title: Gen AI in Banking: Lessons Learned

Gayathri Srinivasan

Senior AI/ML Product Manager, Wattpad
Talk Title: Optimizing Recommendations on Wattpad Home

Abhimanyu Anand

Data Scientist, Wattpad
Talk Title: Optimizing Recommendations on Wattpad Home

Michael Havey

Senior Solutions Architect, Amazon Web Services
Talk Title: Ask the Graph: How Knowledge Graphs Help Generative AI Models Answer Questions

Bhuvana Adur Kannan

Lead - Agent Performance & ML Platform, Voiceflow
Talk Title: Building and Evaluating Prompts on Production Grade Datasets

Yoyo Yang

Machine Learning Engineer, Voiceflow
Talk Title: Building and Evaluating Prompts on Production Grade Datasets

Winston Li

Founder, Arima
Talk Title: Dynamic Huff's Gravity Model with Covariates for Site Visitation Prediction

Debadyuti Roy Chowdhury

VP Product, InfinyOn
Talk Title: Why Real-Time Event Streaming Pattern is Indispensable for an AI Native Future

Josh Peters

Data Science Manager, Wealthsimple
Talk Title: LLMs for Revolutionizing Credit Risk Assessment

Kiarash Shamsi

ML Researcher, Wealthsimple
Talk Title: LLMs for Revolutionizing Credit Risk Assessment

Irena Grabovitch-Zuyev

Staff Applied Scientist, PagerDuty
Talk Title: Rapid Deployment of LLMs into Production: Strategies and Insights

Suchita Venugopal

Senior Machine Learning Engineer, PagerDuty
Talk Title: Rapid Deployment of LLMs into Production: Strategies and Insights

Susan Chang

Principal Data Scientist, Elasticsearch
Talk Title: Growing your ML Career via Technical Writing and Speaking: Tips and Lessons

Kathryn Hume

Vice President, Digital Channels Technology, RBC
Talk Title: Upskilling Your Full-Stack Development Team in Machine Learning

Nijan Giree

Director Mobile Development, Digital, RBC
Talk Title: Upskilling Your Full-Stack Development Team in Machine Learning

Arup Saha

Director, Android Development, RBC
Talk Title: Upskilling Your Full-Stack Development Team in Machine Learning

Alex Lau

Senior Director, Android and Mobile Services Development, RBC
Talk Title: Upskilling Your Full-Stack Development Team in Machine Learning

Rajat Arya

Co-Founder, XetHub
Talk Title: AI As An Engineering Discipline

Christian Calderon

MLOps & Deployment Engineer, Zapata AI
Talk Title: AI-ready Data Infrastructure for Real-time Sensor Data Analytics on the Edge

Tina Shen

Machine Learning Engineer, Loblaw Digital
Talk Title: Optimizing Personalized User Experience: In-session Recommendations Across E-commerce Verticals

Charles Zhu

Machine Learning Engineer, Loblaw Digital
Talk Title: Optimizing Personalized User Experience: In-session Recommendations Across E-commerce Verticals

Rob Levy

Staff Engineer, Lightning AI
Talk Title: Deploying and Evaluating RAG pipelines with Lightning Studios

Panel Discussion

Everaldo Aguiar

Senior Engineering Manager, PagerDuty
Talk Title: RAGs in Production: Delivering Impact Safely and Efficiently

Wendy Foster

Data Products Leader, Shopify
Talk Title: RAGs in Production: Delivering Impact Safely and Efficiently

Margaret Wu

Senior Data Scientist, Advanced Analytics and AI, CIBC
Talk Title: RAGs in Production: Delivering Impact Safely and Efficiently

Christopher Parisien

Senior Manager, Applied Research, NVIDIA
Talk Title: RAGs in Production: Delivering Impact Safely and Efficiently

In-Person Workshops

Ian Yu

Machine Learning Engineer, Groupby Inc
Workshop: The Gap From Prototype to Production: Lessons Learned from Implementing Applications with LLMs

Shashank Shekhar

Co-Founder, Dice Health
Workshop: A Practitioner's Guide To Safeguarding Your LLM Applications

Prashanth Rao

AI Engineer, Kùzu, Inc.
Workshop: Kùzu - A Fast, Scalable Graph Database for Analytical Workloads

Royal Sequeira

Machine Learning Engineer, Georgian
Workshop: Optimizing Large Language Model Selection for Efficient GenAI Development

Aslesha Pokhrel

Machine Learning Engineer, Georgian
Workshop: Optimizing Large Language Model Selection for Efficient GenAI Development

Christopher Tee

Software Engineer, Georgian
Workshop: Optimizing Large Language Model Selection for Efficient GenAI Development

Myles Harrison

Consultant & Trainer, NLP from Scratch
Workshop: Getting started with Generative Text and Fine-tuning LLMs in Hugging Face

Greg Loughnane

Co-Founder, AI Makerspace
Workshop: Building an Open-Source Agentic RAG Application with Llama 3

Chris Alexiuk

Co-Founder & CTO, AI Makerspace
Workshop: Building an Open-Source Agentic RAG Application with Llama 3

Hamza Farooq

CEO & Founder, Traversaal.ai
Workshop: LLMs for Leaders & Senior Product Managers

Yizhi Yin, PhD

Senior Solutions Engineer, Neo4j
Workshop: Enabling GenAI Breakthroughs with Knowledge Graphs

Virtual Workshops & Talks

Rohit Saha

Machine Learning Scientist, Georgian
Workshop: Leveraging Large Language Models to Build Enterprise AI

Kyryl Truskovskyi

Founder, ML Engineer, Kyryl Opens ML
Workshop: Leveraging Large Language Models to Build Enterprise AI

Benjamin Ye

Machine Learning Scientist, Georgian
Workshop: Leveraging Large Language Models to Build Enterprise AI

Angeline Yasodhara

Machine Learning Engineer, Georgian
Workshop: Leveraging Large Language Models to Build Enterprise AI

Mahdi Torabi Rad

President, MLBoost
Workshop: Uncertainty Quantification with Conformal Prediction: A Path to Reliable ML Models

Amit Kesarwani

Director, Solution Engineering, lakeFS
Workshop: From Chaos to Control: Mastering ML Reproducibility at Scale

Ville Tuulos

Co-Founder, Outerbounds
Workshop: Building a Production-Grade Document Understanding System with LLMs

Eddie Mattia

Data Scientist, Outerbounds
Workshop: Building a Production-Grade Document Understanding System with LLMs

Aniket Maurya

Developer Advocate, Lightning AI
Workshop: AI Agents with Function Calling/Tool Use

Krishnachaitanya Gogineni

Principal ML Engineer, Observe.AI
Talk Title: Generative AI Design Patterns

Liz Lozinsky

Engineering Manager, Gen AI Platform Team, TELUS
Talk Title: Fuel iX: An Enterprise-Grade Gen AI Platform

Sara Ghaemi

Senior Software Developer, Gen AI Platform Team, TELUS
Talk Title: Fuel iX: An Enterprise-Grade Gen AI Platform

Patrick Marlow

Staff Engineer, Vertex Applied AI Incubator, Google
Talk Title: Agentic AI: Unlocking Emergent Behavior in LLMs for Adaptive Workflow Automation

Narcisse Torshizi

Data Scientist/Data Science Manager, Scotiabank
Talk Title: AI for AI: Scotiabank's Award-Winning ML Models

Andres Villegas

Data Scientist Manager, Scotiabank
Talk Title: AI for AI: Scotiabank's Award-Winning ML Models

Zain Hasan

Senior ML Developer Advocate, Weaviate
Talk Title: Scaling Vector Database Usage Without Breaking the Bank: Quantization and Adaptive Retrieval

Adam Kerr

Senior Machine Learning Engineer, Bell Canada
Talk Title: Modular Solutions for Knowledge Management at Scale in RAG Systems

Lyndon Quadros

Senior Manager, Artificial Intelligence, Bell Canada
Talk Title: Modular Solutions for Knowledge Management at Scale in RAG Systems

Meryem Arik

CEO, TitanML
Talk Title: Navigating LLM Deployment: Tips, Tricks and Techniques

Lightning Talks

Vik Pant

Partner and Chief Data Scientist, PwC Canada
Talk Title: From Concept to Value: Framework for Designing Generative Applications for the Enterprise

Alex Cui

CTO & Co-Founder, GPTZero
Talk Title: Detecting AI-generated Content and Verifying Human Content with GPTZero

Avin Regmi

Engineering Manager ML, Spotify
Talk Title: Compute Strategies for Generative AI

More speakers to be announced

Agenda

This agenda is still subject to change

Talk Title: Making RAG (Retrieval Augmented Generation) Work

Presenter:
Suhas Pai, CTO, Hudson Labs

About the Speaker:
Suhas Pai is an NLP researcher and co-founder/CTO of Hudson Labs, a Toronto-based startup. At Hudson Labs, he works on text ranking, representation learning, and productionizing LLMs. He is also currently writing a book, Designing Large Language Model Applications, with O’Reilly Media. Suhas has been active in the ML community as Chair of the TMLS (Toronto Machine Learning Summit) conference since 2021 and as NLP lead at Aggregate Intellect (AISC). He was also co-lead of the Privacy working group at BigScience, as part of the BLOOM open-source LLM project.

Talk Track: Applied Case Studies

Talk Technical Level: 6/7

Talk Abstract:
The RAG (Retrieval Augmented Generation) paradigm drives a large proportion of LLM-based applications. However, getting RAG to work beyond prototypes is a challenging ordeal. In this talk, we will go through some of the common pitfalls encountered when implementing RAG along with techniques to alleviate them. We will showcase how robustness can be built into the design of the RAG pipeline and how to balance them against factors like latency and cost.

What You’ll Learn:
What can go wrong with RAG?

Techniques to alleviate RAG shortcomings – specifically, tightly coupled models, layout and context-aware fine-tuned embeddings, retrieval text refinement, query expansion, and interleaved retrieval.
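
As a taste of one of the techniques above, here is a minimal sketch of query expansion in a retrieval step. It is an illustration only, not Hudson Labs’ implementation; the model choice, prompt, and the retriever interface are assumptions.

    # Minimal sketch of query expansion for RAG retrieval (illustrative only).
    # Assumes an OpenAI-style client and a stand-in `retriever` exposing a
    # .search(query, k) method returning documents with an .id attribute.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def expand_query(query: str, n_variants: int = 3) -> list[str]:
        """Ask an LLM for paraphrased variants of the user query."""
        prompt = (
            f"Rewrite the following search query {n_variants} different ways, "
            f"one per line, preserving its intent:\n{query}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        variants = response.choices[0].message.content.strip().splitlines()
        return [query] + [v.strip() for v in variants if v.strip()]

    def retrieve_with_expansion(query: str, retriever, k: int = 5) -> list:
        """Retrieve for each variant and merge results, deduplicating by id."""
        seen, merged = set(), []
        for variant in expand_query(query):
            for doc in retriever.search(variant, k=k):
                if doc.id not in seen:
                    seen.add(doc.id)
                    merged.append(doc)
        return merged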

Talk Title: Privacy Risks and Protections in Machine Learning Systems

Presenter:
Gautam Kamath, Assistant Professor/Faculty Member and Canada CIFAR AI Chair, University of Waterloo

About the Speaker:
Gautam Kamath is an Assistant Professor at the University of Waterloo, and a Faculty Member and Canada CIFAR AI Chair at the Vector Institute for Artificial Intelligence. His research interests are in trustworthy algorithms, statistics, and machine learning, particularly focusing on considerations like data privacy and robustness. He has a B.S. from Cornell University and a Ph.D. from MIT. He is the recipient of the 2023 Golden Jubilee Research Excellence Award. His online course on differential privacy is the most popular resource for learning the topic, with his lecture videos having over 100,000 views. He is an Editor-in-Chief of Transactions on Machine Learning Research, and on the Executive Committee of the Learning Theory Alliance.

Talk Track: Keynote

Talk Technical Level: 4/7

Talk Abstract:
Machine learning models are prone to leaking information about their training data. This can be problematic when the training data is privacy-sensitive information belonging to people. I will highlight several privacy risks of modern machine learning systems and discuss rigorous ways to protect against these vulnerabilities, preserving user privacy and maintaining their trust in the system.

What You’ll Learn:
Privacy risks in modern machine learning systems are increasingly significant. At the same time, there are rigorous protections that allow us to guard against these privacy risks being realized.
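
For readers new to the area, the canonical rigorous protection is differential privacy. Below is a minimal sketch of its textbook building block, the Laplace mechanism, applied to a counting query; this is standard material, not code from the talk.

    # Minimal sketch of the Laplace mechanism, the textbook building block
    # of differential privacy (standard material, not code from the talk).
    import numpy as np

    def laplace_count(data, predicate, epsilon: float) -> float:
        """Release an epsilon-differentially-private count.

        A counting query has sensitivity 1 (adding or removing one person
        changes the count by at most 1), so Laplace noise with scale
        1/epsilon suffices for epsilon-DP.
        """
        true_count = sum(1 for record in data if predicate(record))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Toy usage: private count of values over 100 at epsilon = 0.5.
    data = [87, 120, 45, 150, 99, 101]
    print(laplace_count(data, lambda x: x > 100, epsilon=0.5))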

Talk Title: AI for Hospitals at Scale

Presenter:
Jullian Yapeter, Machine Learning Scientist, Signal 1

About the Speaker:
Jullian is a Machine Learning Scientist at Signal 1. His focus is at the intersection of model development and ML infrastructure / MLOps. He is an engineer with a BASc. in Mechatronics Engineering from the University of Waterloo, and a M.S. in Computer Science from the University of Southern California. He was a research assistant at the CLVR Lab at USC, working on large-scale Offline RL under Prof. Joseph Lim. Jullian has industry experience working on AI / Computer Vision systems at Disney Imagineering, IBM, and a few different start-ups. Overall, he’s passionate about improving people’s lives through technology.

Talk Track: Research or Advanced Technical

Talk Technical Level: 5/7

Talk Abstract:
An exploration of the technical processes employed at Signal 1 that enable the training and deployment of machine learning models across various hospital settings, including zero-shot learning applications in patient deterioration prediction that generalize even to unseen hospitals.

This talk will also cover the specifics of our microservice architecture which underpins our system’s capability to consistently deliver timely and effective inference results, enabling scalable, data-driven decisions in patient care.

Attendees will gain insights into the practical challenges and solutions encountered in developing AI applications that can seamlessly integrate into and impact real-world clinical settings.

Whether you’re interested in the nuances of model development, deployment, or the practical implications of AI in healthcare, this session will offer valuable technical knowledge and perspectives.

We invite you to join this technical discourse at the intersection of AI and healthcare, contributing to a dialogue that’s shaping the future of AI applications in medical settings.

What You’ll Learn:
– An overview of the practical challenges in deploying ML in hospitals, such as generalization and scalability
– How we at Signal 1 tackle some of these challenges
– Discussions about some of the problems we’re still working on

Talk Title: Industrializing ML Workflows in Drug Discovery

Presenter:
Estefania Barreto, ML Engineer, Recursion Pharmaceuticals

About the Speaker:
Estefania Barreto-Ojeda is an ML Engineer at Recursion, where she builds and automates machine learning pipelines for drug discovery. A physicist by training, she has a PhD in Biophysical Chemistry from the University of Calgary, where she participated in Google Summer of Code as an open-source software developer. She has given talks at several major data conferences, including PyData. Estefania is a full-time automation fan, an occasional open-source contributor, and a seasonal bicycle lover.

Talk Track: Research or Advanced Technical

Talk Technical Level: 5/7

Talk Abstract:
Recursion is committed to industrializing drug discovery by addressing the complexities of Machine Learning (ML) workflows head-on. A critical step in the drug discovery process is predicting compounds’ properties such as Absorption, Distribution, Metabolism, and Excretion (ADME), Potency, and Toxicity, among others, which allows the evaluation of a drug candidate for safety and efficacy, crucial for regulatory approval. In order to leverage its large volume of diverse and regularly updated chemical assay datasets, Recursion has engineered standardized and automated solutions to train and deploy predictive models on a weekly basis, thus accelerating the drug discovery process in early stages. In this talk, we will offer a comprehensive overview of our industrialized workflows to develop and deploy ML compound property predictors. Insights into Recursion’s strategy for data management, model training, and deployment using both cloud and supercomputing resources will be shared.

What You’ll Learn:
During this presentation, attendees will gain an understanding of our structured approach to creating and implementing machine learning models to predict compound properties in an industrial setting. We will explore Recursion’s approach to managing data, training models, and deploying them using a combination of cloud services and supercomputing resources.

Talk Title: Advances in Algorithmic Recourse: Ensuring Causal Consistency, Fairness, & Robustness

Presenter:
Amir Hossein Karimi, Assistant Professor, University of Waterloo

About the Speaker:
Dr. Amir-Hossein Karimi is an Assistant Professor in the Electrical & Computer Engineering department at the University of Waterloo where he leads the Collaborative Human-AI Reasoning Machines (CHARM) Lab. The lab’s mission is to advance the state of the art in artificial intelligence and chart the path for trustworthy human-AI symbiosis. In particular, the group is interested in the development of systems that can recover from or amend poor experiences caused by AI decisions, assay the safety, factuality, and ethics of AI systems to foster trust in AI, and effectively combine human and machine abilities in various domains such as healthcare and education. As such, the lab’s research explores the intriguing intersection of causal inference, explainable AI, and program synthesis, among others.

Amir-Hossein’s research contributions have been showcased at esteemed AI and ML-related platforms like NeurIPS, ICML, AAAI, AISTATS, ACM-FAccT, and ACM-AIES, via spotlight and oral presentations, as well as through a book chapter and a highly regarded survey paper in the ACM Computing Surveys. Before joining the University of Waterloo, Amir-Hossein gained extensive industry experience at Meta, Google Brain, and DeepMind and offered AI consulting services worth over $250,000 to numerous startups and incubators. His academic and non-academic endeavours have been honoured with awards like the Spirit of Engineering Science Award (UofToronto, 2015), the Alumni Gold Medal Award (UWaterloo, 2018), the NSERC Canada Graduate Scholarship (2018), the Google PhD Fellowship (2021), and the ETH Zurich Medal (2024).

Talk Track: Research or Advanced Technical

Talk Technical Level: 5/7

Talk Abstract:
Explore the intersection of causal inference and explainable AI applied for fair and robust algorithmic recourse in AI applications across healthcare, insurance, and banking. This session highlights the role of causal consistency in correcting biases and ensuring transparent model decisions.

What You’ll Learn:
– Foundations of Causal Inference: Understand the basics and importance of causal reasoning in AI.
– Integrating Causality in AI Systems: Practical approaches for embedding causal methods to improve fairness and accountability.
– Case Studies: Insights from healthcare, insurance, and banking on implementing causal tools for better decision-making.
– Future Trends: Emerging technologies and methodologies in algorithmic recourse that are setting the stage for more reliable AI systems.

Talk Title: Machine Unlearning: Addressing Bias, Privacy, and Regulation in LLMs and Multimodal Models

Presenter:
Marija Stanojevic, Lead Applied Machine Learning Scientist, EudAImonia Science

About the Speaker:
Marija Stanojevic, Ph.D., is a Lead Applied Machine Learning Scientist at EudAImonia Science and Ellipsis Health. She focuses on representation learning and multimodal, multilingual, and transfer learning for healthcare. She was a virtual chair of ICLR 2021 and ICML 2021, general chair of the Machine Learning for Cognitive and Mental Health workshop at AAAI 2024, and main organizer of the 9th Mid-Atlantic Student Colloquium on Speech, Language, and Learning (MASC-SLL 2022). She has worked at Meta, Cambridge Cognition, Winterlight Labs, and LinkedIn.

Talk Track: Research or Advanced Technical

Talk Technical Level: 4/7

Talk Abstract:
This talk discusses machine unlearning for large language models (LLMs) and multimodal models (MMs) handling sensitive data. As these AI models gain traction, ensuring adaptable and ethical practices is paramount, especially in domains handling healthcare, finance, and personal information. Here, we explore the intricacies of machine unlearning dynamics and their impact on bias mitigation, data privacy, legal compliance, and model robustness.

The talk sheds light on recent advancements and seminal research in machine unlearning. Given the growing prevalence of AI regulations and concerns around test data leaks during massive training, machine unlearning emerges as an essential component for ensuring unbiased, compliant, and well-evaluated AI systems. We discuss techniques for identifying unwanted data within models and for removing it while preserving model performance. Additionally, the talk explores methods for evaluating the success of machine unlearning, guaranteeing that the model forgets the targeted data without compromising its overall behavior and performance on other data.

Machine unlearning empowers stakeholders, including customers and data owners, with the ability to withdraw their data and fosters trust in the responsible development and deployment of LLMs and MM models.

What You’ll Learn:
– The importance of machine unlearning in responsible AI: You’ll be able to explain why machine unlearning is crucial for ensuring ethical and adaptable AI practices, particularly for models handling sensitive data.
– The impact of machine unlearning on key aspects of AI development: The talk will investigate how machine unlearning can mitigate bias, enhance data privacy, ensure legal compliance, and improve model robustness.
– Recent advancements and research in machine unlearning: You’ll understand the latest developments and significant research findings in the field of machine unlearning.
– Techniques for identifying and removing data from models: The talk will explore practical methods for determining if specific data resides within a model and how to remove it while maintaining the model’s performance.
– Evaluating the success of machine unlearning: You’ll learn techniques to assess whether the machine unlearning process has been successful, ensuring the model forgets the targeted data without impacting its overall functionality.
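
As a concrete illustration of the removal techniques listed above, here is a minimal sketch of one widely cited approximate-unlearning baseline: gradient ascent on the "forget" set, interleaved with ordinary descent on retained data to preserve overall performance. This is a generic baseline under stated assumptions, not the talk’s specific method.

    # Minimal sketch of a common approximate-unlearning baseline: gradient
    # ascent on the "forget" set plus descent on retained data to preserve
    # performance elsewhere. Generic illustration, not the talk's method.
    import torch

    def unlearn_step(model, forget_batch, retain_batch, loss_fn, lr=1e-5):
        opt = torch.optim.SGD(model.parameters(), lr=lr)

        # Ascend the loss on data to be forgotten (note the negated loss).
        x_f, y_f = forget_batch
        opt.zero_grad()
        (-loss_fn(model(x_f), y_f)).backward()
        opt.step()

        # Descend on retained data to keep the model useful elsewhere.
        x_r, y_r = retain_batch
        opt.zero_grad()
        loss_fn(model(x_r), y_r).backward()
        opt.step()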

Talk Title: FL4Health: Private and Personal Clinical Modeling

Presenter:
John Jewell, Applied Machine Learning, Vector Institute

About the Speaker:
John Jewell is an Applied Machine Learning Specialist at Vector Institute where he is currently focused on building FL4Health – a Python Package to jointly train machine learning models on distributed datasets in the healthcare domain. Prior to joining Vector Institute, John received his Master’s in Computer Science from Western University under the supervision of Vector Institute Faculty Member Yalda Mohsenzadeh. During this time, he was fortunate enough to make strong contributions to the Anomaly Detection literature, an area he is very much still interested in.

Talk Track: Research or Advanced Technical

Talk Technical Level: 6/7

Talk Abstract:
It is well-established that the robustness and generalizability of machine-learning models typically grow with access to larger quantities of representative training data. However, in the healthcare domain and other industries with highly sensitive data, the vast majority of data exists in silos across different institutions. Centralizing the data is often discouraged, if not impossible, due to strong regulations governing data sharing. This is a fundamental barrier to the development of performant machine learning models in healthcare and other domains. Fortunately, federated learning (FL) provides an avenue for training models in distributed data settings without requiring training data transfer. In this talk, we’ll provide an overview of FL and its application in healthcare. This will include a discussion of common challenges arising in distributed data settings, such as data drift and heterogeneity, along with modern approaches aimed at addressing these issues. We’ll introduce the FL4Health library developed at the Vector Institute, which can be leveraged to easily train models on distributed clinical datasets. Finally, we’ll consider some noteworthy experimental results, obtained using the library, demonstrating the utility of FL in training high-performing models in challenging clinical settings.

What You’ll Learn:
Attendees will have the opportunity to learn about FL and how it is used to train performant models on distributed datasets, with a specific focus on clinical tasks. In doing so, attendees will become familiar with common challenges that arise in FL, state-of-the-art techniques to address those challenges and helpful tools to get started.
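
For readers who have not seen FL before, the canonical algorithm is federated averaging (FedAvg): each site trains locally, and only model parameters, never raw records, are shared and averaged. A minimal generic sketch follows; it is not the FL4Health API.

    # Minimal sketch of federated averaging (FedAvg). Each client trains a
    # local copy; only weights are aggregated. Generic illustration, not
    # FL4Health. Assumes a model whose state is all floating-point tensors.
    import copy
    import torch

    def fedavg_round(global_model, client_loaders, local_steps, loss_fn, lr=1e-3):
        client_states = []
        for loader in client_loaders:            # one loader per hospital/site
            local = copy.deepcopy(global_model)  # raw data never leaves the site
            opt = torch.optim.SGD(local.parameters(), lr=lr)
            for _, (x, y) in zip(range(local_steps), loader):
                opt.zero_grad()
                loss_fn(local(x), y).backward()
                opt.step()
            client_states.append(local.state_dict())

        # Average client weights into the new global model.
        avg = {
            key: torch.stack([s[key] for s in client_states]).mean(dim=0)
            for key in client_states[0]
        }
        global_model.load_state_dict(avg)
        return global_model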

Talk Title: Arcane, An Internal Rag System to Pinpoint Investment Policies

Presenter:
Ehsan Amjadian, Head of AI Acceleration, RBC

About the Speaker:
Dr. Ehsan Amjadian earned his Ph.D. in Deep Learning & Natural Language Processing from Carleton University, Canada. He is currently the Head of AI and Data at SA&I, where he is leading the development of privacy-respecting AI products for customers’ digital journeys, with an emphasis on digital payment.
He has over 15 years of experience in the field of AI. At RBC, he has led numerous advanced AI products from ideation to production and has filed multiple patents in the areas of Data Protection, Finance & Climate, and Computer Vision applications to Satellite Images. Outside of RBC, he has also led various open-source artificial intelligence initiatives as well as a multitude of research teams. He is an adjunct professor of computer science at the University of Waterloo and is published in a variety of additional Artificial Intelligence and Computer Science fields including Cybersecurity, Recommender Engines, Information Extraction, and Computer Vision.

Talk Track: Research or Advanced Technical

Talk Technical Level: 6/7

Talk Abstract:
In this session we’ll walk the audience through the building blocks of Arcane, a Retrieval-Augmented Generation system to point our specialists to the most relevant policies scattered across an internal web platform in a matter of seconds. It has the potential to boost productivity by orders of magnitude. We will discuss the greatest challenges in building this technology, some of the resulting best practices, as well as the lessons learnt during the endeavor.

What You’ll Learn:
– Methods in building RAG systems in financial institutions
– Top challenges in building such RAG systems
– Lessons learnt and best practices
– Dos and don’ts
– Security, Privacy, and Safety considerations

Talk Title: The Dual Nature of Consistency in Foundation Models: Challenges and Opportunities

Presenter:
Jekaterina Novikova, Science Lead, AI Risk and Vulnerability Alliance

About the Speaker:
Jekaterina Novikova is a Science Lead at the AI Risk and Vulnerability Alliance, where she leads research efforts towards developing responsible and trustworthy AI systems. She has 10+ years of experience working in both industry research and academia, specializing in NLP evaluation and machine learning for health, with a focus on user-centred approaches. Jekaterina has a PhD in Computer Science from the University of Bath, UK, and a strong track record of publications in top-level conferences.

Talk Track: Research or Advanced Technical

Talk Technical Level: 6/7

Talk Abstract:
Consistency is an important property of any trustworthy model. In this talk, I will discuss consistency in LLMs and foundation models: how to measure it, how to mitigate the negative consequences of inconsistency, and how to turn it to our advantage.

What You’ll Learn:
It is still difficult for LLMs and foundation models to generate consistent outputs. This problem can have important negative consequences and needs to be properly addressed. However, observed inconsistencies can also be turned to our advantage, and I will present several examples of this.
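
As one hypothetical illustration of measuring consistency, a simple approach is to sample answers for paraphrases of the same question and score their pairwise semantic agreement; the embedding model below is an illustrative choice, not the speaker’s methodology.

    # Minimal sketch of a consistency measure: mean pairwise cosine
    # similarity of answer embeddings (1.0 = fully consistent). The
    # embedding model is an illustrative choice.
    from itertools import combinations
    from sentence_transformers import SentenceTransformer, util

    def consistency_score(answers: list[str]) -> float:
        """Score agreement among >= 2 answers to paraphrases of one question."""
        model = SentenceTransformer("all-MiniLM-L6-v2")
        emb = model.encode(answers, convert_to_tensor=True)
        sims = [util.cos_sim(emb[i], emb[j]).item()
                for i, j in combinations(range(len(answers)), 2)]
        return sum(sims) / len(sims)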

Talk Title: Transitioning from LLMs to Autonomous Agents in Programming and Software Engineering

Presenter:
Madhav Singhal, AI Engineer and Researcher, Replit

About the Speaker:
Madhav has been a founding member of the AI team at Replit, researching LLMs for code and designing AI agents and systems for Replit’s 20M+ users.

Talk Track: Research or Advanced Technical

Talk Technical Level: 5/7

Talk Abstract:
A technical talk discussing the evolution of LLMs into Agents as applied to programming and software engineering.

What You’ll Learn:
How LLMs for programming and software development have evolved into Agent systems and how they are built.

Will cover:
– The evolution of technical approaches from completion to FIM to tools to action models to end-to-end in-context learning driven agents
– Technical details on data, post-training and in-context learning for turning LLMs into agents for software engineering. (will cover https://blog.replit.com/code-repair as a case study)
– Evaluation of programming and software engineering Agents and use cases
– Insights and learnings from training, post-training, and building Agents in production at Replit for 20M+ users.

Talk Title: Extending PyTorch for Custom Compiler Targets

Presenter:
Andrew Ling, VP, Compiler Software, Groq

About the Speaker:
Andrew Ling received his PhD from the University of Toronto. He has spent over a decade building compilers in the semiconductor industry for large companies such as Intel and Qualcomm, and more recently has been leading Groq’s compiler effort for its new deep learning accelerator.

Talk Track: Research or Advanced Technical

Talk Technical Level: 5/7

Talk Abstract:
Groq has delivered the world’s first LPUs, focused on LLM inference and deep learning acceleration. However, to support inference-specific accelerators, metadata on the PyTorch graph, such as custom or unsupported data types, is often required to improve the performance of the model. This metadata is non-trivial to obtain through PyTorch’s graph export systems (e.g., TorchScript + ONNX, torch.compile, torch.export). In order to maximize inference efficiency for custom hardware targets, we present a generalizable technique that allows users to annotate PyTorch code at different granularities with arbitrary information. PyTorch model annotations can be a simple and powerful means to adjust the mapping of the workload to accelerators, yet maintain the semantics of the PyTorch inference graph. Our technique allows easy injection of information into a PyTorch graph at the Python level and easy recovery of the information and semantics during downstream ingestion. We demonstrate how to use this technique to modify existing PyTorch models in place to enable custom data types and persist precision information through PyTorch into our compiler.

What You’ll Learn:
A generalizable technique that allows users to annotate PyTorch code at different granularities with arbitrary information and modify existing PyTorch models in place to enable custom data types and persist precision information through PyTorch into the Groq compiler.
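
The abstract does not publish Groq’s mechanism, but the general idea of attaching recoverable metadata to a PyTorch graph can be sketched with plain torch.fx, where every node carries a free-form meta dictionary; the "precision_hint" key below is hypothetical.

    # Sketch of attaching recoverable metadata to a traced PyTorch graph
    # via torch.fx node.meta. The "precision_hint" key is hypothetical,
    # and this is not Groq's actual mechanism.
    import torch
    import torch.fx as fx

    class Tiny(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.lin = torch.nn.Linear(16, 16)

        def forward(self, x):
            return torch.relu(self.lin(x))

    gm = fx.symbolic_trace(Tiny())

    # Annotate: mark the linear call with a custom precision hint.
    for node in gm.graph.nodes:
        if node.op == "call_module" and node.target == "lin":
            node.meta["precision_hint"] = "fp8"

    # Downstream ingestion: a compiler front end reads the hint back.
    for node in gm.graph.nodes:
        if "precision_hint" in node.meta:
            print(node.target, "->", node.meta["precision_hint"])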

Talk Title: Unraveling Long Context: Existing Methods, Challenges, and Future Directions

Presenter:
Bowen Yang, Member of Technical Staff, Cohere

About the Speaker:
Bowen works on LLMs, pretraining, long context, and deep learning engineering.

Talk Track: Research or Advanced Technical

Talk Technical Level: 6/7

Talk Abstract:
We will discuss how to scale transformer models to longer contexts, some of the challenges we face from the modelling and framework perspectives, and some directions we can explore.

What You’ll Learn:
This panel talk will discuss methods and analysis on long context models and long context extrapolation. We will explore a range of approaches, from data and modelling to framework level, and delve into the current challenges and solutions in the field. We will cover topics including training, evaluation and inference.

Talk Title: ProxyLM: Predicting Language Model Performance on Multilingual Tasks via Proxy Models

Presenter:
En-Shiun Annie Lee, Assistant Professor, Ontario Tech University | Kosei Uemura, Undergraduate, Ontario Tech University | David Anugraha, Undergraduate, Ontario Tech University | Jeremy Bradbury, Professor, Ontario Tech University

About the Speaker:
Annie En-Shiun Lee is an assistant professor at Ontario Tech University and the University of Toronto (status-only). Her goal is to make language technology inclusive and accessible to as many people as possible. She runs the Lee Language Lab (L^3), with research focusing on language diversity and multilinguality. Professor Lee’s research has been published in Nature Digital Medicine, ACM Computing Surveys, ACL, SIGCSE, IEEE TKDE, and Bioinformatics. She serves as the demo co-chair for NAACL and has extensive experience transferring technology to industry. Previously she was an assistant professor (teaching stream) at the University of Toronto. She received her PhD from the University of Waterloo, was a visiting researcher at the Fields Institute and the Chinese University of Hong Kong, and worked as a research scientist in industry at VerticalScope and Stradigi AI.

David Anugraha is an undergraduate researcher at the University of Toronto, where he has focused on developing efficient methods for low-resource languages and multilinguality under the guidance of Assistant Professor En-Shiun Annie Lee. He has also assisted Assistant Professor Maryam Dehnavi in investigating methods for large language model compression.

Kosei Uemura is an NLP student researcher at the University of Toronto, specializing in low-resource languages and cross-lingual transfer learning. He has developed state-of-the-art 7B-parameter models for African languages. With experience training LLMs from scratch at UTokyo’s Matsuo-Iwasawa Lab and fine-tuning large language models at Spiral.AI, specializing in personality injection, Kosei is dedicated to advancing AI capabilities.

Talk Track: Research or Advanced Technical

Talk Technical Level: 3/7

Talk Abstract:
Performance prediction is a method to estimate the performance of Language Models (LMs) on various Natural Language Processing (NLP) tasks, mitigating computational costs associated with model capacity and data for fine-tuning. Our paper introduces ProxyLM, a scalable framework for predicting LM performance using proxy models in multilingual tasks. These proxy models act as surrogates, approximating the performance of the LM of interest. By leveraging proxy models, ProxyLM significantly reduces computational overhead on task evaluations, achieving up to a 37.08× speedup compared to traditional methods, even with our smallest proxy models. Additionally, our methodology showcases adaptability to previously unseen languages in pre-trained LMs, outperforming the state-of-the-art performance by 1.89× as measured by root-mean-square error (RMSE). This framework streamlines model selection, enabling efficient deployment and iterative LM enhancements without extensive computational resources.

What You’ll Learn:
It may be worth exploring the use of smaller, cheaper-to-fine-tune language models to gauge the performance of bigger, more expensive language models.
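
In spirit, the proxy approach reduces to fitting a cheap regressor from proxy-model scores (plus task descriptors) to the expensive model’s score. The sketch below uses hypothetical feature names and toy numbers, not the paper’s exact setup.

    # Minimal sketch of the proxy-model idea: regress a large model's score
    # from cheap features. Feature names and numbers are toy/hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Each row: [proxy_model_score, log_dataset_size, language_similarity]
    X_train = np.array([[0.41, 9.2, 0.8],
                        [0.55, 10.1, 0.6],
                        [0.33, 8.7, 0.9]])
    y_train = np.array([0.52, 0.67, 0.44])  # measured large-model scores (toy)

    regressor = RandomForestRegressor(n_estimators=200, random_state=0)
    regressor.fit(X_train, y_train)

    # Predict the expensive model's score on a new task evaluated only with
    # the proxy; no large-model fine-tuning run is needed.
    X_new = np.array([[0.47, 9.5, 0.7]])
    print(regressor.predict(X_new))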

Talk Title: Torch.func: Functional Transforms in PyTorch

Presenter:
Shagun Sodhani, Tech Lead, Meta

About the Speaker:
I am a tech lead at FAIR (AI research at Meta), where I lead a team of researchers training large-scale foundation models for developing neuromotor interfaces. My long-term research goal is to develop lifelong learning agents that can continually improve as they make decisions in the real world.

Talk Track: Research or Advanced Technical

Talk Technical Level: 4/7

Talk Abstract:
This talk will focus on using functional transforms in PyTorch via the torch.func module. We will walk through use cases where torch.func improves the use of PyTorch APIs. Common examples include computing per-sample gradients, vectorizing functions, and creating ensembles of models. We will also talk about gotchas to look out for when using torch.func.

What You’ll Learn:
The audience will learn how to use functional transforms in PyTorch for use cases like computing per-sample gradients, vectorizing functions, and creating ensembles of models. They will also learn about gotchas to look out for when using torch.func.
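
As a small taste of the per-sample-gradients use case, here is a sketch following the pattern from PyTorch’s torch.func documentation: compose grad() with vmap() over the batch dimension.

    # Per-sample gradients with torch.func: compose grad() with vmap().
    # Follows the pattern in PyTorch's torch.func documentation.
    import torch
    from torch.func import functional_call, grad, vmap

    model = torch.nn.Linear(4, 1)
    params = dict(model.named_parameters())

    def loss_fn(params, x, y):
        # Evaluate the model functionally on a single sample.
        pred = functional_call(model, params, (x.unsqueeze(0),)).squeeze()
        return torch.nn.functional.mse_loss(pred, y)

    # grad w.r.t. params, vectorized over the batch: one gradient per sample.
    per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0, 0))

    x = torch.randn(8, 4)
    y = torch.randn(8)
    grads = per_sample_grads(params, x, y)  # each entry has batch dim 8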

Talk Title: Exploring the Frontier of Graph Neural Networks: Key Concepts, Architectures, and Trends

Presenter:
Ankit Pat, Lead Machine Learning Applied Scientist, Genesys

About the Speaker:
Ankit Pat is a Lead Machine Learning Applied Scientist at Genesys, specializing in leading and contributing to applied Machine Learning research with a strong emphasis on product-centric approaches. He has over 9 years of industry experience and 4 years of academic research in Machine Learning (ML) and Artificial Intelligence (AI). He holds a Master’s degree in Computer Science, specializing in AI, from the University of Waterloo, and both a Bachelor’s and Master’s in Mathematics and Computing from the Indian Institute of Technology, Kharagpur.

Ankit has authored over 10 patents and published 4 research papers at leading international conferences, including AAAI.

Talk Track: Research or Advanced Technical

Talk Technical Level: 5/7

Talk Abstract:
In today’s data-driven world, the relationships and connections within data are as crucial as the data itself. Graph Neural Networks (GNNs) have emerged as a groundbreaking technology that leverages these relationships to uncover insights and drive innovation across various domains, from social network analysis to drug discovery. This talk will delve into the fundamentals of GNNs, exploring their unique ability and versatility in modelling complex data structures through graph representations. We will discuss the core principles, architectures, and applications of GNNs, providing a comprehensive overview of how they can transform your approach to data analysis and problem-solving. Whether you’re a data scientist, researcher, or industry professional, this talk will equip you with the knowledge to harness the full potential of GNNs in your work.

What You’ll Learn:
– Basics of Graph Structures and Graph Theory
– Fundamental Concepts, Key Components, and Architecture of GNNs
– Real-World Applications of GNNs Across Various Domains
– Advanced GNN Techniques: Including Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and more.
– Emerging Trends and Future Directions in Graph Neural Networks
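
To make the GCN item in the list above concrete, here is a minimal, framework-free sketch of a single graph convolutional layer; it is a textbook illustration (Kipf & Welling-style normalization), not material from the talk.

    # Minimal single GCN layer from first principles: symmetrically
    # normalize the adjacency (with self-loops), aggregate neighbour
    # features, then apply a learned linear map. Textbook illustration.
    import torch
    import torch.nn as nn

    class GCNLayer(nn.Module):
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # D^{-1/2} (A + I) D^{-1/2} normalization.
            a_hat = adj + torch.eye(adj.size(0), device=adj.device)
            d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
            norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
            return torch.relu(self.linear(norm @ x))

    # Toy usage: 4 nodes in a ring, 3 features each.
    x = torch.randn(4, 3)
    adj = torch.tensor([[0., 1., 0., 1.],
                        [1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [1., 0., 1., 0.]])
    out = GCNLayer(3, 8)(x, adj)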

Talk Title: Extending PyTorch for Custom Compiler Targets

Presenter:
Arash Taheri-Dezfouli, Compiler Engineer, Groq

About the Speaker:
Arash is a technical lead on Groq’s Compiler team, focusing on front-end integrations with PyTorch, ONNX, JAX, and other ML/HPC frameworks. He received his master’s and undergraduate degrees from the University of Toronto and has worked on compiler technologies for machine learning/AI workloads for the last 7 years.

Talk Track: Research or Advanced Technical

Talk Technical Level: 5/7

Talk Abstract:
Groq has delivered the world’s first LPUs, focused on LLM inference and deep learning acceleration. However, to support inference-specific accelerators, metadata on the PyTorch graph, such as custom or unsupported data types, is often required to improve the performance of the model. This metadata is non-trivial to obtain through PyTorch’s graph export systems (e.g., TorchScript + ONNX, torch.compile, torch.export). In order to maximize inference efficiency for custom hardware targets, we present a generalizable technique that allows users to annotate PyTorch code at different granularities with arbitrary information. PyTorch model annotations can be a simple and powerful means to adjust the mapping of the workload to accelerators, yet maintain the semantics of the PyTorch inference graph. Our technique allows easy injection of information into a PyTorch graph at the Python level and easy recovery of the information and semantics during downstream ingestion. We demonstrate how to use this technique to modify existing PyTorch models in place to enable custom data types and persist precision information through PyTorch into our compiler.

What You’ll Learn:
A generalizable technique that allows users to annotate PyTorch code at different granularities with arbitrary information and modify existing PyTorch models in place to enable custom data types and persist precision information through PyTorch into the Groq compiler.

Talk Title: Revolutionizing Fraud Prevention: Harnessing AI and ML to Safeguard Banking from Fraud

Presenters:
Angela Xu, Director, Risk Control and Fraud Analytics, CIBC | Kemi Borisade, Senior Fraud Data Analyst, CIBC

About the Speakers:
Angela Xu brings over 15 years of strategic data analytics experience in premier financial institutions to the Toronto Machine Learning Summit Conference. As a seasoned technical expert and strategic thinker, Angela has demonstrated success in developing and implementing innovative strategies. With a Master’s degree in Statistics from the Georgia Institute of Technology in Atlanta, USA, and another Master’s degree in Computer Science from China, Angela possesses a diverse skill set that she leverages to drive initiatives to tangible results.

Currently leading the Risk Control & Fraud Analytics team at CIBC, Angela focuses on regulatory breach reporting and fraud strategies for secured and unsecured lending products such as mortgages, loans, and lines of credit. Her leadership is characterized by a commitment to generating innovative ideas, influencing stakeholders, and delivering real value to both her organization and its clients.

Passionate about leveraging cutting-edge technologies to solve complex problems, Angela is dedicated to applying the latest advancements in machine learning and data analytics to add value to her company and enhance the experiences of its clients.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 3/7

Talk Abstract:
In 2023, the Canadian Anti-Fraud Centre reported staggering losses of over CAD $550 million due to fraudulent activities, underscoring the urgent need for advanced security measures. At CIBC, we confront the dynamic challenges of this evolving landscape head-on by embracing cutting-edge tools, technologies, and methodologies.

Our journey is marked by formidable obstacles, including the limitations of rule-based fraud strategies, the delicate balance between sales and risk mitigation, inadequate tools for documentation validation, and the pressing demand for rapid fraud assessment. To address these challenges, our team embarked on a transformative path, leveraging next-generation self-learning Machine Learning models supplemented with custom thresholds. This approach enhances fraud detection capabilities, minimizes false positives, optimizes sales strategies, and fortifies client protection.

Furthermore, through strategic partnerships, we’ve embraced solutions such as Optical Character Recognition (OCR) to streamline documentation validation processes. Exploring the integration of graph databases, Natural Language Processing (NLP), and foundational models, we aim to unlock new frontiers in fraud prevention.

The culmination of our efforts heralds a new era in security, where the synergy of advanced AI and ML technologies promises unparalleled efficiency and efficacy in combating fraud. Join us as we unveil the future of fraud prevention in Canadian banking.

What You’ll Learn:
After attending this presentation, you will gain a comprehensive understanding of the prevailing fraud challenges within the financial industry. You will also acquire foundational knowledge of next-generation near real-time self-learning Machine Learning models, along with insights into their fundamental concepts. Additionally, you’ll explore advanced cutting-edge technologies utilized in fraud detection, equipping you with valuable insights into the evolving landscape of financial security.

Talk Title: GenAI: A New Renaissance in Product Development

Presenter:
Emerson Taymor, SVP, Design, InfoBeans

About the Speaker:
Emerson Taymor is a serial entrepreneur, investor, and currently the SVP of Design at InfoBeans as well as the creator of multiple hospitality brands in New York City. He co-founded the digital product studio, Philosophie, where he worked with enterprise executives and startup founders to launch over 300 digital products before it was acquired by InfoBeans in 2019.

In the digital world, Emerson works with leaders to help them unstick their big digital ideas through rapid experimentation and making.

In the physical world, he helped create one of the hottest speakeasy concepts in the East Village and one of the top rated cocktail bars in Brooklyn.

He loves weaving together these two worlds, exploring new parts of the world and being a sports fanatic.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 1/7

Talk Abstract:
Drawing inspiration from the Renaissance, a time of explosive cultural and economic growth, we explore how GenAI is poised to revolutionize product development. I’ll explore its groundbreaking impact on global collaboration, user research, and product development process. I will also highlight potential caveats and pitfalls. This talk promises an action-packed look at GenAI’s role in shaping a new era of human-centric and economically impactful product development, ensuring you’re equipped to experiment in this modern renaissance.

What You’ll Learn:
– Understand quickly why GenAI is going to have such a profound impact on the world
– Learn specific ways that you can leverage GenAI when working with global teams
– Gain tool kits for integrating GenAI in your user research process to help you go faster
– Discover new tools that can improve all stages of the product development process to maximize speed
– Understand specific caveats and pitfalls that should be avoided when thinking about GenAI in the product process

Talk Title: Successfully Integrating AI in Your Strategy and Business Operations – Lessons Learnt from Investing

Presenter:
Patrick Tammer, Senior Investment Director, Scale AI

About the Speaker:
Patrick Tammer is a Senior Investment Director and Policy Advisor to the Canadian Government at Scale AI, Canada’s global AI innovation cluster. He currently manages a $125M portfolio of AI industry innovation projects. He is also an AI researcher at the Harvard Kennedy School. Prior to his current role, he spent 4 years as a strategy consultant with BCG.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 2/7

Talk Abstract:
Drawing from a portfolio of over 100 AI and big data projects, I aim to share actionable guidance on how businesses can harness AI to drive innovation, efficiency, and competitive advantage. Attendees will learn how to:
1. Navigate the AI Landscape: I will present findings from Scale AI’s flagship report “The State of AI in Canada” (https://www.scaleai.ca/aiatscale-2023/) to provide a comprehensive overview of how Canada compares globally in AI advancements.
2. Identify and Collaborate with Ecosystem Partners: I will provide strategies for identifying the right partners across academia, startups, and AI solution providers to foster innovation and growth.
3. Structure Successful AI Initiatives: Sharing lessons learned from Scale AI’s extensive project portfolio, I will outline how to effectively structure internal AI initiatives for maximum impact.
4. Develop AI Talent: Insights on crafting a forward-thinking AI talent strategy will be discussed, enabling organizations to build essential in-house capabilities.
5. Access Non-Dilutive Funding: Information on leveraging government non-dilutive funding to de-risk investments in AI technologies will be highlighted, offering a pathway to innovative project financing.

What You’ll Learn:
The session addresses the critical gap of integrating cutting-edge AI and big data technologies into mainstream business operations. It aims to equip leaders with the knowledge and tools necessary to navigate the complexities of AI adoption and to leverage these technologies for strategic advantage.

Learning Format and Audience Engagement Details:
The session is designed to be a concise, high-impact presentation lasting 15-30 minutes. It will include a combination of case study insights, strategic frameworks, and interactive Q&A, crafted to engage a diverse audience of C-suite executives, IT professionals, and strategic decision-makers.

Target Audience:
Tailored for senior decision-makers, this presentation will benefit those looking to effectively deploy AI and big data technologies to reshape their business landscapes. It promises valuable insights for anyone involved in technology strategy and implementation.

Talk Title: How Is GenAI Reshaping the Business?

Presenter:
Jaime Tatis, VP-Chief Insights Architect, TELUS

About the Speaker:
Jaime Tatis is a visionary and technology thought leader with strong business acumen and a proven track record of collaborating with both technical and non-technical teams to drive critical initiatives. Jaime is passionate about building and developing diversely skilled high-performance teams, growing future leaders and driving business efficiency through continuous improvement and innovation.

As the Chief Insights and Analytics Officer at TELUS, a world-leading technology company, Jaime works with partners across the TELUS family of companies leading the advancement of data, AI and analytics strategy and the company’s cultural shift to create cutting-edge customer technology solutions. By thoughtfully providing data insights and analytics, along with next-generation cloud-based architecture to enable world-class Artificial Intelligence and Machine Learning capabilities, Jaime is improving business outcomes and providing best-in-class customer experiences for TELUS.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 2/7

Talk Abstract:
Generative AI offers transformative advantages across all sectors, with unlimited possibilities. The audience will learn about AI applications and how they enhance efficiency, foster innovation, and elevate problem-solving, with real examples.

What You’ll Learn:
Discover how generative AI is transforming the business by enabling unprecedented scalability and impact. The talk will delve into the essential elements for successful AI scaling, emphasizing the importance of robust data foundations and the alignment of leadership vision with operational execution.
– The necessity of an iterative approach and the pivotal role of scalable platforms in driving innovation and growth within organizations.
– Real-world examples showcasing successful AI scaling efforts that have delivered substantial value to businesses.

Talk Title: Connecting the Dots Between AI Ethics and Sustainability

Presenter:
Sasha Luccioni, AI and Climate Leader, Hugging Face | Monish Gandhi, Founder, Gradient Ascent Inc | Deval Pandya, Vice President of AI Engineering, Vector Institute

About the Speaker:
Dr. Sasha Luccioni is the AI & Climate Lead at Hugging Face, a global startup in responsible open-source AI, where she works on creating tools and techniques for quantifying AI’s societal and environmental costs. She is also a founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML).

Monish has been passionate about machine learning, AI, and model development for a long time: over ten years ago he built a computer vision-based system that could help players win at pool billiards. He has also built models for everything from airplane landing gear systems to crowd behaviour. Over the past few years, he’s worked on almost 60 ML/AI projects and brought this passion and experience to help businesses thrive in the coming AI-powered world.

Monish founded Gradient Ascent (GA) – a trusted provider of AI products, services, and solutions for non-AI companies within financial services, technology, industrial, and other sectors. GA has also made investments in AI businesses and is a member of AngelOne. Previously, he held product management, professional services, technical management, and sales roles at a number of fast growing technology companies. Monish often speaks at events and writes about the role of AI in business.

He has a master’s degree in Finance and Financial Law (University of London) and an undergraduate degree in Systems Design Engineering (with Dean’s Honours) from the University of Waterloo. In his free time, he loves to read, cook, and play tennis. He is a Board Member at CycleTO.

Deval is the Vice President of AI Engineering at Vector Institute and is passionate about the role of digital technologies in accelerating the energy transition and energy equity, as well as about building machine learning teams and products for societal good.

He holds a Doctorate in Mechanical Engineering and a Master’s in Aerospace Engineering. Before joining Vector Institute, Deval led Data Science and Machine Learning teams at Shell. While in that role, his work spanned various domains, including predictive maintenance, GHG accounting, the power value chain, nature-based solutions, biofuels, and hydrogen.

He is passionate about the role of digitalization in the energy transition and was the co-founder of the Future Energy Lions network at Shell. Deval also serves as a Director on the technical steering committee of Moja Global, a not-for-profit collaborative project that brings together a community of experts to develop open-source software under the Linux Foundation, used for country-level greenhouse gas accounting in the AFOLU sector.

Deval is on the task force for Digitalization in Energy at the United Nations Economic Commission for Europe (UNECE). He enjoys traveling and cooking in his free time.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 2/7

Talk Abstract:
AI ethics and sustainability considerations have typically been treated separately: work that aims to estimate the carbon footprint of AI models does not typically address their contribution to shifting the balance of power and amplifying inequalities, while work that aims to evaluate the societal impacts of AI models focuses on aspects such as bias and fairness and consistently overlooks their water and energy use. In this panel, we will discuss how the two subjects are related and intertwined, especially in the context of generative AI technologies, which come with many challenges in terms of both ethics and the environment.

What You’ll Learn:
– Key ethical challenges in AI (bias, fairness, representativity, copyright)
– Environmental impacts of AI (energy, water, natural resources)
– Current state of the art in research on both
– How to make informed trade-offs between potential benefits of (generative) AI technologies while remaining cognizant of their ethical and environmental impacts

Talk Title: GenAI for Productivity?

Presenter:
Mandy Wu, Senior Software Development Manager, Wealthsimple

About the Speaker:
Mandy is a Senior Software Development Manager at Wealthsimple, where she leads Machine Learning & Data Engineering. These teams provide a simple and reliable platform that empowers the rest of the company to iterate quickly on machine learning applications and GenAI tools, and to leverage data assets to make better decisions. Previously, Mandy worked in the NLP space and as a data scientist.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 2/7

Talk Abstract:
At Wealthsimple, we leverage GenAI internally to improve operational efficiency and streamline monotonous tasks. Our GenAI stack is a blend of tools we developed in house and third-party solutions.

Today, roughly half of the company uses these tools in their day-to-day work. We will share the lessons we learned about adoption, user behaviour, and how to effectively leverage these tools to improve productivity.

What You’ll Learn:
– Impact of GenAI on internal productivity
– Strategies to drive adoption of GenAI tools (what worked, what didn’t work)

Talk Title: Deploying LLMs on Kubernetes Environments

Presenter:
Arthur Vitui, Senior Data Scientist Specialist Solutions Architect, Red Hat Canada

About the Speaker:
Arthur is a senior data scientist specialist solution architect at Red Hat Canada. With the help of open source software, he is helping organizations develop intelligent application ecosystems and bring them into production using MLOps best practices.

He has over 15 years of experience in the design, development, integration, and testing of large-scale service enablement applications.

Arthur is pursuing his PhD in computer science at Concordia University, and he is a research assistant in the Software Performance Analysis and Reliability (SPEAR) Lab. His research interests are related to AIOps, with a focus on performance and scalability optimization.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 4/7

Talk Abstract:
Learn how to deploy LLMs on Kubernetes environments and use them to enhance your intelligent applications ecosystem with chatbots that talk to your documentation or help you in operations management tasks such as anomaly detection.

What You’ll Learn:
– Learn how to configure a Kubernetes environment, such as Red Hat OpenShift, to support the deployment of a Large Language Model (applied case for hybrid environments).
– Use the deployed LLM to build a RAG-based system to “talk” to your documentation (operations applied use case); a minimal sketch follows this list.
– Use the deployed LLM to spot and predict traffic anomalies for deployed and monitored applications (operations applied use case).
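For illustration, here is a minimal Python sketch of the documentation use case, assuming the LLM is served behind an OpenAI-compatible endpoint inside the cluster (as model servers such as vLLM provide); the service URL and model name are placeholders, not part of the talk:

    import requests

    # Hypothetical in-cluster endpoint of an OpenAI-compatible model server
    # (e.g., vLLM behind a Kubernetes Service); URL and model are assumptions.
    LLM_URL = "http://llm-service.model-serving.svc.cluster.local:8000/v1/chat/completions"

    def ask_docs(question: str, context: str) -> str:
        # Minimal RAG-style call: retrieved documentation is stuffed into the prompt.
        payload = {
            "model": "mistral-7b-instruct",  # placeholder model name
            "messages": [
                {"role": "system", "content": "Answer using only the provided documentation."},
                {"role": "user", "content": f"Documentation:\n{context}\n\nQuestion: {question}"},
            ],
        }
        resp = requests.post(LLM_URL, json=payload, timeout=60)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]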

Talk Title: Unlocking the Potential of Data in the Aviation Industry

Presenter:
Reem Al-Halimi, AI Enterprise Architect, Navblue, An Airbus Company

About the Speaker:
Dr. Reem Al-Halimi is the AI Enterprise Architect at NAVBLUE, An Airbus Company. Through her role, she is responsible for re-envisioning flight operations products into smart products, making users’ workflow more efficient and their decisions more effective. Dr. Al-Halimi received her Ph.D. in Computer Science from the University of Waterloo in 2002. Since then, she has worked on a variety of machine learning projects that span many ML areas including generative AI, computer vision, Natural Language Processing, anomaly detection, and predictive models.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 2/7

Talk Abstract:
The aviation industry generates enormous amounts of data. Yet much of that data is underutilized, translating into value-add opportunities in multiple areas, including machine learning and sustainability. In this talk, I will take the audience on a tour of the data flow within the aviation ecosystem to appreciate the amount of data produced and the vast potential that data holds for a more sustainable, less disrupted experience for passengers, airlines, and airports alike.

What You’ll Learn:
1. Understand the data that flows in the aviation ecosystem
2. Understand the value add this data can bring and the challenges faced by the industry to utilize it
3. Learn what to expect when working with data in a mature industry that is just starting to pivot towards more modern data infrastructures.

Talk Title: Generative AI for Financial Services

Presenters:
Patricia Arocena, Senior Director and Head, Generative AI Innovation Labs, RBC | John Bolton, Director of Engineering, Generative AI Innovation Labs, RBC

About the Speakers:
Patricia Arocena is the Head of the Generative AI Innovation Labs, North America. Working within the Innovation and Technology organization, she is responsible for understanding emerging technologies in the Generative AI space and helping drive their adoption across the bank.

Patricia spearheaded a First-of-a-Kind Program to explore the application of Generative AI technologies, including next-gen Large Language Models (LLMs), centered on business problems and in collaboration with business and functional partners. Recently, she was awarded the 2023 RBC Performance Conference Award and Leo Award for her contribution to advancing innovation.

Prior to joining RBC, Patricia held leadership innovation positions at Tier-1 research institutions in Canada, PwC, and other banks, where she helped create Data and AI-powered solutions for the Financial Services industry. She earned her PhD in Computer Science and MEng in Computer Engineering from the University of Toronto and has been published in numerous scientific journals.

Patricia lives in Toronto with her family and is an avid gardener when there is no snow on the ground.

John has spent his career building unique digital experiences with a focus on integrating emerging technologies into user-facing applications. He oversees the development of proof-of-concepts that leverage Generative AI to address business problems within RBC. He holds a MSc in Human Computer Interaction from Queen’s University.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 2/7

Talk Abstract:
Traveler, there is no road, the road is made by walking. Join us for an expert talk that delves into the transformative power of Generative Artificial Intelligence (AI) and its impact on the Financial Services industry. We begin by presenting an overview of the new AI race’s key moments over the last year and how it is starting to shape our future. We then delve into emerging use cases in the industry, along with a discussion of challenges and opportunities to fuel the next wave of innovation in products and services. We conclude with a visionary outlook for the future of Financial Services, and what is still needed to power enterprise adoption and growth, 24/7, 365 days a year.

What You’ll Learn:
The impact of Generative AI on the Financial Services Industry. Emerging use cases, challenges, and opportunities. Outlook on what is coming next.

Talk Title: GenAI Investing in 2024

Presenter:
Margo Wu, Lead Investor, Georgian

About the Speaker:
Margo Wu is a Lead Investor at Georgian, a growth-stage investment fund focused on B2B software companies leveraging applied machine learning and artificial intelligence. In her role, she is involved in deal selection, due diligence, post-investment support, and board governance. Prior to joining Georgian, Margo was a Senior Product Manager at Amazon and, before that, co-founded a biotech company called Uma Bioseed and served as the Chief Operating Officer at OneSpout. She started her career in enterprise software consulting at Accenture. Margo completed a double degree in Environment and Business and Chemistry at the University of Waterloo and also earned an MBA at Cornell’s Johnson Graduate School of Management.

Talk Track: Business Strategy or Ethics

Talk Technical Level: 2/7

Talk Abstract:
An updated overview of the GenAI market landscape and investment activity, along with investor insights for fundraising.

What You’ll Learn:
An updated overview of the GenAI landscape and investment activity, as well as some investor insights for fundraising.

Talk Title: AI Governance: Accelerate Responsible, Transparent, and Explainable AI Workflows

Presenter:
Nassim Tayari, watsonx Canada Leader, IBM Canada

About the Speaker:
Nassim Tayari is a distinguished technology leader with over 15 years of experience in data science and engineering management. Currently, she serves as the watsonx Canada Leader at IBM, overseeing a large cross-functional team of AI engineers and Solutions Architects that assists Canadian clients in adopting trusted generative AI within their organizations.

Prior to this role, Nassim held various leadership positions at Borealis AI and the Royal Bank of Canada. Her background also includes hands-on experience as a data scientist. Nassim is a visionary technologist with a passion for harnessing the power of bleeding-edge technology to revolutionize businesses.
Nassim is passionate about creating innovative and impactful solutions that leverage the power of data and AI, prioritizes teamwork, diversity, and excellence, and strives to make a positive difference in the world through her work.
Nassim holds a PhD in the applications of Machine Learning in Medical Imaging and has multiple publications and certifications in the field.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
The hype around AI, both the value it can offer and the concerns around how it can be implemented, has reached a fever pitch in recent months. AI governance is not just a “nice to have” in today’s AI environment. It provides a level of organizational rigor and human oversight into how AI models are created and deployed.
While it doesn’t replace the MLOps processes that organizations have, it complements them with activities intended to strike the appropriate balance between the benefits and risks of AI. The focus of this talk is a comprehensive, platform-agnostic, end-to-end solution for managing life cycles and risk for both traditional predictive machine learning and new generative AI models.
watsonx.governance is an AI-driven, highly scalable governance, risk and compliance platform that can centralize siloed risk management functions within a single environment.

What You’ll Learn:
As part of this talk, the audience will learn about the importance of AI governance and how to ensure that these advanced technologies are used ethically, responsibly, and in a manner that benefits society as a whole.

The learners will be introduced to a governance framework that outlines the principles, policies, and practices for governing the development, deployment, and use of generative artificial intelligence (AI) systems.

Workshop: Building Reproducible ML Processes with an Open Source Stack

Presenter:
Iddo Avneri, VP Customer Success, lakeFS

About the Speaker:

Iddo has a strong software development background. He started his career in the army, where he served for 8 years, eventually heading the main software development school. Following his service, Iddo built technical teams for several startups in the observability, cloud, and data spaces.

Prior to joining the lakeFS team, Iddo built the technical enterprise field team at Turbonomic from the ground up, served as Field CTO, and was the account executive for some of the company’s largest customers, up to the $1.9B IBM acquisition in 2021. At Treeverse, the company behind lakeFS, Iddo runs all customer engagements from sales to customer success.

Talk Track: Workshop

Talk Technical Level: 4/7

Talk Abstract:

Machine learning experiments consist of Data + Code + Environment. While MLflow Projects are a great way to ensure the reproducibility of data science code, they cannot ensure the reproducibility of the input data used by that code.

In this talk, we’ll go over the trifecta required for truly reproducible experiments: Code (MLflow and Git), Data (lakeFS), and Environment (Infrastructure-as-Code).

This talk will include a hands-on code demonstration of reproducing an experiment, while ensuring we use the exact same input data, code, and processing environment as a previous run. We will demonstrate programmatic ways to tie all moving parts together: from creating commits that snapshot the input data, to tagging and traversing the history of both code and data in tandem.
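For a flavour of what tying the pieces together can look like, here is a minimal sketch (not the workshop’s actual code) that records both the Git commit of the code and a lakeFS commit ID of the data snapshot on an MLflow run; the repository path and commit ID are placeholders:

    import subprocess
    import mlflow

    # Pin the code version (Git) and the data version (a lakeFS commit ID,
    # obtained from your lakeFS client or UI; shown here as a placeholder).
    code_commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    data_commit = "c5912a3e"  # placeholder lakeFS commit ID for the input snapshot

    with mlflow.start_run(run_name="reproducible-experiment"):
        mlflow.set_tag("git_commit", code_commit)
        mlflow.set_tag("lakefs_commit", data_commit)
        mlflow.log_param("input_data", "lakefs://example-repo/main/datasets/train/")
        # ... train and log metrics here; re-running with the same two commits
        # (and a pinned environment) reproduces the experiment.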

What You’ll Learn:

This talk will demonstrate the progress we have made toward making ML processes, workflows, and data truly reproducible through open-source tooling.

Talk Title: Web Extraction With LLMs

Presenter:
Patrick Halina, Machine Learning Scientist, Pinterest

About the Speaker:
I lead the Content Mining team at Pinterest. We use ML to understand webpages and extract useful information for our users.

Talk Track: Applied Case Studies

Talk Technical Level: 3/7

Talk Abstract:
Since the dawn of the internet, people have scraped websites to build up datasets. How has that changed with the advent of LLMs? This talk will discuss our learnings in applying state-of-the-art approaches to understanding webpages and extracting information. We’ll share lessons from parsing over 1 billion webpages per day at Pinterest.

What You’ll Learn:
– Overview of state-of-the-art systems for extracting data from webpages
– Our experience in testing out different approaches, from GPT to open-source LLMs to simple models
– Results from research into our own internal approaches to web extraction

Talk Title: Gen AI in Banking: Lessons Learned

Presenter:
Yannick Lallement, Chief AI Officer, Scotiabank

About the Speaker:
Yannick Lallement is the VP & Chief AI Officer at Scotiabank, where he works on developing the use of AI/ML technologies throughout the Bank. Yannick holds a PhD in artificial intelligence from the French National Institute of Computer Science. Prior to joining Scotiabank, Yannick worked on a series of AI/ML projects for different public and private organizations.

Talk Track: Applied Case Studies

Talk Technical Level: 2/7

Talk Abstract:
I will present Scotiabank’s Gen AI journey so far, from collecting ideas across the bank in an inventory all the way to our first use cases in production, and share what we learned along the way about how Gen AI applies to the industry (examples will be about banking, but the lessons will be applicable across industries).

What You’ll Learn:
How Gen AI can effectively be useful, how to find the right use cases, how to deploy it at scale.

Talk Title: Optimizing Recommendations on Wattpad Home

Presenters:
Gayathri Srinivasan, Senior AI/ML Product Manager, Wattpad | Abhimanyu Anand, Data Scientist, Wattpad

About the Speakers:
Gayathri Srinivasan is an accomplished AI product manager at Wattpad, specializing in personalized rankings and recommendations. With over 7 years of diverse product management experience across various industries, including startups, scale-ups, and enterprises, she brings a wealth of knowledge and expertise to her role.

Abhimanyu is a Data Scientist at Wattpad, an online social storytelling platform, where he leads the development of recommender systems for content recommendations. He holds an M.Sc. in Big Data Analytics from Trent University, with a specialization in natural language processing. He has developed and implemented robust AI solutions throughout his career across diverse domains, including internet-scale platforms, metals and mining, oil and gas, and e-commerce.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
At Wattpad, the world’s leading online storytelling platform, recommendation systems are pivotal to our mission of connecting readers with the stories they love. The Home Page is the primary gateway to Wattpad’s diverse content and experiences. As the platform has evolved, we’ve introduced new content types and classes of stories to meet various business objectives, such as user engagement, merchandising, and marketing. This evolution necessitated recalibrating our homepage recommender system to effectively balance multiple business goals. In this talk, we will discuss how we integrated these objectives into the home recommender stack using probabilistic algorithms derived from the domain of reinforcement learning. Additionally, we will share the challenges we encountered during this transition, such as data sparsity and the cold start problem, along with insights into our development of novel graph neural network architectures tailored for recommendation systems and the new datasets we developed to overcome these hurdles.
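As background for the “probabilistic algorithms derived from the domain of reinforcement learning” mentioned above, a common instance is a multi-armed bandit such as Thompson sampling; the toy sketch below is purely illustrative and is not Wattpad’s production algorithm:

    import random

    # Illustrative Thompson sampling over content classes competing for a home
    # slot (a toy example, not Wattpad's actual model): each arm keeps Beta(a, b)
    # counts updated from observed engagement.
    arms = {"originals": [1, 1], "community": [1, 1], "paid": [1, 1]}

    def pick_arm() -> str:
        samples = {name: random.betavariate(a, b) for name, (a, b) in arms.items()}
        return max(samples, key=samples.get)

    def update(name: str, clicked: int) -> None:
        a, b = arms[name]
        arms[name] = [a + clicked, b + (1 - clicked)]

    arm = pick_arm()        # choose which content class fills the slot
    update(arm, clicked=1)  # reward observed from user engagement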

What You’ll Learn:
The audience will learn about:
1. How Recommendation systems are used at scale for content recommendation.
2. Challenges associated with recommendation systems like data sparsity, balancing multiple objectives, etc.
3. How we solved these problems at Wattpad using novel graph-based models, multi-objective ranker, etc.

Talk Title: Ask the Graph: How Knowledge Graphs Help Generative AI Models Answer Questions

Presenter:
Michael Havey, Senior Solutions Architect, Amazon Web Services

About the Speaker:
Mike Havey is a Senior Solutions Architect for AWS with over 25 years of experience building enterprise applications. Mike is the author of two books and numerous articles.

Talk Track: Applied Case Studies

Talk Technical Level: 3/7

Talk Abstract:
Generative AI has taken the world by storm. The Retrieval Augmented Generation (RAG) pattern has emerged as an effective way to incorporate your organization’s data to provide current, accurate answers to questions that users ask a Large Language Model (LLM). Knowledge Graphs make RAG even more accurate and helpful. The secret sauce: relationships! I describe what a Knowledge Graph is, why it has long been a great database for answering questions, and how it can help an LLM using a pattern called Graph RAG. I present examples of Graph RAG in action from industries such as finance and healthcare.
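To make the pattern concrete, here is a minimal Graph RAG sketch; the graph schema, Cypher query, and use of the neo4j Python driver are illustrative assumptions, not the speaker’s implementation:

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    def retrieve_facts(company: str) -> list:
        # Pull relationships around an entity to ground the LLM's answer.
        cypher = (
            "MATCH (c:Company {name: $name})-[r]->(n) "
            "RETURN type(r) AS rel, n.name AS target LIMIT 25"
        )
        with driver.session() as session:
            return [f"{company} {rec['rel']} {rec['target']}"
                    for rec in session.run(cypher, name=company)]

    facts = retrieve_facts("Acme Corp")
    prompt = ("Answer using only these facts:\n" + "\n".join(facts)
              + "\n\nQ: Who supplies Acme Corp?")
    # `prompt` is then sent to the LLM of your choice.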

What You’ll Learn:
You will learn how a Large Language Model (LLM) benefits from Retrieval Augmented Generation (RAG) to provide current, accurate answers to user questions grounded in your organization’s data. A graph database helps make RAG more accurate because it maintains relationships between business objects. You will learn what a Knowledge Graph is and how to build Graph RAG on a Knowledge Graph. As a takeway, you will see the business benefit of Graph RAG, the value of more accurate, helpful answers!

Talk Title: Building and Evaluating Prompts on Production Grade Datasets

Presenters:
Bhuvana Adur Kannan, Lead – Agent Performance & ML Platform, Voiceflow | Yoyo Yang, Machine Learning Engineer, Voiceflow

About the Speakers:
Bhuvana heads the Conversational Agent performance and ML platform at Voiceflow, aiming to improve conversational agent performance for customers. She has prior experience working on enterprise data systems for major Canadian banks and financial institutions.

Yoyo is a Machine Learning Engineer at Voiceflow, a conversational AI company. She works on various facets of machine learning systems, from model training and prompt tuning to backend architecture and real-time inference. Yoyo has been working in ML and Data Science for the past five years. She is committed to transforming ideas into robust, scalable solutions and continually pushing the boundaries of what’s possible.

Talk Track: Applied Case Studies

Talk Technical Level: 6/7

Talk Abstract:
Constructing prompts per task can be challenging given the many unknowns of running them in production. In this talk, we’ll cover how we created several production-style datasets for two types of LLM tasks to productize prompt-based features. We’ll walk through the methodology, techniques, and lessons learned from developing and shipping prompt-based features. The techniques in this talk will be widely applicable, but focused on conversational AI.
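A minimal illustration of the idea, under the assumption that each prompt variant is scored against a labeled dataset; the intents, example data, and the call_llm stand-in are all hypothetical:

    # Minimal prompt-evaluation loop; `call_llm` is a stand-in for whichever
    # model client you use, and the dataset here is a toy example.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError  # plug in your model client

    dataset = [
        {"utterance": "I want to cancel my order", "label": "cancel_order"},
        {"utterance": "Where is my package?", "label": "track_order"},
    ]

    PROMPT = ("Classify the intent as cancel_order or track_order.\n"
              "Utterance: {u}\nIntent:")

    def evaluate(prompt_template: str) -> float:
        hits = 0
        for ex in dataset:
            pred = call_llm(prompt_template.format(u=ex["utterance"])).strip()
            hits += int(pred == ex["label"])
        return hits / len(dataset)  # accuracy for this prompt variant

    # evaluate(PROMPT) returns the score once call_llm is wired up; comparing
    # scores across prompt variants turns prompt iteration into measurement.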

What You’ll Learn:
How to approach dataset creation, iterate on prompts and measure success of releases in production.

Talk Title: Dynamic Huff's Gravity Model with Covariates for Site Visitation Prediction

Presenter:
Winston Li, Founder, Arima

About the Speaker:
Winston is the founder of Arima, a Canadian startup working on synthetic consumer data. Our flagship product, the Synthetic Society, is a privacy-by-design, individual-level database that mirrors the real society. Built using trusted sources like census, market research, mobility, and purchase patterns, it contains 50,000+ attributes for the 40 million individuals in Canada and 335 million in the US, and enables advanced modelling at the most granular level.

Aside from Arima, Winston is an avid researcher in data mining, publishing on outlier detection and synthetic data generation. Winston is the co-creator of PyOD, one of the most widely used open-source Python toolboxes for data mining.

Talk Track: Applied Case Studies

Talk Technical Level: 3/7

Talk Abstract:
Huff’s Gravity Model is a widely used statistical model for predicting the probability of a consumer visiting a location, as a function of distance, attractiveness, and the available alternatives. First formulated by David Huff in 1963, it has been widely used in marketing, economics, retail research, and urban planning.
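In the classic formulation, the probability that a consumer visits store j is the store’s attractiveness raised to an exponent alpha, divided by distance raised to an exponent beta, normalized over all alternatives. A tiny Python rendering for one consumer (the example numbers are made up):

    def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
        # P_j = (A_j**alpha / d_j**beta) / sum_k (A_k**alpha / d_k**beta)
        utilities = [a ** alpha / d ** beta for a, d in zip(attractiveness, distances)]
        total = sum(utilities)
        return [u / total for u in utilities]

    # Example: the closer, more attractive store captures most of the visits.
    print(huff_probabilities(attractiveness=[100, 60], distances=[2.0, 5.0]))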

In this presentation, we introduce the Dynamic Huff’s Gravity Model with Covariates, a technical enhancement to the traditional gravity model in which additional covariates like mobility and population behavioural data are included to further increase model accuracy and explainability. We cover the model formulation, examples of additional datasets that can be included, and a case study with a Canadian retailer to demonstrate how the model can provide business value.

What You’ll Learn:
– What is the Huff Gravity Model
– What business problems can the model solve
– What companies/use cases are ideal for this technique

Talk Title: Why Real-Time Event Streaming Pattern is Indispensable for an AI Native Future

Presenter:
Debadyuti Roy Chowdhury, VP Product, InfinyOn

About the Speaker:
Deb leads product management at InfinyOn, a distributed streaming infrastructure company. Deb’s career since 2006 spans IT, server administration, software and data engineering, leading data science and AI practices, and product management in HealthTech, Public Safety, Manufacturing, and Ecommerce.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
Will our favourite applications be able to deliver AI-powered experiences without basic data quality and reliable infrastructure?

Distributed event streaming is the backbone of real-time analytics, insights, and intelligence. Stateful stream processing enables bounded and unbounded stream processing patterns that deliver rich datasets, streamlined explainability, and delightful consumer experiences.

Event streaming patterns are useful for data collection, data enrichment, data profiling and aggregation, and measuring drift and explainability.
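As a plain-Python illustration of one such pattern (not tied to InfinyOn’s stack or any particular engine), a stateful operator can keep a bounded window over an unbounded stream and flag drift when the recent mean departs from a reference; all names and thresholds below are assumptions:

    from collections import deque

    class DriftMonitor:
        # Stateful windowed aggregation with a naive drift check.
        def __init__(self, reference_mean, window=1000, tolerance=0.2):
            self.reference = reference_mean
            self.window = deque(maxlen=window)
            self.tolerance = tolerance

        def observe(self, value: float) -> bool:
            self.window.append(value)
            current = sum(self.window) / len(self.window)
            return abs(current - self.reference) > self.tolerance  # True = drift

    monitor = DriftMonitor(reference_mean=0.5)
    for event in [0.4, 0.9, 0.95, 0.97]:  # stand-in for an unbounded stream
        if monitor.observe(event):
            print("drift detected")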

In this talk, I will share my experience with AI use cases and event streaming.

What You’ll Learn:
Event-driven architecture and event streaming patterns are a big power-up for applied AI:
– Architecture patterns currently in use in IoT, B2B SaaS, eCommerce, and FinTech
– Tangible benefits of event-driven architecture for AI in production in terms of:
  – Operational cost
  – Infrastructure overhead
  – Developer productivity and velocity

Talk Title: LLMs for Revolutionizing Credit Risk Assessment

Presenters:
Josh Peters, Data Science Manager, Wealthsimple | Kiarash Shamsi, ML Researcher, Wealthsimple

About the Speakers:
Josh is a Data Science Manager at Wealthsimple. For the last 2 years, he has led the development of the company’s first credit risk models and created the data pipelines to support new credit products.

Prior to Wealthsimple, Josh spent 7+ years working on Data Science problems in the insurance, banking and fraud spaces through his time at Accenture and Airbnb.

Josh’s educational background is in Finance, Statistics and Computer Science.

Kiarash Shamsi is a Ph.D. student at the University of Manitoba, currently working as a financial ML researcher at Wealthsimple. He has published as a first author at conferences such as NeurIPS, ICLR, and ICBC. His research interests include large language models, temporal graph learning, graph neural networks, topological data analysis, and blockchain data analysis and systems.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
The session on leveraging Large Language Models (LLMs) in revolutionizing credit risk assessment will commence with an introduction to the potential impact of LLMs on the finance industry. This will be followed by an exploration of the key benefits of LLM integration, including the enhancement of risk assessment accuracy, the utilization of alternative data sources, and the automation of credit processes. The discussion will delve into real-life case studies and examples to illustrate the practical applications of LLMs in credit risk assessment. Additionally, the session will address potential challenges and ethical considerations surrounding the use of LLMs in this context. The talk will conclude with insights on the future of credit risk assessment with LLMs, leaving room for engaging discussions and Q&A.

What You’ll Learn:
Improved Risk Assessment
LLMs can analyze vast amounts of unstructured data, such as financial records, transaction histories, and market trends, to provide more comprehensive and accurate risk assessments. By processing and generating human-like text, LLMs can uncover insights and patterns that traditional credit risk models may miss.

Enhanced Contextual Understanding
LLMs can provide a deeper contextual understanding of borrower profiles and financial data. They can analyze text-based information, like loan applications and customer interactions, to gain a more holistic view of a borrower’s creditworthiness.

Handling Nonlinear Relationships
LLMs can capture complex nonlinear relationships within credit data, enabling them to make more accurate credit risk predictions compared to traditional linear models.

Improved Fraud Detection
LLMs can analyze transaction patterns and identify anomalies that may indicate fraudulent activities, enhancing an institution’s ability to detect and prevent fraud.

Automating Credit Risk Processes
LLMs can automate the credit risk analysis process, generating credit approvals, pricing recommendations, and repayment terms. This can lead to faster decision-making, reduced manual effort, and minimized human error.

Leveraging Alternative Data
LLMs can integrate alternative data sources, such as social media profiles and online behavior, to assess credit risk for borrowers with limited or no credit history. This allows for more comprehensive and inclusive credit risk evaluations.

Enhancing Portfolio Management
By analyzing market trends and customer behavior, LLMs can assist in optimizing credit portfolios, improving risk management, and enhancing overall lending strategies.

Overall, the integration of LLMs in credit risk assessment has the potential to revolutionize the industry by providing more accurate, efficient, and inclusive credit risk evaluations, ultimately leading to better lending decisions and improved financial outcomes.

Talk Title: Rapid Deployment of LLMs into Production: Strategies and Insights

Presenters:
Irena Grabovitch-Zuyev, Staff Applied Scientist, PagerDuty | Suchita Venugopal, Senior Machine Learning Engineer, PagerDuty

About the Speakers:

Irena Grabovitch-Zuyev is a Staff Applied Scientist at PagerDuty, specializing in Data Mining, Machine Learning, and Information Retrieval. She earned her Master of Science in Computer Science from the Technion – Israel Institute of Technology. Her thesis, titled “Entity Search in Facebook,” delved into the realm of Information Retrieval in Social Networks.

In her current position, Irena plays a significant role in developing the PagerDuty Copilot Assistant, leveraging Generative AI to streamline the PagerDuty Operations Cloud. Additionally, she has contributed to the development of the Auto-Pause Incident Notifications feature, an important component of AIOps aimed at noise reduction. This feature employs a prediction model to automatically pause notifications for transient alerts, resolving them within minutes.

Before joining PagerDuty, Irena spent five years at Yahoo Research as a senior member of the Mail Mining Team. During this time, she focused on Automatic Extraction and Classification using machine learning algorithms. Her work was deployed in production within Yahoo’s mail backend, processing hundreds of millions of messages daily.

In addition to her professional accomplishments, which include presenting papers at top conferences and filing patents, Irena finds immense fulfillment in her role as a mother to her three children.

Suchita Venugopal is a Senior Machine Learning Engineer at PagerDuty, where she specializes in implementing Generative AI features and leveraging Large Language Models (LLMs). She holds a Master of Science in Big Data from Simon Fraser University in Vancouver, Canada.

In her current role at PagerDuty, Suchita is instrumental in integrating LLM-based features, such as the PagerDuty Copilot assistant and customer support chatbots that utilize Retrieval-Augmented Generation (RAG). She also contributes to the development of Machine Learning (ML) models used in PagerDuty AIOps, helping to automate and optimize IT operations.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
In the fast-paced domain of generative AI, the deployment of Large Language Models (LLMs) into production settings introduces a distinctive blend of challenges and opportunities. This presentation will detail our experience in incorporating LLMs into our product line within a challenging two-month period, a move motivated by the transformative potential of generative AI for enhancing our offerings. We navigated through various obstacles, such as constrained planning timelines, shifting requirements, the management of diverse stakeholder expectations, adaptation to emerging technologies, and the coordination of simultaneous workflows. These hurdles highlighted the pivotal role of data science and machine learning engineering teams in facilitating LLM integration, emphasizing the importance of security, testing, monitoring, and the pursuit of alternative solutions.

We will share the systematic approach we employed for identifying LLM use cases, validating their feasibility, engineering effective prompts, and crafting a comprehensive testing strategy. Additionally, we will introduce the LLM Service, a custom solution designed to ensure secure and efficient LLM access. This service underscores the significance of robust security protocols, the protection of customer data, the flexibility to switch LLM models to optimize performance for specific use cases, and the provision of redundancy in case of provider outages. Our discussion aims to illuminate how our expedited LLM deployment signifies the dawn of a new era in AI-driven product innovation.
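The provider-redundancy idea generalizes well beyond any one stack; here is a minimal sketch of the fallback pattern, with placeholder clients rather than PagerDuty’s actual LLM Service:

    # Try providers in order and fall back on failure; `call_primary` and
    # `call_secondary` are placeholders for real provider clients.
    def call_primary(prompt: str) -> str: ...
    def call_secondary(prompt: str) -> str: ...

    PROVIDERS = [("primary", call_primary), ("secondary", call_secondary)]

    def complete(prompt: str) -> str:
        last_error = None
        for name, call in PROVIDERS:
            try:
                return call(prompt)
            except Exception as err:  # provider outage, rate limit, etc.
                last_error = err
                print(f"{name} failed, trying next provider: {err}")
        raise RuntimeError("all LLM providers failed") from last_error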

What You’ll Learn:
In this talk, you will learn the effective processes and strategies that enabled the rapid deployment of Large Language Models (LLMs) into our production environment. We will share key takeaways from our journey, highlighting what aspects are non-negotiable, such as robust security measures and the protection of customer data, alongside insights into what strategies yielded the best outcomes. Additionally, we will openly discuss the mistakes we encountered along the way, offering valuable lessons to help you avoid similar pitfalls in your own LLM deployment projects. This session promises a candid look into the challenges and triumphs of integrating generative AI into product offerings at speed.

Talk Title: Growing your ML Career via Technical Writing and Speaking: Tips and Lessons

Presenter:
Susan Chang, Principal Data Scientist, Elasticsearch

About the Speaker:
Susan Shu Chang is a principal data scientist at Elastic, which powers search around the world. Previously, she built machine learning at scale in the fintech, social, and telecom industries. She is the author of Machine Learning Interviews, published by O’Reilly.

Talk Track: Applied Case Studies

Talk Technical Level: 2/7

Talk Abstract:
This talk goes through the process of how I started writing about technical topics, leading to fast career growth, building an audience, speaking and keynoting at conferences, and eventually even a book deal. The talk aims to show you how to start, and how to gain career growth opportunities with writing and speaking.

This talk is based on the personal experience of the author of the O’Reilly book, Machine Learning Interviews.

What You’ll Learn:
– How to start writing technical content, such as a blog
– How to build an audience through speaking and writing
– Finding opportunities to speak and be published

Talk Title: Upskilling Your Full-Stack Development Team in Machine Learning

Presenters:
Kathryn Hume, Vice President, Digital Channels Technology, RBC | Nijan Giree, Director Mobile Development, Digital, RBC | Arup Saha, Director, Android Development, RBC | Alex Lau, Senior Director, Android and Mobile Services Development, RBC

About the Speaker:
Kathryn Hume is the Vice President of Digital Channels Technology at the Royal Bank of Canada. She is responsible for the software engineering and development of the mobile and online banking platforms at RBC. Alongside her primary role at RBC, she is a board member for AI-Redefined and CanadaHelps, and an advisor for Lytical Ventures. She has led multiple technology teams at RBC, including the personal investments engineering team and the Borealis AI machine learning team. Prior to joining RBC, Kathryn held leadership positions at Integrate.ai and Fast Forward Labs, where she helped over 50 Fortune 500 organizations develop and implement AI programs. She is a widely respected author and educator on technology and innovation, with work appearing at TED, in HBR, and in the Globe and Mail. She has given guest lectures on AI at Harvard, MIT, and the University of Toronto, and served as a visiting professor at the University of Calgary Faculty of Law. She holds a PhD in Comparative Literature from Stanford University and speaks seven languages.

Nijan Giree is an experienced developer with a keen interest in artificial intelligence. He is committed to devising practical AI-driven solutions to complex challenges.

Arup is an experienced Director of Development with a demonstrated history of working in the banking industry. He is skilled in Android Development, Machine Learning, AI, Deep Learning, NLP, CI/DevOps, Google Cloud Platform, Database, SOA, WebSphere, Enterprise Architecture, and Agile Methodologies.

Alex Lau is the Senior Director of Android and Mobile Services development at RBC. He leads a passionate team of software engineers that is responsible for developing RBC’s Android Banking and Avion applications. Alex has been developing mobile solutions for the past 10 years. Prior to joining RBC, he led development teams at TD, Good Technology/BlackBerry, and IBM, building a variety of products, from consumer-facing applications to enterprise tools like MDM and BYOD containers. Alex holds Master of Mathematics and Bachelor of Mathematics degrees in Computer Science from the University of Waterloo.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
As the machine learning landscape evolves, it’s becoming easier for traditional software development teams to build and implement models themselves. Generative AI further democratizes ML implementation, with traditional tasks like classification or summarization being possible with well-engineered prompts.
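For instance, a classification task that once required a trained model can be sketched as a single prompt; the example below uses the OpenAI Python SDK, and the model name and label set are assumptions, not the team’s actual setup:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def classify_message(text: str) -> str:
        # Zero-shot classification via a well-engineered prompt.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system",
                 "content": ("Classify the banking message as one of: fraud, "
                             "card_issue, balance_inquiry, other. "
                             "Reply with the label only.")},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content.strip()

    print(classify_message("I don't recognize this charge on my statement."))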

In this session, we will walk through how an Android native development team built skills to implement various kinds of machine learning models themselves. We’ll share lessons learned along the way, and tips for scaling and democratizing machine learning across the enterprise.

What You’ll Learn:
– Practical tips for building machine learning skills in a software development team
– Techniques to scale knowledge of a new domain across a large team
– Lessons learned in the nuances of applying various neural network techniques, and how to overcome obstacles in production at scale for a 10-million-client user base

Talk Title: AI As An Engineering Discipline

Presenter:
Rajat Arya, Co-Founder, XetHub

About the Speaker:
XetHub Co-founder with over 20 years of industry experience in various roles including support, engineering, product, and sales. Some highlights include co-designing the ML Data Platform at Apple, shipping the 1st version of Microsoft OneDrive, being an early engineer on AWS RDS (scaling database instances from 5K to 100K), and being the 1st employee of the ML startup GraphLab/Dato/Turi.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
The field of Artificial Intelligence (AI) has transformed over the last few decades, evolving from a deeply mathematical and theoretical discipline into a software engineering discipline. For the first time in history, AI is truly accessible. However, open questions remain: what is the right way to use AI? What are the engineering best practices around AI? In this talk, we first briefly discuss how modern AI came about and how it has changed the rules of Machine Learning development. Then we will try to establish some new guidelines and engineering principles that will allow you to cut through the noise of AI tooling and assist in determining what is most effective for your tasks. We will close with some examples you can apply to your next ML or AI project.

What You’ll Learn:
In this talk, you will learn how to approach AI/ML development with software engineering principles, recognize that AI development is simply ML development, and come away with a set of tools and processes to adopt in your next ML project.

Talk Title: AI-ready Data Infrastructure for Real-time Sensor Data Analytics on the Edge

Presenter:
Christian P. Calderon, MLOps & Deployment Engineer, Zapata AI

About the Speaker:
Christian Picón Calderón is an accomplished MLOps Engineer at Zapata Computing, Inc. with over 3 years of experience in designing and implementing machine learning platforms based on open-source technology. Christian has been instrumental in designing and architecting systems for machine learning applications for various client projects.

Christian’s responsibilities also include deploying models into production, managing core aspects like retraining, monitoring, exposure as a service, drift detection, version control, and auditability. He collaborates closely with data engineers to define data pipelines and requirements for the models and takes ownership of the technical interview component for Software, Machine Learning, and Data Engineering positions, as well as technical onboarding.

Talk Track: MLOps & Infrastructure

Talk Technical Level: 5/7

Talk Abstract:
AI and ML use cases involving real-time sensor data in edge environments present numerous challenges, including real-time data cleaning and transformation, merging with historical data, and running power-hungry models on-premises. Using Zapata AI’s race strategy analytics work with Andretti Global as a case study, Christian Picón Calderón will share practical lessons in building the data architecture necessary to support real-time data analytics use cases on the edge, exploring parallel use cases across industries.

What You’ll Learn:
How to build an AI-ready data infrastructure to overcome the challenges in deploying real-time sensor data analytics applications to the edge.

Talk Title: Optimizing Personalized User Experience: In-session Recommendations Across E-commerce Verticals

Presenters:
Tina Shen, Machine Learning Engineer, Loblaw Digital | Charles Zhu, Machine Learning Engineer, Loblaw Digital

About the Speaker:
Tianshu (Tina) Shen is a dedicated Machine Learning Engineer at Loblaw Digital, with over 2 years of experience in the e-commerce industry and over 4 years specializing in recommender systems. Tina holds a Master of Applied Science from the University of Toronto. Her research primarily focused on conversational recommender systems, and she published more than five conference papers during her master’s studies.

At Loblaw Digital, Tina has been instrumental in designing and building diverse machine learning solutions that deliver personalized recommendations across several e-commerce platforms such as Joe Fresh and Real Canadian Superstore. Her work significantly enhances user experiences through personalized and real-time product recommendations across these various verticals.

Through her presentation “Optimizing Personalized User Experience: In-session Recommendations Across E-commerce Verticals”, Tina aims to share valuable insights acquired from hands-on application of advanced methodologies at Loblaw Digital – inspiring peers and attendees towards innovative strides within today’s dynamic e-commerce landscape.

Charles Zhu is a Machine Learning Engineer on the P13n Recommendations Team at Loblaw Digital. He primarily works in machine learning productionization and ML pipeline governance for the company’s Helios Recommender Engine. Prior to working at Loblaw, Charles worked with the City of Toronto analyzing transportation safety data, and in astrophysics as a software engineer.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
This talk, presented by Loblaw Digital, delves into the nuanced domain of personalized recommendation systems within e-commerce. The presentation will begin with an examination of user behaviours on different platforms, as well as existing solutions, identifying their limitations and illuminating the pathway towards innovative solutions that significantly enhance user engagement and experience.

Reflecting on the increasing demand for personalization in today’s competitive e-commerce landscape, various use cases of in-session recommendations will be discussed. These are effectively executed across our well-known platforms such as Loblaws (Grocery), Real Canadian Superstore (Grocery), Shoppers Drug Mart (Beauty, Personal Care & Health Products), and Joe Fresh (Fashion Industry).

In highlighting our in-house in-session recommendation model (detailed at https://arxiv.org/abs/2401.16433), we show how it takes into account multiple item/user data sources to offer effective personalized suggestions across different shopping situations. The model’s flexibility allows it to optimize customer experiences by addressing the complexity of user behaviour in practice, such as 1) the co-existence of multiple shopping intentions, 2) the multi-granularity of such intentions, and 3) interleaving behaviour (switching intentions) within a shopping session.

An overview of the system design behind this model is presented alongside its impressive performance results – providing you a clear picture of practically applicable solutions enhancing shopper experiences on digital platforms. Additionally, we will briefly share the ongoing online evaluations and discuss anticipated improvements moving forward. By attending this session by Loblaw Digital, attendees can expect to gain comprehensive insights into tailoring recommendations that accurately capture customer needs in real-time across various verticals within e-commerce.

What You’ll Learn:
1. Comprehensive understanding of the current landscape and limitations of personalized recommendation systems in e-commerce.

2. Insight into the practical application and benefits of in-session recommendations across various platforms in fashion, grocery, beauty, personal care and health products.

3. Detailed understanding of Loblaw Digital’s innovative in-house adaptable recommendation model that leverages multiple item/user data to generate effective personalized, real-time recommendations.

4. Technical insights into our Neural Pattern Associator (NPA), a pioneering item-association-mining model that employs vector quantization to encode common user intentions as quantized representations.

5. An overview of how the NPA models users’ shopping intentions through an attention-driven lookup during the reasoning phase, resulting in coherent and self-interpretable recommendations.

6. A sneak peek into ongoing online evaluations and potential improvements for enhancing e-commerce experiences via tailored real-time product suggestions.

Talk Title: Deploying and Evaluating RAG pipelines with Lightning Studios

Presenter:
Rob Levy, Staff Engineer, Lightning AI

About the Speaker:
Robert Levy is a Staff Engineer at Lightning AI, a NYC-based startup. At Lightning AI, the team is focused on defining the next-generation development and deployment pipeline for other AI and ML operators. Specifically, Rob’s work focuses on the data ingress and egress needs of applied models and the optimization of model deployments.

Talk Track: Applied Case Studies

Talk Technical Level: 5/7

Talk Abstract:
Learn how to use Lightning Studios to quickly deploy AI agents and accelerate your evaluation of RAG pipelines.

What You’ll Learn:
Learn how to use Lightning Studios to quickly deploy AI agents and accelerate your evaluation of RAG pipelines.

Talk Title: RAGs in Production: Delivering Impact Safely and Efficiently

Presenters:
Everaldo Aguiar, Senior Engineering Manager, PagerDuty | Wendy Foster, Data Products Leader, Shopify | Margaret Wu, Senior Data Scientist, Advanced Analytics and AI, CIBC | Christopher Parisien, Senior Manager, Applied Research, NVIDIA

About the Speakers:
Everaldo started his Data Science journey as a Data Science for Social Good Fellow at the Center for Data Science and Public Policy at UChicago. Today he is a Senior Engineering Manager at PagerDuty, where he leads both the Data Science and Data Engineering teams, and a faculty member at the Khoury College at Northeastern University. Prior to that, he was a Data Science Lead in Shopify’s Growth organization. Everaldo is originally from Brazil, and Seattle has been home to him for 6 years.

With over 10 years of experience leading data organizations at scale, Wendy Foster divides her time between data start-up advising and applied data science education, supporting the next wave of data leaders and innovation in this rapidly evolving space.

Christopher Parisien is a Senior Manager of Applied Research at NVIDIA, leading the development of NeMo Guardrails, a toolkit for safety and security in Large Language Models. Chris holds a PhD in Computational Linguistics from the University of Toronto, where he used AI models to explain the strange ways that children learn language. During his time in industry, he helped build the first generation of mainstream chatbots, developed systems to understand medical records, and served as Chief Technology Officer at NexJ Health, a patient-centred health platform. His current focus at NVIDIA is to bring trustworthy language models to large enterprises.

Talk Track: Panel Discussion

Talk Technical Level: 4/7

Talk Abstract:
“Urgent” and “unplanned” are among the least favorite words in any productive team’s dictionaries. Unexpected issues disrupt roadmaps, delay important work, lead to burnout, and hurt customer trust.

Here at PagerDuty we’ve been leveraging AI to help our customers experience fewer incidents and resolve the ones they do have faster. This often involves giving them streamlined access to the information they need about our product and their individual setups, and an efficient way for them to get answers to complex questions on the fly.

As technologies evolved and we rolled out our generative AI infrastructure, RAGs became an excellent candidate for those use cases. They allow for an easy-to-automate process of building “knowledge bases” and using those to power chat-like applications, but productionizing them in a safe manner is often more challenging than building the RAG systems themselves.

In this panel we’ll discuss some of these challenges, how we’ve been tackling them, as well as existing areas of open research we’re excited to pursue in the coming months.

What You’ll Learn:
Attendees will learn how to tackle some common (and uncommon) challenges that come with bundling RAG models into their own products. We’ll cover a few corner cases that were completely unexpected as well as automation processes that we designed to ensure that complex parts of our systems could be maintained with minimal engineering effort.

Workshop: The Gap From Prototype to Production: Lessons Learned from Implementing Applications with LLMs

Presenter:
Ian Yu, Machine Learning Engineer, Groupby Inc.

About the Speaker:
Machine Learning Engineer with 3 years of experience in data enrichment and production-grade ML systems in the eCommerce industry, with 2 Generative AI projects in production and 2 LLM open research contributions.

Talk Track: Workshop

Talk Technical Level: 5/7

Talk Abstract:
2023 was a good year for prototyping LLM-based applications, but 2024 is a great year for productionizing them. However, going into production, there are many unforeseen questions and challenges. These include decisions between managed solutions and custom implementations, balancing the rigour of experimentation with the speed of application development, ensuring maintainability post-deployment, and evaluating and optimizing systems. Common issues also include LLM output inconsistencies at scale that were not apparent during prototyping, tightly coupled systems that are hard to pivot or modify, unclear evaluation objectives, and misaligned product-user fit.

In this workshop, we will address these questions and challenges at the implementation level, including:
– Strategically aligning LLMs with product design and application logic
– Empirical tips on designing LLM chaining
– What incorporating LLMs alongside non-LLM components in a system looks like
– Application maintainability post-deployment
– Various touch points for build vs. buy, such as prompt versioning, orchestration, and vector databases
– Trends and predictions spurred by the increased focus on production work

This workshop also includes a little hands-on work to demonstrate a bite-sized system.

What You’ll Learn:
Necessary knowledge and considerations when going from LLM application prototype to production

Prerequisite Knowledge (if required)
Entry-level understanding of prompting, vector databases, and system design

Workshop: A Practitioner's Guide To Safeguarding Your LLM Applications

Presenter:
Shashank Shekhar, Co-Founder, Dice Health

About the Speaker:
Shashank Shekhar is a machine learning engineer and researcher. He is the co-founder of Dice Health, a startup dedicated to developing automation tools for healthcare providers, aimed at accelerating the delivery of care. Dice Health is currently a part of the Next AI 2024 incubator cohort.

Prior to founding Dice Health, Shashank worked at Meta AI Research, pursuing research on scaling laws, self-supervised computer vision, and foundation models. Before that, he was at the Vector Institute doing research on explainable AI, visual reasoning, and dynamic neural networks. His first foray into machine learning happened at the Indian Institute of Science’s Data Science department, where he worked on projects involving visual question answering, entity re-identification, and object detection.

Shashank holds a master’s degree in Computer Engineering from the University of Guelph and a bachelor’s degree in Electronics Engineering from the Indian Institute of Technology Dhanbad. His extensive ML experience also includes collaborations with NEXT-AI Toronto, Layer6 AI Toronto, Shell R&D Center Bangalore, HyperWorks Imaging Bangalore, and Samsung Research Institute Delhi.

Shashank, alongside his collaborators from Meta AI, Stanford, and Tübingen University, was a recipient of the Best Paper Award at NeurIPS 2022 – the premier machine learning research conference. He was also a Vector Institute Scholar in AI in 2019, an award given to exceptional graduate students in Ontario in the field of AI.

Talk Track: Workshop

Talk Technical Level: 6/7

Talk Abstract:
In this workshop, participants will learn essential techniques for enhancing the reliability and security of their Large Language Model (LLM) applications. Despite their powerful capabilities, LLMs often face challenges such as generating inconsistent outputs, straying off-topic, exposing sensitive data, etc. This workshop is tailored to give practitioners a broad understanding of current LLM limitations, as well as providing them with tools to address these limitations by generating structured outputs, ensuring topic relevance, mitigating hallucinations, and safeguarding company data.

This workshop will be tailored towards data scientists, ML engineers, and anyone involved in developing or managing LLM applications in the real world who is looking to enhance the robustness of their LLM systems. There will be hands-on programming components using open-source tools to reinforce the concepts covered during the workshop.
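
To make the structured-output idea concrete, here is a minimal sketch of a generate-validate-regenerate loop, assuming Pydantic v2; the schema, field names, and stubbed LLM call are illustrative rather than the workshop’s actual code.

    import json
    from pydantic import BaseModel, ValidationError

    # Illustrative schema; the fields are hypothetical, not from the workshop.
    class Triage(BaseModel):
        urgency: int
        department: str
        summary: str

    def validate_or_retry(llm_call, max_retries: int = 2) -> Triage:
        """Request JSON from the model, validate it, and retry on failure."""
        last_err = None
        for _ in range(max_retries + 1):
            raw = llm_call()
            try:
                return Triage.model_validate_json(raw)  # Pydantic v2 API
            except (ValidationError, json.JSONDecodeError) as err:
                last_err = err  # a real system would feed this error back into the next prompt
        raise RuntimeError(f"No valid structured output after retries: {last_err}")

    # Stub standing in for a real LLM client call:
    print(validate_or_retry(lambda: '{"urgency": 2, "department": "billing", "summary": "refund request"}'))

The same pattern generalizes to guardrail libraries: the key design choice is that validation failures are recoverable events, not crashes.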

What You’ll Learn:
Participants of this workshop will gain a comprehensive understanding of:

Generating Structured Outputs for LLMs: Learn to generate, validate, and, if necessary, regenerate outputs that align with interoperability requirements, ensuring that LLM applications interact seamlessly with existing codebases.

Topic Relevance: Master techniques to ensure that LLMs consistently produce content that is relevant and on-topic, adheres to company brand guidelines, and focuses on delivering the desired user experience.

Hallucination Mitigation: Develop strategies to reduce the risk of LLMs generating inaccurate or misleading information. This includes setting programmable guardrails and providing a reliable ground truth data source for content generation.

Data Leakage Prevention: Understand and implement best practices to protect sensitive information, such as health records and financial details, from being inadvertently exposed by LLMs.

Safety Guardrails Implementation: Learn to establish robust safety guardrails to minimize risks like unauthorized model behavior (“jailbreaks”), ensure safe interactions with third-party applications, and manage operational costs related to LLM use.

Prerequisite Knowledge (if required)
Basic knowledge of large language models, either via APIs such as OpenAI ChatGPT or Anthropic Claude, or via local models such as Meta Llama or Mixtral.

Workshop: Kùzu - A Fast, Scalable Graph Database for Analytical Workloads

Presenter:
Prashanth Rao, AI Engineer, Kùzu, Inc.

About the Speaker:
Prashanth is an AI engineer at Kùzu based in Toronto. In recent years, he’s worked with numerous databases and data modeling paradigms, with a focus on data engineering, analytics and machine learning to power a variety of applications. He enjoys engaging with the data community and blogging @ thedataquarry.com in his spare time.

Talk Track: Workshop

Talk Technical Level: 5/7

Talk Abstract:
In this session, we will introduce Kùzu, a highly scalable, extremely fast, easy-to-use, open source embedded graph database designed for analytical query workloads. Users who are familiar with DuckDB in the SQL world will find Kùzu to be a refreshingly familiar graph analogue. A number of state-of-the-art methods from graph database research are highlighted.

The workshop will include a practical component that showcases how simple and easy-to-use Kùzu is for data scientists and engineers. We will demonstrate popular use cases by transforming a relational dataset (in the form of tables) into a knowledge graph, running Cypher queries on the graph, analyzing the dataset using graph algorithms, and training a simple graph neural network using PyTorch Geometric to compute node embeddings and store them in the graph database for downstream use cases. We will end by summarizing how these methods can help build advanced RAG systems that can be coupled with an LLM downstream.
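
As a rough illustration of the table-to-graph step, here is a minimal sketch using Kùzu’s Python API, assuming a recent Kùzu release; the schema and CSV file names are made up:

    import kuzu

    db = kuzu.Database("./demo_db")
    conn = kuzu.Connection(db)

    # Define a graph schema and load the relational tables into it.
    conn.execute("CREATE NODE TABLE User(name STRING, age INT64, PRIMARY KEY (name))")
    conn.execute("CREATE REL TABLE Follows(FROM User TO User)")
    conn.execute('COPY User FROM "users.csv"')
    conn.execute('COPY Follows FROM "follows.csv"')

    # Query the resulting graph with Cypher.
    result = conn.execute("MATCH (a:User)-[:Follows]->(b:User) RETURN a.name, b.name LIMIT 5")
    while result.has_next():
        print(result.get_next())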

Additional notes

In addition to the workshop where we go into the hands-on concepts of knowledge graphs and how to use them, we’d very much like to have a 30-minute talk that introduces the idea of Kùzu and how it’s different from other graph databases, and the core innovations under the hood. If the organizers feel that the content is better separated into two parts (a separate talk on the main stage and the workshop with the practical component), that’s perfectly fine as well. For this reason, I’ve opted for any of the available presentation times.

What You’ll Learn:
1. What knowledge graphs are
2. The characteristics of competent graph database systems
3. How to work with graphs on real-world data
4. How to query a graph in Cypher
5. How to run graph algorithms for graph data science
6. How to do graph machine learning

The core message that attendees will take away is this: There are times when modeling tabular/relational data as a graph is necessary and useful, e.g., to obtain a more object-oriented model over your records or find indirect connections/paths between the entities in the data. In such cases, using an open source, embedded graph database like Kùzu is a simple and low-barrier-to-entry option to analyze the connected data at a much greater depth via graph data structures.

Prerequisite Knowledge (if required)
Basic Python programming skills (all the background for what knowledge graphs are, and how to work with a graph database will be provided to users who are new to the world of graphs).

Workshop: Optimizing Large Language Model Selection for Efficient GenAI Development

Presenters:
Royal Sequeira, Machine Learning Engineer, Georgian | Aslesha Pokhrel, Machine Learning Engineer, Georgian | Christopher Tee, Software Engineer, Georgian

About the Speakers:
Royal is a Machine Learning Engineer and is a part of Georgian’s R&D team. He helps Georgian’s portfolio companies develop product features and accelerate GTM strategies. His expertise is in Natural Language Processing with broader experience in Multimodal Machine Learning, Computer Vision, Information Retrieval, and Reinforcement Learning. He has publications in top-tier conferences such as ACL, EMNLP, WSDM, and SIGIR. In the past, he has worked at Ada Support, LG Toronto AI Lab, and Microsoft Research India. In 2018, he founded Sushiksha, a mentorship organization that has mentored hundreds of medical and engineering students across rural India with both technical and soft skills. In his free time, he reads books, likes to learn new languages, and enjoys a hot chai with his friends.

Aslesha is a Machine Learning Engineer at Georgian, helping portfolio companies leverage ML solutions in various business use cases. She graduated from the University of Toronto with a Master’s in Applied Computing and a Bachelor’s in Computer Science and Physics. Her background includes significant research in deep learning and representation learning in various data modalities including language, time series and tabular data, which she now applies to driving innovation and efficiency.

Christopher is a tech enthusiast with a passion for code optimizations, efficient machine learning solutions and MLOps. He has extensive experience in building high-performance machine learning pipelines and orchestrating the lifecycle of machine learning models.
During his spare time, Christopher enjoys cycling and skiing.

Talk Track: Workshop

Talk Technical Level: 5/7

Talk Abstract:
When developing a Generative AI use case, developers face a variety of choices, particularly with the proliferation of foundational and open-source models. The decision process to choose the suitable large language model (LLM) for a given use case, however, may involve fine-tuning, crafting tailored prompts, cost considerations, and evaluations, which can become cumbersome without a modular design approach. In this workshop, we will explore tools such as DSPy and FrugalGPT to help pick the best LLM for the use case. This will be a hands-on session focusing on practical applications.

What You’ll Learn:
The main goal of the workshop is to provide attendees with hands-on experience with tools such as DSPy and FrugalGPT to build a modular pipeline for choosing the best LLM for specific needs based on performance, cost, and scalability.
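
As a flavour of the modular approach, here is a minimal DSPy sketch; it assumes a recent DSPy release (the configuration API has changed across versions) and an illustrative model identifier:

    import dspy

    # Model identifier is illustrative; any LiteLLM-style provider/model string works.
    lm = dspy.LM("openai/gpt-4o-mini")
    dspy.configure(lm=lm)

    # Declare the task once; the same module can then be re-run against
    # different LMs to compare quality and cost before committing to one,
    # which is the spirit of the FrugalGPT-style cascade.
    qa = dspy.Predict("question -> answer")
    print(qa(question="What is a knowledge graph?").answer)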

Prerequisite Knowledge (if required)
Install the following libraries before the workshop: ollama, dspy-ai, frugalGPT

Workshop: Getting started with Generative Text and Fine-tuning LLMs in Hugging Face

Presenter:
Myles Harrison, Consultant & Trainer, NLP from Scratch

About the Speaker:
Myles Harrison is a data scientist and trainer at NLP from Scratch, an independent provider of training and consulting services in natural language processing, large language models, and Generative AI. Previously he led the data science program at a tech bootcamp, delivering thousands of hours of in-person and online training in data, machine learning, and Big Data, and worked for many years in consulting in the data & AI space. You can find out more at mylesharrison.com.

Talk Track: Workshop

Talk Technical Level: 3/7

Talk Abstract:
If you’re new to working with LLMs hands-on in code, this is the session for you! In this introductory workshop, you’ll get working with Hugging Face and the transformers library for generating text from LLMs and applying parameter-efficient fine-tuning methods to a generative text model.

Whether you are starting from near zero or have some prior knowledge of large language models, this workshop is your jumping off point to get you started on working with LLMs.

What You’ll Learn:
– Define large language models (LLMs) and the transformer architecture; understand the history of their development, key concepts, and high-level details of their structure and function
– Be introduced to Hugging Face and the transformers library and see applications thereof in code, using LLMs for generative text
– Define fine-tuning and understand the motivation for applying it to existing large language models for generative text
– Apply fine-tuning to a generative text model using the Hugging Face transformers library and a text dataset
– Be introduced to approaches for working with large language models efficiently on consumer hardware: parameter-efficient fine-tuning (PEFT) and model quantization
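
For a sense of what PEFT looks like in code, here is a minimal LoRA sketch with the transformers and peft libraries; the base model and hyperparameters are illustrative, not the workshop’s exact choices:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "gpt2"  # illustrative; any causal LM on the Hub works similarly
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    config = LoraConfig(
        r=8,               # rank of the low-rank adapter matrices
        lora_alpha=16,
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # only a tiny fraction of weights will train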

Prerequisite Knowledge (if required)
Basic Python and knowledge of LLMs

Workshop: Building an Open-Source Agentic RAG Application with Llama 3

Presenters:
Greg Loughnane, Co-Founder, AI Makerspace | Chris Alexiuk, Co-Founder & CTO, AI Makerspace

About the Speakers:
Dr. Greg Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Since 2021 he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.

Chris Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.

Talk Track: Workshop

Talk Technical Level: 4/7

Talk Abstract:
This year, people and companies aim to build more complex LLM applications; namely, ones that leverage context and reasoning. For applications to leverage context well, they must provide useful input to the context window, through direct prompting or search and retrieval. To leverage reasoning is to leverage the Reasoning-Action pattern, and to be “agentic” or “agent-like.”

The tool with the largest community building LLM applications is LangChain. LangChain v0.2, the latest version of the leading framework, directly incorporates LangGraph, the engine that powers stateful (and even fully autonomous) agent cycles.

In this session, we’ll break down the concepts and code you need to understand and build the industry-standard agentic RAG application, from soup to nuts.
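
As a taste of the plumbing involved, below is a minimal LangGraph sketch of a retrieve-then-generate graph, assuming a recent langgraph release; the retriever and LLM are stubbed out and all names are illustrative:

    from typing import TypedDict
    from langgraph.graph import StateGraph, END

    class RAGState(TypedDict):
        question: str
        context: str
        answer: str

    # Stubs standing in for a real vector-store search and a real LLM call.
    def search_stub(q: str) -> list[str]:
        return ["(retrieved document)"]

    def llm_stub(prompt: str) -> str:
        return "(generated answer)"

    def retrieve(state: RAGState) -> dict:
        return {"context": "\n".join(search_stub(state["question"]))}

    def generate(state: RAGState) -> dict:
        prompt = f"Context:\n{state['context']}\n\nQuestion: {state['question']}"
        return {"answer": llm_stub(prompt)}

    graph = StateGraph(RAGState)
    graph.add_node("retrieve", retrieve)
    graph.add_node("generate", generate)
    graph.set_entry_point("retrieve")
    graph.add_edge("retrieve", "generate")
    graph.add_edge("generate", END)

    app = graph.compile()
    print(app.invoke({"question": "What is agentic RAG?"}))

An agentic version replaces the fixed retrieve-generate edge with a conditional edge, letting the model decide whether to call a tool or answer directly on each cycle.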

What You’ll Learn:
– A review of the basic prototyping patterns of GenAI, including Prompt Engineering, RAG, Fine-Tuning, and Agents

– Understand agents and agentic behavior as a pattern of reasoning and action

– The big ideas behind giving agents access to tools through Function Calling

– Why giving agents access to tools enables search and retrieval (e.g., RAG)

– Why you should choose specific open-source LLMs and embedding models over others

– The core ideas and constructs you’ll need to build RAG applications with LangChain

– How synthetic data can be created and evolved using the Evol-Instruct method

– How you should think about evaluating the output of RAG and Agentic systems

Prerequisite Knowledge (if required)
– Working knowledge of how to run Machine Learning Python code in Jupyter Notebooks
– Practical knowledge of how to use an interactive development environment with version control so that you can engage with our public GitHub repo. To test yourself, complete The AI Engineering Bootcamp Challenge

Workshop: LLMs for Leaders & Senior Product Managers

Presenter:
Hamza Farooq, CEO & Founder, Traversaal.ai

About the Speaker:
Hamza Farooq is a visionary entrepreneur, seasoned AI expert, and educator committed to shaping the future of technology. As the founder and CEO of Traversaal.ai, he leads the charge in revolutionizing the landscape of AI-driven solutions. With a rich background as a research scientist at Google and a distinguished adjunct professor at leading institutions like Stanford and UCLA, Hamza brings a wealth of experience to his role. Passionate about democratizing access to advanced AI capabilities, he spearheads initiatives aimed at empowering businesses and individuals to harness the power of AI for innovation and growth. Hamza’s unwavering dedication to excellence, coupled with his deep expertise in building large-scale ML models, positions him as a driving force behind Traversaal.ai’s mission to pioneer the next frontier of AI technology.

Talk Track: Workshop

Talk Technical Level: 1/7

Talk Abstract:
Become a leader in Gen AI Transformation. Learn from real-world case studies on how you can drive innovation within your company through LLMs.

What You’ll Learn:
In this workshop, participants will get a deep understanding of what Gen AI can do for them. We will do small exercises that provide you with the knowledge and skills to leverage Large Language Models (LLMs) to build an AI strategy for your larger organization.

The course will take you through the entire process of:

💡 How to identify GenAI opportunities, within existing or brand-new products

🧪 Testing your idea through user research, quickly and without wasting a ton of $$

🏗️ Building your MVP, just enough so that you can get early traction

💰 Selling your idea, across customers, VCs, and internal stakeholders

Prerequisite Knowledge (if required)
None; just basic knowledge of ChatGPT.

Workshop: Enabling GenAI Breakthroughs with Knowledge Graphs

Presenter:
Yizhi Yin, PhD, Senior Solutions Engineer, Neo4j

About the Speaker:
Yizhi Yin is a Sr. Solutions Engineer at Neo4j. After earning a Ph.D. in Genetics from the University of Iowa and completing several years of postdoctoral research in cancer biology, Yizhi transitioned into data science at Tamr, working with large and small organizations to adopt machine learning to enhance data quality. In her current role at Neo4j, Yizhi collaborates closely with clients to start their journey with graph databases and graph data science.

Talk Track: Workshop

Talk Technical Level: 5/7

Talk Abstract:
Join us for an immersive workshop to explore integrating Large Language Models (LLMs) with knowledge graphs using Neo4j, a leader in graph databases and graph analytics, with a Retrieval Augmented Generation (RAG) approach.

RAG has become the industry standard to overcome LLM limitations like reliance on generic data and lack of enterprise-specific information.

Neo4j’s dynamic and interconnected data structure makes it ideal for this, enabling accurate and contextually rich responses. Neo4j’s graph data science algorithms provide additional insights into the knowledge graph, enhancing its capacity to derive new relationships and uncover hidden patterns.

This hands-on session will guide you through building a personal messenger application for personalized product recommendations using RAG patterns.

By combining the capabilities of LLMs with Neo4j knowledge graph and graph data science, you will gain practical insights into creating sophisticated, intelligent GenAI applications tailored to your enterprise use case.
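
To ground the vector-search step, here is a rough sketch using the Neo4j Python driver, assuming Neo4j 5.x with a vector index already created; the index name, credentials, and embedding are all illustrative:

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

    query_embedding = [0.1] * 384  # in practice, embed the user's message first

    # Find the 5 nodes nearest to the query embedding, then return graph properties.
    records, _, _ = driver.execute_query(
        """
        CALL db.index.vector.queryNodes('product_embeddings', 5, $embedding)
        YIELD node, score
        RETURN node.name AS name, score
        """,
        embedding=query_embedding,
    )
    for r in records:
        print(r["name"], r["score"])

    driver.close()

The payoff of doing this in a graph database is the step after retrieval: the matched nodes can be expanded along relationships in the same Cypher query to enrich the context handed to the LLM.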

What You’ll Learn:
– High level introduction of Knowledge Graph
– Connecting and exploring knowledge graph with Cypher
– Implementing vector search with Neo4j
– Analyzing vector search results with graph patterns
– Enriching search results using graph data science methods
– Next steps in GenAI application development, plus an interactive Q&A session

Prerequisite Knowledge (if required)
Basic Python programming skills

Workshop: Leveraging Large Language Models to Build Enterprise AI

Presenters:
Rohit Saha, Machine Learning Scientist, Georgian | Kyryl Truskovskyi, Founder, ML Engineer, Kyryl Opens ML | Benjamin Ye, Machine Learning Scientist, Georgian | Angeline Yasodhara, Machine Learning Engineer, Georgian

About the Speakers:
Rohit is a Machine Learning Scientist on Georgian’s R&D team, where he works with portfolio companies to accelerate their AI roadmap. This spans scoping research problems, building ML models, and moving them into production. He has over 5 years of experience developing ML models across Vision, Language and Speech modalities. His latest project entails figuring out how businesses can leverage Large Language Models (LLMs) to address their needs. He holds a Master’s degree in Applied Computing from the University of Toronto, and has spent 2 years at MIT and Brown where he worked at the intersection of Computer Vision and domain adaptation.

Kyryl is a seasoned ML professional, currently based in Canada. With a rich 9-year background in ML, he has evolved from hands-on coding to architecting key ML business solutions.

Ben is a Machine Learning Engineer at Georgian, where he helps companies to implement the latest techniques from ML literature. He obtained his Bachelor’s from Ivey and Master’s from Penn. Prior to Georgian, he worked in quantitative investment research.

Angeline is a Machine Learning Scientist at Georgian, collaborating with companies to accelerate their AI product development. Before joining Georgian, she was a research assistant at the Vector Institute, working at the intersection of machine learning and healthcare, focusing on explainability and causality. From explainability, time series, outlier detection to LLMs, she applies the latest techniques to enhance product differentiation.

Talk Track: Workshop

Talk Technical Level: 3/7

Talk Abstract:
Generative AI is poised to disrupt multiple industries as enterprises rush to incorporate AI in their product offerings. The primary driver of this technology has been the ever-increasing sophistication of Large Language Models (LLMs) and their capabilities. In the first innings of Generative AI, a handful of third-party vendors have led the development of foundational LLMs and their adoption by enterprises. However, development of open-source LLMs has made massive strides lately, to the point where they compete with or even outperform their closed-source counterparts. This competition presents a unique opportunity to enterprises who are still navigating the trenches of Generative AI and how best to utilize LLMs to build enduring products. This workshop (i) showcases how open-source LLMs fare when compared to closed-source LLMs, (ii) provides an evaluation framework that enterprises can leverage to compare and contrast different LLMs, and (iii) introduces a toolkit to enable easy fine-tuning of LLMs followed by unit-testing (https://github.com/georgian-io/LLM-Finetuning-Toolkit).

What You’ll Learn:
By the end of this workshop, you will know how to create instruction-based datasets, fine-tune open-source LLMs via ablation studies and hyperparameter optimization, and unit-test fine-tuned LLMs.

Prerequisite Knowledge (if required)
Python + Familiarity with concepts such as prompt designing and LLMs

Workshop: Uncertainty Quantification with Conformal Prediction: A Path to Reliable ML Models

Presenter:
Mahdi Torabi Rad, President, MLBoost

About the Speaker:
Mahdi Torabi Rad, Ph.D., is a computational scientist, engineer, self-trained software developer, mentor, and YouTube content creator with over 10 years of experience in developing mathematical, statistical, and machine-learning models, as well as computer codes to predict complex phenomena. He has published in top-tier journals of Physics, Engineering, and ML and has extensive experience as an ML Lead in various DeepTech startups. Mahdi is also the YouTuber behind the channel MLBoost, known for its popular videos on ML topics, including Conformal Prediction, which have garnered tens of thousands of views in less than a year.

Talk Track: Workshop

Talk Technical Level: 5/7

Talk Abstract:
In today’s high-stakes applications ranging from medical diagnostics to industrial AI, understanding and quantifying uncertainty in machine learning models is paramount to prevent critical failures. Conformal prediction, also known as conformal inference, offers a practical and robust approach to create statistically sound uncertainty intervals for model predictions. What sets conformal prediction apart is its distribution-free validity, providing explicit guarantees without relying on specific data distributions or model assumptions.

This hands-on workshop reviews the core concepts of conformal prediction, demonstrating its applicability across diverse domains such as computer vision, natural language processing, and deep reinforcement learning. Participants will gain a deep understanding of how to leverage conformal prediction with pre-trained models like neural networks to generate reliable uncertainty sets with customizable confidence levels.

Throughout the workshop, we’ll explore practical theories, real-world examples, and Python code samples, including Jupyter notebooks for easy implementation on real data. From handling structured outputs and distribution shifts to addressing outliers and models that abstain, this workshop equips attendees with the tools to navigate complex machine learning challenges while ensuring model reliability and trustworthiness.
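
The core split-conformal recipe fits in a few lines. The sketch below uses random stand-in probabilities purely to keep it self-contained; the coverage guarantee, of course, only holds for exchangeable real calibration and test data:

    import numpy as np

    rng = np.random.default_rng(0)
    probs_cal = rng.dirichlet(np.ones(3), size=500)   # stand-in for model outputs
    y_cal = rng.integers(0, 3, size=500)              # stand-in calibration labels
    probs_test = rng.dirichlet(np.ones(3), size=5)

    alpha = 0.1  # target 90% coverage

    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - probs_cal[np.arange(len(y_cal)), y_cal]
    n = len(scores)
    qhat = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

    # Prediction set: every class whose probability clears the calibrated threshold.
    prediction_sets = probs_test >= (1.0 - qhat)
    print(prediction_sets)

Each test point gets a set of labels rather than a single prediction, and the threshold qhat is calibrated so the set contains the true label roughly 90% of the time, with no assumptions about the underlying model.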

What You’ll Learn:
– What sets conformal prediction apart from other methods of uncertainty quantification?
– The principles and theory behind conformal prediction for uncertainty quantification in machine learning
– Techniques for creating statistically rigorous uncertainty sets/intervals using conformal prediction
– How to apply conformal prediction to pre-trained machine learning models, such as neural networks, for reliable uncertainty quantification
– Hands-on experience with implementing conformal prediction in Python using libraries like scikit-learn and NumPy
– Examples showcasing the application of conformal prediction in diverse domains such as financial forecasting, natural language processing (NLP), and computer vision

Prerequisite Knowledge (if required)
Basic understanding of machine learning concepts, including model training and evaluation.
Familiarity with Python programming and libraries such as NumPy, Pandas, and scikit-learn

Talk: From Chaos to Control: Mastering ML Reproducibility at Scale

Presenter:
Amit Kesarwani, Director, Solution Engineering, lakeFS

About the Speaker:
Amit heads the solution architecture group at Treeverse, the company behind lakeFS, an open-source platform that delivers a Git-like experience to object-storage based data lakes.

Amit has 30+ years of experience as a technologist working with Fortune 100 companies as well as start-ups, designing and implementing technical solutions for complicated business problems.

As an entrepreneur, he launched a cloud offering to provide Data Warehouse as a Service. Amit holds a Master’s certificate in Project Management from George Washington University and a bachelor’s degree in Computer Science and Technology from Indian Institute of Technology (IIT), India. He is the inventor of the patent: System and Method for Managing and Controlling Data.

Talk Track: Virtual Talk

Talk Technical Level: 6/7

Talk Abstract:
Machine learning workflows are not linear: experimentation is an iterative, back-and-forth process between different components. It often involves experimenting with different data labeling techniques, data cleaning, preprocessing, and feature selection methods during model training, just to arrive at an accurate model.

Quality ML at scale is only possible when we can reproduce a specific iteration of the ML experiment, and this is where data is key. This means capturing the version of training data, ML code, and model artifacts at each iteration is mandatory. However, to efficiently version ML experiments without duplicating code, data, and models, data versioning tools are critical. Open-source tools like lakeFS make it possible to version all components of ML experiments without the need to keep multiple copies and, as an added benefit, save you storage costs as well.
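
Since lakeFS exposes an S3-compatible endpoint, a branch-per-experiment layout can be read with ordinary S3 tooling. Here is a minimal sketch with boto3; the endpoint, credentials, repository, and branch names are illustrative:

    import boto3

    # lakeFS's S3 gateway: the bucket is the repository, and the object key is
    # prefixed with the branch (or commit) you want to read from.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://lakefs.example.com",   # your lakeFS server
        aws_access_key_id="<lakefs-access-key>",
        aws_secret_access_key="<lakefs-secret-key>",
    )

    # Read the training data exactly as it was on the "experiment-42" branch.
    obj = s3.get_object(Bucket="ml-data", Key="experiment-42/train/features.parquet")
    data = obj["Body"].read()

Branch creation and commits go through the lakeFS API, UI, or lakectl; the point is that any iteration of an experiment stays addressable by branch or commit ID without duplicating the underlying data.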

What You’ll Learn:
In this talk, you will learn how to use a data versioning engine to intuitively and easily version your ML experiments and reproduce any specific iteration of the experiment.

This talk will demo through a live code example:
• Creating a basic ML experimentation framework with lakeFS (in a Jupyter notebook)
• Reproducing ML components from a specific iteration of an experiment
• Building intuitive, zero-maintenance experiment infrastructure

Workshop: Building a Production-Grade Document Understanding System with LLMs

Presenters:
Ville Tuulos, Co-Founder, Outerbounds | Eddie Mattia, Data Scientist, Outerbounds

About the Speakers:
Ville Tuulos is a co-founder and CEO of Outerbounds, a developer-friendly ML/AI platform. He has been developing infrastructure for ML and AI for over two decades in academia and as a leader at a number of companies. At Netflix, he led the ML infrastructure team that created Metaflow, a popular open-source, human-centric foundation for ML/AI systems. He is also the author of a book, Effective Data Science Infrastructure, published by Manning.

Eddie Mattia is a data scientist with a background in applied math and experience working in a variety of customer-facing and R&D roles. He currently works at Outerbounds to help customers and open-source practitioners build machine-learning systems and products, building AI developer tools and many applications on top of them.

Talk Track: Workshop

Talk Technical Level: 3/7

Talk Abstract:
LLMs can be used to process troves of unstructured text automatically, e.g. to discover patterns, summarize and classify content, and enhance existing ML models through embeddings.

In this workshop, we will build a realistic document understanding system that reads live, large-scale data continuously from a data warehouse, queries state-of-the-art LLMs (cost-) efficiently, and uses the results to power various use cases.

The system is powered by open-source Metaflow and open models, so you can apply the blueprint easily in your own environment.
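
For orientation, here is a minimal Metaflow skeleton of such a flow; the steps and the stubbed LLM call are illustrative, not the workshop’s actual pipeline:

    from metaflow import FlowSpec, step

    class DocUnderstandingFlow(FlowSpec):
        """A bite-sized sketch of a batch document-understanding flow."""

        @step
        def start(self):
            # In a real flow this would query the warehouse for new documents.
            self.docs = ["First document...", "Second document..."]
            self.next(self.summarize)

        @step
        def summarize(self):
            # Stub LLM call; swap in an open model served locally or via an API.
            self.summaries = [f"summary of: {d[:20]}" for d in self.docs]
            self.next(self.end)

        @step
        def end(self):
            print(self.summaries)

    if __name__ == "__main__":
        DocUnderstandingFlow()

Run with `python flow.py run`; scheduling the same flow on a cron-like trigger gives the continuous, warehouse-driven behavior described above.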

What You’ll Learn:
You will learn how to build and operate a realistic document understanding system powered by state-of-the-art LLMs.

Prerequisite Knowledge (if required)
Basic knowledge of Python

Workshop: AI Agents with Function Calling/Tool Use

Presenter:
Aniket Maurya, Developer Advocate, Lightning AI

About the Speaker:
Aniket is a machine learning software engineer with over 4 years of experience and a strong track record of developing and deploying machine learning models to production.

Talk Track: Workshop

Talk Technical Level: 4/7

Talk Abstract:
Learn about Agentic workflows with LLM tool use. Generate structured JSON output and execute external tools/functions.
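
A minimal function-calling round trip with the OpenAI Python SDK looks roughly like this; the model name and the weather tool are illustrative:

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def get_weather(city: str) -> str:
        # Stub tool; a real agent would call an actual API here.
        return json.dumps({"city": city, "forecast": "sunny"})

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather forecast for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What's the weather in Toronto?"}]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)

    # Execute the requested tool (this sketch assumes the model chose to call it).
    call = resp.choices[0].message.tool_calls[0]
    result = get_weather(**json.loads(call.function.arguments))

    # Feed the tool result back so the model can produce the final answer.
    messages.append(resp.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    print(final.choices[0].message.content)

Open-source LLMs follow the same structured-JSON pattern, differing mainly in how the tool schema is injected into the prompt.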

What You’ll Learn:
By the end of this workshop, you will learn how to build AI Agents and make use of function calling with OpenAI and open-source LLMs.

Prerequisite Knowledge (if required)
Python and LLM fundamentals

Talk: Generative AI Design Patterns

Presenter:
Krishnachaitanya Gogineni, Principal ML Engineer, Observe.AI

About the Speaker:

Krishna Gogineni is a Principal Engineer at Observe.AI, leading the company’s Generative AI stack. He specializes in integrating and productionizing large language models and other advanced architectures to solve product use cases, expertly balancing accuracy/quality with cost/latency. With a solid background in platform engineering and machine learning, Krishna excels in applying state-of-the-art research to industry use cases at scale, ensuring economic viability. Outside of work, he enjoys writing, attending local hackathons and startup events.

Talk Track: Research or Advanced Technical

Talk Technical Level: 3/7

Talk Abstract:
In this presentation, we delve into the expansive world of generative AI design patterns, selecting five pivotal examples to explore in depth: Retrieval Augmented Generation (RAG), Cluster Pulse, State Based Agents, Guard Rails, and Auto-Prompting. These patterns represent a subset of the broader spectrum of generative AI techniques, each offering unique insights into how we can enhance the capabilities and safety of AI systems. RAG provides a method for enriching AI responses with external data, Cluster Pulse fosters creativity in AI outputs, State Based Agents ensure AI actions are aligned with specific objectives, Guard Rails establish boundaries for AI behavior, and Auto-Prompting facilitates more dynamic and context-aware interactions with AI models.

The application of these patterns is demonstrated through the development of the Personalized K-8 Tutor, a project that showcases the synergistic potential of combining multiple generative AI design patterns. This educational tool leverages the strengths of each pattern to create a customized learning experience that adapts to the unique needs and preferences of individual students. By focusing on these five patterns, the presentation aims to provide attendees with a clear understanding of how generative AI can be harnessed to create innovative and impactful solutions, while also highlighting the vast array of other patterns waiting to be explored in the field of generative AI.

What You’ll Learn:
Understanding of three critical generative AI design patterns: Retrieval Augmented Generation (RAG) for enhancing AI responses with external information, State Based Agent for managing AI behavior, and Cluster Pulse for fostering AI creativity.
Insight into the practical application of these design patterns in building intelligent and adaptive AI systems.
Hands-on experience in integrating these patterns into a comprehensive project, the Personalized K-8 Tutor, showcasing their potential to revolutionize educational technology.
Appreciation of the importance of design patterns in structuring and optimizing generative AI solutions for real-world challenges.
Knowledge of how to leverage generative AI to create innovative, user-centric applications that push the boundaries of traditional software engineering.

Talk: Fuel iX: An Enterprise-Grade Gen AI Platform

Presenters:
Liz Lozinsky, Engineering Manager, Gen AI Platform Team, TELUS | Sara Ghaemi, Senior Software Developer, Gen AI Platform Team, TELUS

About the Speakers:
Liz is a Developer Advocate and Engineering Manager on the Platform Engineering team at TELUS. With a background in software development and a BASc from the University of Waterloo in Electrical Engineering with an option in Management Science, Liz leads a talented team focused on democratizing Gen AI for all. Known for her creativity, positivity, and a hint of whimsy, Liz approaches every challenge with enthusiasm and a spirit of adventure!

Sara is a Software Developer on the Gen AI Platform team at TELUS with a background in both research and practical applications of software systems. She is one of the lead developers working on the Generative AI initiative at TELUS. She holds a Master’s degree in Software Engineering and Intelligent Systems from the University of Alberta, for which she received the C.R. James Award for Best Master of Science Thesis from the university. Sara is deeply passionate about leveraging her expertise to make technology more accessible and beneficial to all.

Talk Track: Applied Study Cases

Talk Technical Level: 4/7

Talk Abstract:
Sharing how TELUS enabled Gen AI for everyone internally through Fuel iX to get the most value out of the latest advancements in generative AI, while ensuring flexibility, control, privacy, trust and joy!

TELUS has been making incredible strides in AI and we’re at the forefront of scaling generative AI for our team members and customers. We’ve developed a suite of internal generative AI platforms and tools to empower our team members to safely experiment with this technology, fostering a culture of innovation and trust. With over 24,000 team members already utilizing our AI-powered tools in ways we never imagined, it’s clear that the potential for generative AI to enhance productivity and efficiency is immense. By automating repetitive tasks and providing valuable assistance, our AI tools enable team members to focus on innovation and problem-solving, ultimately driving positive change and progress.

What You’ll Learn:
– Building out enterprise grade Gen AI platforms
– The importance of responsible AI and ethical considerations in the development of Gen AI applications
– TELUS’s efforts in scaling generative AI for team members and customers
– The significant impact of AI tools in enhancing productivity and efficiency

Talk: Agentic AI: Unlocking Emergent Behavior in LLMs for Adaptive Workflow Automation

Presenter:
Patrick Marlow, Staff Engineer, Vertex Applied AI Incubator, Google

About the Speaker:
Patrick is a Staff Engineer on the Vertex Applied AI Incubator team, where he focuses on building tooling and reusable assets to extend Google’s cutting-edge LLM technologies. He specializes in the Conversational AI ecosystem, working with products such as Vertex Agents, Vertex Search and Conversation, and Dialogflow CX. Previously, he was an AI Engineer in Google’s Professional Services Organization. Prior to Google, he was the Principal Data Architect at Levelset, a construction technology company, specializing in NLP Data Pipelines and OCR tooling. Patrick also worked as the Director of Engineering and Data Science for Amelia.ai, a Conversational AI company, delivering chat and voice bots to Fortune 500 clients across the Banking, Hospitality, Entertainment, and Retail verticals.

Patrick studied Electrical Engineering at the University of Texas at Austin.
He is the author and maintainer of several Open Source works including CXLint, an automatic linter for Dialogflow CX, and the Dialogflow CX Scripting API (SCRAPI) which is utilized by developers worldwide to supercharge bot building and analysis in Dialogflow CX.

Talk Track: Applied Study Cases

Talk Technical Level: 4/7

Talk Abstract:
Explore the emergent capabilities of “agentic” AI, where agents combine LLMs, reasoning loops, and tools to tackle complex workflows beyond the capabilities of LLMs alone. This session examines techniques for fostering this intelligence, enabling agents to adapt and self-direct their actions for unparalleled workflow automation. Attendees will leave with a deeper understanding of agentic AI and strategies for its impactful implementation.

What You’ll Learn:
A few things that attendees can take away from this session are:
– How agentic AI involves goal-oriented behavior, adaptability, and a degree of autonomy within AI systems.
– How AI systems can continuously learn and improve their workflow management capabilities
– How to explore practical techniques for implementing agentic AI within their workflows
– Best practices for system design and architecture of Agentic AI systems

Talk: AI for AI: Scotiabank's Award-Winning ML Models

Presenters:
Narcisse Torshizi, Data Scientist/Data Science Manager, Scotiabank | Andres Villegas, Data Scientist Manager, Scotiabank

About the Speakers:
Narcisse Torshizi is an NLP and AI Data Scientist at Scotiabank who holds a PhD in Neurolinguistics. She has 10 years of experience in data and analytics and specializes in the development of AI products and related LLM training.

Andres Villegas is a trilingual (English, French, Spanish) Conversational AI Expert with over 7 years of experience in designing voice and chatbot interactions across various industries. With a Master’s degree in Engineering and a Professional Development Certificate in Data Science and Machine Learning, he has led multiple successful projects. He is currently part of the global Machine Learning and Artificial Intelligence group at Scotiabank, where he has implemented the first customer-facing Gen AI feature and conducted extensive analytics to optimize chatbot performance. He is passionate about leveraging NLP, UX design, and automation to drive digital transformation and enhance user interactions.

Talk Track: Applied Study Cases

Talk Technical Level: 5/7

Talk Abstract:
A brief overview of four innovative models that power and improve a chatbot solution

Last year, Scotiabank was awarded the 2023 Digital Transformation Award by IT World Canada for our customer support chatbot. This achievement was made possible through the implementation of auxiliary AI models that helped the team develop the chatbot (“AI for AI”). These auxiliary models enabled the automation of conversation review, supported NLU training, and allowed for scalability as adoption of the chatbot increased. In addition, we have recently leveraged LLMs to summarize chatbot interactions when a session is handed over to an agent (when the chatbot cannot fulfil the customer’s request).

The chatbot solution that we have developed and deployed is the result of combining various machine learning and statistical models. These models handle distinct aspects of natural language understanding, processing, and evaluation. Launching a new chatbot with no previous data puts immense pressure on the sustaining teams to detect, classify, and fix issues in the chatbot. In the absence of out-of-the-box solutions, the team came up with the concept of building auxiliary AI models to sustain the chatbot (AI for AI). We will describe the major features and achievements of the four models that sustain our award-winning chatbot: Luigi, EVA, Peach, and GenAI summarization.

Luigi is a machine learning model that classifies the chatbot’s answers as either correct or incorrect against a confidence threshold. It uses a supervised learning approach to learn from the feedback of human reviewers and adjusts the threshold accordingly. EVA is a machine learning classification model that processes customer inputs to predict their intent. It works in conjunction with Google Dialogflow. Peach is a natural language understanding model focused on similarity analysis. It supports AI trainers by evaluating whether training utterances positively influence the performance of the Dialogflow machine learning model. Finally, our first GenAI feature summarizes the chat and captures key details of each conversation, including account information and transaction specifics. This information is then sent to an agent, reducing the initial workload by an impressive 71%. On average, summaries are a mere 48 words, compared to the original 166-word conversations.

By utilizing these models, the team tapped into a database of curated data, reducing manual labor by thousands of hours in maintaining the organization’s chatbot. This enabled the chatbot to rapidly enhance its performance after launch, resulting in improved call containment, customer satisfaction, and ultimately, recognition with the 2023 Digital Transformation Award.
These models handle different aspects of natural language processing and evaluation and work together to provide a seamless and satisfying customer experience.

What You’ll Learn:
Launching a new AI product with no previous data puts immense pressure on the sustaining teams to detect, classify, and fix issues in the model. In the absence of out-of-the-box solutions, teams can come up with the concept of building auxiliary AI models to sustain the conversational AI product (AI for AI).

Talk: Navigating LLM Deployment: Tips, Tricks and Techniques

Presenter:
Meryem Arik, CEO, TitanML

About the Speaker:
Meryem Arik is the Co-founder and CEO of TitanML, a pioneering company empowering enterprises to harness the full potential of Generative AI without compromising on data security.
A Forbes 30 Under 30 honoree, Meryem spent several years as a rates derivatives structurer at Barclays, covering major corporate, sovereign and supranational clients across EMEA. She holds a Master’s degree in Physics and Philosophy from the University of Oxford.

At TitanML, Meryem is on a mission to accelerate enterprise adoption of cutting-edge AI technologies by providing a secure and scalable foundation for building mission-critical applications. Under her leadership, TitanML has become the platform of choice for organizations seeking to leverage Generative AI while maintaining complete control over their sensitive data.

Talk Track: Applied Study Cases

Talk Technical Level: 3/7

Talk Abstract:
Unlock the power of self-hosted language models to drive innovation in financial services, healthcare, defense, and beyond. Join our expert session to learn industry best practices for optimizing, deploying, and monitoring these cutting-edge AI solutions in-house. Through real-world case studies, Meryem Arik, CEO of TitanML, will share practical tips to help you navigate the challenges and maximize the value of bringing large language models into your organization’s AI workflow. Walk away with the knowledge and confidence to leverage self-hosted LLMs to power your next-generation applications and maintain your competitive edge.

What You’ll Learn:
1. Best practices for optimizing, deploying, and monitoring self-hosted language models. The talk will provide practical tips and real-world case studies to guide attendees on effectively implementing these powerful AI solutions in-house.

2. Understanding the challenges and opportunities of self-hosted LLMs. Attendees will learn how to navigate the potential hurdles and maximize the value of integrating these cutting-edge language models into their organization’s AI workflow.

3. Confidence and knowledge to leverage self-hosted LLMs for building next-gen applications. The session aims to empower attendees with the insights and expertise needed to harness the power of self-hosted language models to drive innovation, maintain a competitive edge, and create applications in critical industries like finance, healthcare, and defense.

In essence, the talk focuses on equipping attendees with the practical know-how, strategic understanding, and inspiration to successfully adopt and utilize self-hosted LLMs within their enterprises to power transformative AI solutions.

Talk: Modular Solutions for Knowledge Management at scale in RAG Systems

Presenters:
Adam Kerr, Senior Machine Learning Engineer, Bell Canada | Lyndon Quadros, Senior Manager, Artificial Intelligence, Bell Canada

About the Speakers:
Adam is a senior machine learning engineer on Bell Canada’s Customer Op’s DEAI team and one of the key architects of Bell Canada’s ML Platform, Maverick. His primary objective: develop an opinionated set of products and configurations to deploy end-to-end machine learning solutions using recommended infrastructure, targeted at teams starting out on their ML journeys.

Lyndon Quadros has led teams that build and manage ML, AI, and Data Engineering solutions on the cloud at an enterprise scale, and currently leads an MLOps and ML Engineering team at Bell. His current work focuses on Generative AI applications, AI infrastructure, and MLOps standards and processes.

Talk Track: Research or Advanced Technical

Talk Technical Level: 6/7

Talk Abstract:
An important component of any RAG system or application is the underlying knowledge base that the bot or application uses.

At Bell, we have built and adopted modular document embedding pipelines that allow some level of customization in the processing, ingestion and indexing of documents so that the peculiarities of various use cases and their raw documents can be efficiently indexed and used in their RAG applications. These pipelines also support both batch and incremental updates to the knowledge bases, with capabilities for automatically updating the indexes when documents are added to or removed from their source location. The modular nature also enables these pipelines to integrate with various document sources. These are supplemented with processes and conventions to ensure the efficient management and governance of the indexes at scale, providing a standardized framework for large-scale RAG applications at an enterprise level.
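
In generic terms, the batch-plus-incremental idea reduces to diffing the source against the index state on each run. Below is a self-contained sketch; the Source, Index, and embed stand-ins are hypothetical, not Bell’s actual components:

    from dataclasses import dataclass

    @dataclass
    class Doc:
        id: str
        checksum: str
        text: str

    class Source:
        def __init__(self, docs): self.docs = {d.id: d for d in docs}
        def list(self): return list(self.docs.values())
        def fetch(self, doc_id): return self.docs[doc_id]

    class Index:
        def __init__(self): self.items = {}
        def upsert(self, doc_id, vec): self.items[doc_id] = vec
        def delete(self, doc_id): self.items.pop(doc_id, None)

    def embed(text):           # stub embedding function
        return [float(len(text))]

    def sync(index, source, state):
        """One pipeline run: diff source checksums against last-known state."""
        current = {d.id: d.checksum for d in source.list()}
        for doc_id, checksum in current.items():
            if state.get(doc_id) != checksum:      # new or changed document
                index.upsert(doc_id, embed(source.fetch(doc_id).text))
        for doc_id in set(state) - set(current):   # removed at the source
            index.delete(doc_id)
        return current                             # becomes state for the next run

    src = Source([Doc("a", "v1", "hello"), Doc("b", "v1", "world")])
    idx = Index()
    state = sync(idx, src, {})                     # initial batch load
    src.docs["a"] = Doc("a", "v2", "hello again")
    state = sync(idx, src, state)                  # incremental run picks up the change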

What You’ll Learn:
How to approach Embedding Pipelines and Document Management for RAGs in a hybrid batch/incremental fashion

Talk Title: From Concept to Value: Framework for Designing Generative Applications for the Enterprise

Presenter:
Vik Pant, Partner and Chief Data Scientist, PwC Canada

About the Speaker:
Vik is a researcher and practitioner of conceptual modelling for game-theoretic optimization. His scholarship and research are focused on strategic coopetition in complex multi-agent systems.

He is an Adjunct Professor in the Faculty of Information at the University of Toronto and the Department of Geography, Environment, and Geomatics at the University of Ottawa.
His academic research has been published in numerous peer-reviewed scholarly journals. These include the Journal of Strategic E-Commerce, the Journal of Electronic Commerce in Organizations, Information Security Journal: A Global Perspective, the Journal of Information System Security, Business Process Management Journal, Complex Systems Informatics and Modeling Quarterly, and the Journal of Information Technology Education.

He has also presented his academic research at refereed scholarly conferences and juried workshops including the Practice of Enterprise Modelling, the International Conference on Software Business, the International Conference on Information Resources Management, and the International Conference on Perspectives in Business Informatics Research.

Vik earned a doctorate from the Faculty of Information at the University of Toronto, where his thesis was unanimously accepted by the examination committee as-is and without any changes; a master’s degree in business administration with distinction from the University of London; a master’s degree in information technology from Harvard University, where he received the Dean’s List Academic Achievement Award; and an undergraduate degree in management information systems from Villanova University.

Talk Abstract:
The promise of generative AI is undeniable, yet many organizations struggle to translate impressive prototypes into impactful, real-world applications. This disconnect often arises from treating generative application development as an exploratory, data science-driven exercise (akin to an academic laboratory setting) rather than a strategic software engineering endeavor aligned with business goals.

This talk showcases a conceptual modeling framework that transforms generative AI development from an academic lab exercise into a robust generative factory capability. The framework highlights the tight coupling needed between technical objectives and business goals, enabling organizations to align their generative AI initiatives with their strategic imperatives.

Talk Title: Detecting AI-generated Content and Verifying Human Content with GPTZero

Presenter:
Alex Cui, CTO & Co-Founder, GPTZero

About the Speaker:
Alex Cui is co-founder and CTO of GPTZero, the world’s leading platform for detecting AI-generated text. He believes in bringing people together to create an internet where authentic human content can thrive. Alex has also presented at Capitol Hill, to Facebook’s Policy team, and at the Department of State about how we can use technology to counter disinformation and political polarization. Previously, he worked on a machine learning engineering team at Facebook and in R&D at Uber’s self-driving division, and published in leading machine learning conferences on understanding how people interact.

Talk Abstract:
Detecting AI-generated Content and Verifying Human Content with GPTZero

Talk Title: Compute Strategies for Generative AI

Presenter:
Avin Regmi, Engineering Manager ML, Spotify

About the Speaker:
Avin is an Engineering Manager at Spotify, leading the ML training and compute team for the Hendrix ML Platform. His areas of expertise include training and serving ML models at scale, ML infrastructure, and growing high-performing teams. The Hendrix ML Platform is now integral to Spotify’s core functions, such as search, ranking, and recommendations. Prior to joining Spotify, Avin led the ML Platform team at Bell. In this role, he focused on distributed training and serving LLMs. Additionally, Avin is the founder of Panini AI, which is a cloud solution that serves ML models at low latency using adaptive distributed batching. Outside of work, Avin practices yoga and meditation and enjoys high-altitude alpine climbing and hiking.

Talk Abstract:
This talk covers distributing compute strategically to maximize resource utilization and minimize wastage, and Spotify’s investment in the Hendrix ML Platform, which streamlines AI training and serving for models with over 70 billion parameters.


Who Attends

Attendees include data practitioners, researchers/academics, and business leaders.

2023 Event Demographics

Demographics tracked included the share of highly qualified practitioners, attendees currently working in industry, attendees looking for solutions, attendees currently hiring, and attendees actively job-searching.

2023 Technical Background

Expert: 17.5%, Advanced: 47.3%, Intermediate: 21.1%, Beginner: 5.6%

Business Leaders: C-Level Executives, Project Managers, and Product Owners will get to explore best practices, methodologies, principles, and practices for achieving ROI.

Engineers, Researchers, Data Practitioners: Will get a better understanding of the challenges, solutions, and ideas being offered via breakouts & workshops on Natural Language Processing, Neural Nets, Reinforcement Learning, Generative Adversarial Networks (GANs), Evolution Strategies, AutoML, and more.

Job Seekers: Will have the opportunity to network virtually and meet 30+ top AI companies.

Ignite: What is an Ignite Talk?

Ignite is an innovative and fast-paced style used to deliver a concise presentation.

During an Ignite Talk, presenters discuss their research using 20 image-centric slides which automatically advance every 15 seconds.

The result is a fun and engaging five-minute presentation.

You can see all our speakers and full agenda here
