Welcome to TMLS 2024
Virtual Workshops & Talks - July 10th

Please click each session's Join Now button to access it

Fuel iX: An Enterprise Grade Gen AI Platform

Liz Lozinsky

Engineering Manager, Gen AI Platform Team, TELUS

Sara Ghaemi

Senior Software Developer, Gen AI Platform Team, TELUS

Scaling Vector Database Usage Without Breaking the Bank: Quantization and Adaptive Retrieval

Zain Hasan

Senior ML Developer Advocate, Weaviate

AI for AI: Scotiabank's Award-Winning ML Models

Narcisse Torshizi

Data Scientist/Data Science Manager, Scotiabank

Andres Villegas

Data Scientist Manager, Scotiabank

Agentic AI: Unlocking Emergent Behavior in LLMs for Adaptive Workflow Automation

Patrick Marlow

Staff Engineer, Vertex Applied AI Incubator, Google

Generative AI Design Patterns

Krishnachaitanya Gogineni

Principal ML Engineer, Observe.AI

Building a Production-Grade Document Understanding System with LLMs

Ville Tuulos

Co-Founder, Outerbounds

Eddie Mattia

Data Scientist, Outerbounds

AI Agents with Function Calling/Tool Use

Aniket Maurya

Developer Advocate, Lightning AI

Uncertainty Quantification with Conformal Prediction: A Path to Reliable ML Models

Mahdi Torabi Rad

President, MLBoost

Leveraging Large Language Models to Build Enterprise AI

Rohit Saha

Machine Learning Scientist, Georgian

Kyryl Truskovskyi

Founder, ML Engineer, Kyryl Opens ML

Benjamin Ye

Machine Learning Scientist, Georgian

Angeline Yasodhara

Machine Learning Engineer, Georgian

From Chaos to Control: Mastering ML Reproducibility at Scale

Amit Kesarwani

Director, Solution Engineering, lakeFS

Navigating LLM Deployment: Tips, Tricks and Techniques

Meryem Arik

CEO, TitanML

Modular Solutions for Knowledge Management at Scale in RAG Systems

Adam Kerr

Senior Machine Learning Engineer, Bell Canada

Lyndon Quadros

Senior Manager, Artificial Intelligence, Bell Canada

Talk: Fuel iX: An Enterprise Grade Gen AI Platform

Presenters:
Liz Lozinsky, Engineering Manager, Gen AI Platform Team, TELUS | Sara Ghaemi, Senior Software Developer, Gen AI Platform Team, TELUS

About the Speakers:
Liz is a Developer Advocate and Engineering Manager on the Platform Engineering team at TELUS. With a background in software development and a BASc from the University of Waterloo in Electrical Engineering with an option in Management Science, Liz leads a talented team focused on democratizing Gen AI for all. Known for her creativity, positivity, and a hint of whimsy, Liz approaches every challenge with enthusiasm and a spirit of adventure!

Sara is a Software Developer on the Gen AI Platform team at TELUS with a background in both research and practical applications of software systems. She is one of the lead developers working on the Generative AI initiative at TELUS. She holds a Master’s degree in Software Engineering and Intelligent Systems from the University of Alberta, for which she received the university’s C.R. James Award for Best Master of Science Thesis. Sara is deeply passionate about leveraging her expertise to make technology more accessible and beneficial to all.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
Sharing how TELUS enabled Gen AI for everyone internally through Fuel iX to get the most value out of the latest advancements in generative AI, while ensuring flexibility, control, privacy, trust and joy!

TELUS has been making incredible strides in AI and we’re at the forefront of scaling generative AI for our team members and customers. We’ve developed a suite of internal generative AI platforms and tools to empower our team members to safely experiment with this technology, fostering a culture of innovation and trust. With over 24,000 team members already utilizing our AI-powered tools in ways we never imagined, it’s clear that the potential for generative AI to enhance productivity and efficiency is immense. By automating repetitive tasks and providing valuable assistance, our AI tools enable team members to focus on innovation and problem-solving, ultimately driving positive change and progress.

What You’ll Learn:
– Building out enterprise-grade Gen AI platforms
– The importance of responsible AI and ethical considerations in the development of Gen AI applications
– TELUS’s efforts in scaling generative AI for team members and customers
– The significant impact of AI tools in enhancing productivity and efficiency

Talk: Navigating LLM Deployment: Tips, Tricks and Techniques

Presenter:
Meryem Arik, CEO, TitanML

About the Speaker:
Meryem Arik is the Co-founder and CEO of TitanML, a pioneering company empowering enterprises to harness the full potential of Generative AI without compromising on data security.
A Forbes 30 Under 30 honoree, Meryem spent several years as a rates derivatives structurer at Barclays, covering major corporate, sovereign and supranational clients across EMEA. She holds a Master’s degree in Physics and Philosophy from the University of Oxford.

At TitanML, Meryem is on a mission to accelerate enterprise adoption of cutting-edge AI technologies by providing a secure and scalable foundation for building mission-critical applications. Under her leadership, TitanML has become the platform of choice for organizations seeking to leverage Generative AI while maintaining complete control over their sensitive data.

Talk Track: Applied Case Studies

Talk Technical Level: 3/7

Talk Abstract:
Unlock the power of self-hosted language models to drive innovation in financial services, healthcare, defense, and beyond. Join our expert session to learn industry best practices for optimizing, deploying, and monitoring these cutting-edge AI solutions in-house. Through real-world case studies, Meryem Arik, CEO of TitanML, will share practical tips to help you navigate the challenges and maximize the value of bringing large language models into your organization’s AI workflow. Walk away with the knowledge and confidence to leverage self-hosted LLMs to power your next-generation applications and maintain your competitive edge.

What You’ll Learn:
1. Best practices for optimizing, deploying, and monitoring self-hosted language models. The talk will provide practical tips and real-world case studies to guide attendees on effectively implementing these powerful AI solutions in-house.

2. Understanding the challenges and opportunities of self-hosted LLMs. Attendees will learn how to navigate the potential hurdles and maximize the value of integrating these cutting-edge language models into their organization’s AI workflow.

3. Confidence and knowledge to leverage self-hosted LLMs for building next-gen applications. The session aims to empower attendees with the insights and expertise needed to harness the power of self-hosted language models to drive innovation, maintain a competitive edge, and create applications in critical industries like finance, healthcare, and defense.

In essence, the talk focuses on equipping attendees with the practical know-how, strategic understanding, and inspiration to successfully adopt and utilize self-hosted LLMs within their enterprises to power transformative AI solutions.

Talk: AI for AI: Scotiabank's Award-Winning ML Models

Presenters:
Narcisse Torshizi, Data Scientist/Data Science Manager, Scotiabank | Andres Villegas, Data Scientist Manager, Scotiabank

About the Speakers:
Narcisse Torshizi is an NLP and AI Data Scientist at Scotiabank who holds a PhD in Neurolinguistics. She has 10 years of experience in data and analytics and specializes in the development of AI products and related LLM training.

Andres Villegas is a trilingual (English, French, Spanish) Conversational AI expert with over 7 years of experience in designing voice and chatbot interactions across various industries. With a Master’s degree in Engineering and a Professional Development Certificate in Data Science and Machine Learning, he has led multiple successful projects. He is currently part of the global Machine Learning and Artificial Intelligence group at Scotiabank, where he implemented the first customer-facing Gen AI feature and conducted extensive analytics to optimize chatbot performance. He is passionate about leveraging NLP, UX design, and automation to drive digital transformation and enhance user interactions.

Talk Track: Applied Case Studies

Talk Technical Level: 5/7

Talk Abstract:
A brief overview of four innovative models that power and improve a chatbot solution

Last year, Scotiabank was awarded the 2023 Digital Transformation Award by IT World Canada for our customer support chatbot. This achievement was made possible through the implementation of auxiliary AI models that helped the team develop the chatbot (“AI for AI”). These auxiliary models enabled the automation of conversation review, supported NLU training, and allowed for scalability as adoption of the chatbot increased. In addition, we have recently leveraged LLMs to summarize chatbot interactions when a session is handed over to an agent (when the chatbot cannot fulfil the customer’s request).

The chatbot solution that we have developed and deployed is the result of combining various machine learning and statistical models. These models handle distinct aspects of natural language understanding, processing, and evaluation. Launching a new chatbot with no previous data puts immense pressure on the sustaining teams to detect, classify, and fix issues in the chatbot. In the absence of out-of-the-box solutions, the team came up with the concept of building auxiliary AI models to sustain the chatbot (AI for AI). We will describe the major features and achievements of the four models that sustain our award-winning chatbot: Luigi, EVA, Peach, and GenAI summarization.

Luigi is a machine learning model that tunes the confidence threshold used to classify the chatbot’s answers as either correct or incorrect. It uses a supervised learning approach to learn from the feedback of human reviewers and adjust the threshold accordingly. EVA is a machine learning classification model that processes customer inputs to predict their intent; it works in conjunction with Google Dialogflow. Peach is a natural language understanding model focused on similarity analysis; it supports AI trainers by evaluating whether training utterances positively influence the performance of the Dialogflow machine learning model. Finally, our first GenAI feature summarizes the chat and captures key details of each conversation, including account information and transaction specifics. This information is then sent to an agent, reducing the initial workload by an impressive 71%. On average, summaries are a mere 48 words, compared to the original 166-word conversations.

By utilizing these models, the team tapped into a database of curated data, reducing manual labor by thousands of hours in maintaining the organization’s chatbot. This enabled the chatbot to rapidly enhance its performance after launch, resulting in improved call containment, customer satisfaction, and ultimately, recognition with the 2023 Digital Transformation Award.
These models handle different aspects of natural language processing and evaluation and work together to provide a seamless and satisfying customer experience.

What You’ll Learn:
Launching a new AI product with no previous data puts immense pressure on the sustaining teams to detect, classify, and fix issues in the model. In the absence of out-of-the-box solutions, teams can come up with the concept of building auxiliary AI models to sustain the conversational AI product (AI for AI).

Talk: Agentic AI: Unlocking Emergent Behavior in LLMs for Adaptive Workflow Automation

Presenter:
Patrick Marlow, Staff Engineer, Vertex Applied AI Incubator, Google

About the Speaker:
Patrick is a Staff Engineer on the Vertex Applied AI Incubator team, where he focuses on building tooling and reusable assets to extend Google’s cutting-edge LLM technologies. He specializes in the Conversational AI ecosystem, working with products such as Vertex Agents, Vertex Search and Conversation, and Dialogflow CX. Previously, he was an AI Engineer in Google’s Professional Services Organization. Prior to Google, he was the Principal Data Architect at Levelset, a construction technology company, specializing in NLP Data Pipelines and OCR tooling. Patrick also worked as the Director of Engineering and Data Science for Amelia.ai, a Conversational AI company, delivering chat and voice bots to Fortune 500 clients across the Banking, Hospitality, Entertainment, and Retail verticals.

Patrick studied Electrical Engineering at the University of Texas at Austin.
He is the author and maintainer of several Open Source works including CXLint, an automatic linter for Dialogflow CX, and the Dialogflow CX Scripting API (SCRAPI) which is utilized by developers worldwide to supercharge bot building and analysis in Dialogflow CX.

Talk Track: Applied Case Studies

Talk Technical Level: 4/7

Talk Abstract:
Explore the emergent capabilities of “agentic” AI, where agents combine LLMs, reasoning loops, and tools to tackle complex workflows beyond the capabilities of LLMs alone. This session examines techniques for fostering this intelligence, enabling agents to adapt and self-direct their actions for unparalleled workflow automation. Attendees will leave with a deeper understanding of agentic AI and strategies for its impactful implementation.

What You’ll Learn:
A few things that attendees can take away from this session are:
– How agentic AI involves goal-oriented behavior, adaptability, and a degree of autonomy within AI systems.
– How AI systems can continuously learn and improve their workflow management capabilities
– How to explore practical techniques for implementing agentic AI within their workflows
– Best practices for system design and architecture of Agentic AI systems
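The reasoning loop at the heart of such agents can be sketched in a few lines. This is a toy illustration, not Google's implementation: the "LLM" is a hard-coded stub policy, and the tool names and arithmetic task are invented for the example.

```python
# Minimal sketch of an agentic reasoning loop. A real agent would
# prompt an LLM with the history and parse its chosen action; here a
# stub policy stands in so the control flow is easy to follow.

TOOLS = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def stub_llm(history):
    """Toy policy: plan two tool calls, then emit a final answer."""
    if not any(step[0] == "add" for step in history):
        return {"action": "add", "args": (2, 3)}
    if not any(step[0] == "multiply" for step in history):
        last_observation = history[-1][1]
        return {"action": "multiply", "args": (last_observation, 10)}
    return {"action": "final", "answer": history[-1][1]}

def run_agent(max_steps=5):
    history = []  # (action, observation) pairs the agent can reflect on
    for _ in range(max_steps):
        decision = stub_llm(history)
        if decision["action"] == "final":
            return decision["answer"]
        observation = TOOLS[decision["action"]](*decision["args"])
        history.append((decision["action"], observation))
    raise RuntimeError("agent did not converge")

print(run_agent())  # (2 + 3) * 10 = 50
```

The loop, not any single call, is what produces the adaptive behavior: each observation feeds back into the next decision.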

Talk: Generative AI Design Patterns

Presenter:
Krishnachaitanya Gogineni, Principal ML Engineer, Observe.AI

About the Speaker:

Krishna Gogineni is a Principal Engineer at Observe.AI, leading the company’s Generative AI stack. He specializes in integrating and productionizing large language models and other advanced architectures to solve product use cases, expertly balancing accuracy/quality with cost/latency. With a solid background in platform engineering and machine learning, Krishna excels in applying state-of-the-art research to industry use cases at scale, ensuring economic viability. Outside of work, he enjoys writing, attending local hackathons and startup events.

Talk Track: Research or Advanced Technical

Talk Technical Level: 3/7

Talk Abstract:
In this presentation, we delve into the expansive world of generative AI design patterns, selecting five pivotal examples to explore in depth: Retrieval Augmented Generation (RAG), Cluster Pulse, State Based Agents, Guard Rails, and Auto-Prompting. These patterns represent a subset of the broader spectrum of generative AI techniques, each offering unique insights into how we can enhance the capabilities and safety of AI systems. RAG provides a method for enriching AI responses with external data, Cluster Pulse fosters creativity in AI outputs, State Based Agents ensure AI actions are aligned with specific objectives, Guard Rails establish boundaries for AI behavior, and Auto-Prompting facilitates more dynamic and context-aware interactions with AI models.

The application of these patterns is demonstrated through the development of the Personalized K-8 Tutor, a project that showcases the synergistic potential of combining multiple generative AI design patterns. This educational tool leverages the strengths of each pattern to create a customized learning experience that adapts to the unique needs and preferences of individual students. By focusing on these five patterns, the presentation aims to provide attendees with a clear understanding of how generative AI can be harnessed to create innovative and impactful solutions, while also highlighting the vast array of other patterns waiting to be explored in the field of generative AI.
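Of the five patterns, RAG is the simplest to show in miniature. The sketch below uses word-overlap cosine similarity in place of a real embedding model, and the documents and template are our own illustrative stand-ins, not material from the talk.

```python
# Toy illustration of the Retrieval Augmented Generation pattern:
# retrieve the most relevant documents for a query, then build a
# prompt grounded in that context.
import math
from collections import Counter

DOCS = [
    "Photosynthesis converts sunlight into chemical energy.",
    "The water cycle moves water between oceans, air, and land.",
    "Fractions represent parts of a whole number.",
]

def embed(text):
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does the water cycle work?"))
```

The same retrieve-then-prompt shape applies whether the index holds three strings or millions of vectors in a database.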

What You’ll Learn:
Understanding of three critical generative AI design patterns: Retrieval Augmented Generation (RAG) for enhancing AI responses with external information, State Based Agent for managing AI behavior, and Cluster Pulse for fostering AI creativity.
Insight into the practical application of these design patterns in building intelligent and adaptive AI systems.
Hands-on experience in integrating these patterns into a comprehensive project, the Personalized K-8 Tutor, showcasing their potential to revolutionize educational technology.
Appreciation of the importance of design patterns in structuring and optimizing generative AI solutions for real-world challenges.
Knowledge of how to leverage generative AI to create innovative, user-centric applications that push the boundaries of traditional software engineering.

Workshop: Building a Production-Grade Document Understanding System with LLMs

Presenters:
Ville Tuulos, Co-Founder, Outerbounds | Eddie Mattia, Data Scientist, Outerbounds

About the Speakers:
Ville Tuulos is a co-founder and CEO of Outerbounds, a developer-friendly ML/AI platform. He has been developing infrastructure for ML and AI for over two decades in academia and as a leader at a number of companies. At Netflix, he led the ML infrastructure team that created Metaflow, a popular open-source, human-centric foundation for ML/AI systems. He is also the author of a book, Effective Data Science Infrastructure, published by Manning.

Eddie Mattia is a data scientist with a background in applied math and experience working in a variety of customer-facing and R&D roles. He currently works at Outerbounds, helping customers and open-source practitioners build machine-learning systems and products, AI developer tools, and many applications on top of them!

Talk Track: Workshop

Talk Technical Level: 3/7

Talk Abstract:
LLMs can be used to process troves of unstructured text automatically, e.g. to discover patterns, summarize and classify content, and enhance existing ML models through embeddings.

In this workshop, we will build a realistic document understanding system that reads live, large-scale data continuously from a data warehouse, queries state-of-the-art LLMs (cost-) efficiently, and uses the results to power various use cases.

The system is powered by open-source Metaflow and open models, so you can apply the blueprint easily in your own environment.

What You’ll Learn:
You will learn how to build and operate a realistic document understanding system powered by state-of-the-art LLMs.

Prerequisite Knowledge (if required)
Basic knowledge of Python

Workshop: AI Agents with Function Calling/Tool Use

Presenter:
Aniket Maurya, Developer Advocate, Lightning AI

About the Speaker:
Aniket is a Machine Learning Software Engineer with over 4 years of experience and a strong track record in developing and deploying machine learning models to production.

Talk Track: Workshop

Talk Technical Level: 4/7

Talk Abstract:
Learn about Agentic workflows with LLM tool use. Generate structured JSON output and execute external tools/functions.

What You’ll Learn:
By the end of this workshop you will learn how to build AI Agents and make use of function calling with OpenAI and open-source LLMs
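The core mechanic, independent of any provider SDK, can be sketched as follows. The tool name, schema shape, and hard-coded model reply below are illustrative assumptions, not the workshop's actual code.

```python
# Minimal sketch of function calling / tool use: the model is asked to
# reply with structured JSON naming a tool and its arguments; the
# application validates the call and dispatches to the matching function.
import json

def get_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Pretend this JSON came back from an LLM's tool-call response.
model_output = '{"name": "get_weather", "arguments": {"city": "Toronto"}}'

call = json.loads(model_output)
if call["name"] not in TOOLS:
    raise ValueError(f"unknown tool: {call['name']}")
result = TOOLS[call["name"]](**call["arguments"])
print(result)
```

With a real model, the only change is that `model_output` comes from the API response rather than a literal; validating the name and arguments before dispatching stays the same.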

Prerequisite Knowledge (if required)
Python and LLM fundamentals

Workshop: Uncertainty Quantification with Conformal Prediction: A Path to Reliable ML Models

Presenter:
Mahdi Torabi Rad, President, MLBoost

About the Speaker:
Mahdi Torabi Rad, Ph.D. is a computational scientist, engineer, self-trained software developer, mentor, and YouTube content creator with over 10 years of experience in developing mathematical, statistical, and machine-learning models, as well as computer codes to predict complex phenomena. He has published in top-tier journals of Physics, Engineering, and ML and has extensive experience as an ML Lead in various DeepTech startups. Mahdi is also the YouTuber behind the channel MLBoost, known for its popular videos on ML topics, including Conformal Prediction, which have garnered tens of thousands of views in less than a year.

Talk Track: Workshop

Talk Technical Level: 5/7

Talk Abstract:
In today’s high-stakes applications ranging from medical diagnostics to industrial AI, understanding and quantifying uncertainty in machine learning models is paramount to prevent critical failures. Conformal prediction, also known as conformal inference, offers a practical and robust approach to create statistically sound uncertainty intervals for model predictions. What sets conformal prediction apart is its distribution-free validity, providing explicit guarantees without relying on specific data distributions or model assumptions.

This hands-on workshop reviews the core concepts of conformal prediction, demonstrating its applicability across diverse domains such as computer vision, natural language processing, and deep reinforcement learning. Participants will gain a deep understanding of how to leverage conformal prediction with pre-trained models like neural networks to generate reliable uncertainty sets with customizable confidence levels.

Throughout the workshop, we’ll explore practical theories, real-world examples, and Python code samples, including Jupyter notebooks for easy implementation on real data. From handling structured outputs and distribution shifts to addressing outliers and models that abstain, this workshop equips attendees with the tools to navigate complex machine learning challenges while ensuring model reliability and trustworthiness.

What You’ll Learn:
– What sets conformal prediction apart from other methods of uncertainty quantification?
– The principles and theory behind conformal prediction for uncertainty quantification in machine learning
– Techniques for creating statistically rigorous uncertainty sets/intervals using conformal prediction
– How to apply conformal prediction to pre-trained machine learning models, such as neural networks, for reliable uncertainty quantification
– Hands-on experience with implementing conformal prediction in Python using libraries like scikit-learn and NumPy
– Examples showcasing the application of conformal prediction in diverse domains such as financial forecasting, natural language processing (NLP), and computer vision
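The split conformal recipe for regression fits in a few lines. The tiny calibration set and the linear "model" below are invented for illustration; the quantile formula is the standard one for split conformal prediction.

```python
# Minimal sketch of split conformal prediction for regression:
# score a held-out calibration set, take an empirical quantile of the
# absolute residuals, and wrap new predictions in an interval of that
# half-width. The coverage guarantee needs no distributional
# assumptions beyond exchangeability.
import math

def predict(x):
    return 2.0 * x  # stand-in for any pre-trained model

# Calibration pairs (x, true y) held out from training.
calibration = [(1, 2.1), (2, 3.8), (3, 6.3), (4, 8.4), (5, 9.7)]

alpha = 0.2  # target miscoverage rate, i.e. 80% intervals
scores = sorted(abs(y - predict(x)) for x, y in calibration)

# Conformal quantile rank: ceil((n + 1) * (1 - alpha)), clipped to n.
n = len(scores)
rank = min(math.ceil((n + 1) * (1 - alpha)), n)
qhat = scores[rank - 1]

x_new = 6
lo, hi = predict(x_new) - qhat, predict(x_new) + qhat
print(f"prediction interval for x={x_new}: [{lo:.2f}, {hi:.2f}]")
```

The same pattern generalizes to classification by replacing absolute residuals with a different nonconformity score.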

Prerequisite Knowledge (if required)
Basic understanding of machine learning concepts, including model training and evaluation.
Familiarity with Python programming and libraries such as NumPy, Pandas, and scikit-learn

Workshop: Leveraging Large Language Models to Build Enterprise AI

Presenters:
Rohit Saha, Machine Learning Scientist, Georgian | Kyryl Truskovskyi, Founder, ML Engineer, Kyryl Opens ML | Benjamin Ye, Machine Learning Scientist, Georgian | Angeline Yasodhara, Machine Learning Engineer, Georgian

About the Speakers:
Rohit is a Machine Learning Scientist on Georgian’s R&D team, where he works with portfolio companies to accelerate their AI roadmap. This includes scoping research problems to building ML models to moving them into production. He has over 5 years of experience developing ML models across Vision, Language and Speech modalities. His latest project entails figuring out how businesses can leverage Large Language Models (LLMs) to address their needs. He holds a Master’s degree in Applied Computing from the University of Toronto, and has spent 2 years at MIT and Brown where he worked at the intersection of Computer Vision and domain adaptation.

Kyryl is a seasoned ML professional, currently based in Canada. With a rich 9-year background in ML, he has evolved from hands-on coding to architecting key ML business solutions.

Ben is a Machine Learning Engineer at Georgian, where he helps companies to implement the latest techniques from ML literature. He obtained his Bachelor’s from Ivey and Master’s from Penn. Prior to Georgian, he worked in quantitative investment research.

Angeline is a Machine Learning Scientist at Georgian, collaborating with companies to accelerate their AI product development. Before joining Georgian, she was a research assistant at the Vector Institute, working at the intersection of machine learning and healthcare, focusing on explainability and causality. From explainability, time series, outlier detection to LLMs, she applies the latest techniques to enhance product differentiation.

Talk Track: Workshop

Talk Technical Level: 3/7

Talk Abstract:
Generative AI is poised to disrupt multiple industries as enterprises rush to incorporate AI into their product offerings. The primary driver of this technology has been the ever-increasing sophistication of Large Language Models (LLMs) and their capabilities. In the first innings of Generative AI, a handful of third-party vendors have led the development of foundational LLMs and their adoption by enterprises. However, development of open-source LLMs has made massive strides lately, to the point where they compete with or even outperform their closed-source counterparts. This competition presents a unique opportunity for enterprises who are still navigating the trenches of Generative AI and how best to utilize LLMs to build enduring products. This workshop (i) showcases how open-source LLMs fare when compared to closed-source LLMs, (ii) provides an evaluation framework that enterprises can leverage to compare and contrast different LLMs, and (iii) introduces a toolkit to enable easy fine-tuning of LLMs followed by unit-testing (https://github.com/georgian-io/LLM-Finetuning-Toolkit).

What You’ll Learn:
By the end of this workshop, learn how to create instruction-based datasets, fine-tune open-source LLMs via ablation studies and hyperparameter optimization, and unit-test fine-tuned LLMs.
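The first of those steps, creating an instruction-based dataset, can be sketched as below. The Alpaca-style template, field names, and raw record are our illustrative assumptions; real toolkits, including the linked LLM-Finetuning-Toolkit, define their own formats.

```python
# Small sketch of turning raw (question, answer) records into
# instruction-tuning examples, serialized one JSON object per line.
import json

TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

raw = [
    {"question": "Summarize: The meeting moved to Friday.",
     "answer": "Meeting moved to Friday."},
]

def to_instruction_record(row):
    """Map a raw pair onto instruction/input/output fields."""
    return {
        "instruction": "Summarize the text.",
        "input": row["question"].removeprefix("Summarize: "),
        "output": row["answer"],
    }

dataset = [to_instruction_record(r) for r in raw]

# One JSONL line per training example is a common on-disk format.
print(json.dumps(dataset[0]))
print(TEMPLATE.format(**dataset[0]))
```

Fine-tuning runs then consume the rendered template as the training text, with ablations varying the template, base model, and hyperparameters.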

Prerequisite Knowledge (if required)
Python + Familiarity with concepts such as prompt designing and LLMs

Talk: From Chaos to Control: Mastering ML Reproducibility at Scale

Presenter:
Amit Kesarwani, Director, Solution Engineering, lakeFS

About the Speaker:
Amit heads the solution architecture group at Treeverse, the company behind lakeFS, an open-source platform that delivers a Git-like experience to object-storage based data lakes.

Amit has 30+ years of experience as a technologist working with Fortune 100 companies as well as start-ups, designing and implementing technical solutions for complicated business problems.

As an entrepreneur, he launched a cloud offering to provide Data Warehouse as a Service. Amit holds a Master’s certificate in Project Management from George Washington University and a bachelor’s degree in Computer Science and Technology from Indian Institute of Technology (IIT), India. He is the inventor of the patent: System and Method for Managing and Controlling Data

Talk Track: Virtual Talk

Talk Technical Level: 6/7

Talk Abstract:
Machine learning workflows are not linear; experimentation is an iterative, repetitive back-and-forth process between different components. It often involves experimenting with different data-labeling techniques, data cleaning, preprocessing, and feature-selection methods during model training, just to arrive at an accurate model.

Quality ML at scale is only possible when we can reproduce a specific iteration of an ML experiment, and this is where data is key. This means capturing the version of training data, ML code, and model artifacts at each iteration is mandatory. However, to efficiently version ML experiments without duplicating code, data, and models, data versioning tools are critical. Open-source tools like lakeFS make it possible to version all components of ML experiments without the need to keep multiple copies, and as an added benefit, save you storage costs as well.

What You’ll Learn:
In this talk, you will learn how to use a data versioning engine to intuitively and easily version your ML experiments and reproduce any specific iteration of the experiment.

This talk will demo through a live code example:
• Creating a basic ML experimentation framework with lakeFS (on Jupyter notebook)
• Reproducing ML components from a specific iteration of an experiment
• Building intuitive, zero-maintenance experiments infrastructure

Talk: Modular Solutions for Knowledge Management at Scale in RAG Systems

Presenters:
Adam Kerr, Senior Machine Learning Engineer, Bell Canada | Lyndon Quadros, Senior Manager, Artificial Intelligence, Bell Canada

About the Speakers:
Adam is a senior machine learning engineer on Bell Canada’s Customer Ops DEAI team and one of the key architects of Bell Canada’s ML platform, Maverick. His primary objective: develop an opinionated set of products and configurations to deploy end-to-end machine learning solutions using recommended infrastructure, targeted at teams starting out on their ML journeys.

Lyndon Quadros has led teams that build and manage ML, AI, and data engineering solutions on the cloud at enterprise scale, and currently leads an MLOps and ML Engineering team at Bell. His current work focuses on Generative AI applications, AI infrastructure, and MLOps standards and processes.

Talk Track: Research or Advanced Technical

Talk Technical Level: 6/7

Talk Abstract:
An important component of any RAG system or application is the underlying knowledge base that the bot or application uses.

At Bell, we have built and adopted modular document embedding pipelines that allow some level of customization in the processing, ingestion and indexing of documents so that the peculiarities of various use cases and their raw documents can be efficiently indexed and used in their RAG applications. These pipelines also support both batch and incremental updates to the knowledge bases, with capabilities for automatically updating the indexes when documents are added to or removed from their source location. The modular nature also enables these pipelines to integrate with various document sources. These are supplemented with processes and conventions to ensure the efficient management and governance of the indexes at scale, providing a standardized framework for large-scale RAG applications at an enterprise level.

What You’ll Learn:
How to approach embedding pipelines and document management for RAG systems in a hybrid batch/incremental fashion
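The batch/incremental split described above can be sketched generically. Everything here is our illustrative stand-in, not Bell's pipeline: component names are invented, and a hash substitutes for a real embedding model.

```python
# Condensed sketch of a modular embedding pipeline supporting both a
# full batch build and incremental add/remove updates to the index.
import hashlib

def chunk(text, size=40):
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk_text):
    # Stand-in for a real embedding model.
    return hashlib.sha1(chunk_text.encode()).hexdigest()

class Index:
    def __init__(self):
        self.vectors = {}        # (doc_id, chunk_no) -> embedding

    def upsert(self, doc_id, text):
        self.remove(doc_id)      # re-chunk the whole document on update
        for i, c in enumerate(chunk(text)):
            self.vectors[(doc_id, i)] = embed(c)

    def remove(self, doc_id):
        self.vectors = {k: v for k, v in self.vectors.items()
                        if k[0] != doc_id}

def batch_build(index, documents):
    for doc_id, text in documents:
        index.upsert(doc_id, text)

def incremental_update(index, added=(), removed=()):
    for doc_id, text in added:
        index.upsert(doc_id, text)
    for doc_id in removed:
        index.remove(doc_id)

index = Index()
batch_build(index, [("faq-1", "How to reset a modem."),
                    ("faq-2", "Billing cycles explained.")])
incremental_update(index, removed=["faq-2"])
print(len({doc for doc, _ in index.vectors}))  # 1 document remains
```

Swapping the chunker, embedder, or document connector is then a matter of replacing one function, which is the modularity the talk emphasizes.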


Who Attends

Attendee breakdown (counter figures not captured): Data Practitioners, Researchers/Academics, Business Leaders

2023 Event Demographics (percentages not captured): Highly Qualified Practitioners, Currently Working in Industry, Attendees Looking for Solutions, Currently Hiring, Attendees Actively Job-Searching

2023 Technical Background

Expert/Researcher: 18.5%
Advanced: 44.66%
Intermediate: 27.37%
Beginner: 9.39%

2023 Attendees & Thought Leadership (counts not captured): Attendees, Speakers, Company Sponsors

Business Leaders: C-Level Executives, Project Managers, and Product Owners will get to explore best practices, methodologies, principles, and practices for achieving ROI.

Engineers, Researchers, Data Practitioners: Will get a better understanding of the challenges, solutions, and ideas being offered via breakouts & workshops on Natural Language Processing, Neural Nets, Reinforcement Learning, Generative Adversarial Networks (GANs), Evolution Strategies, AutoML, and more.

Job Seekers: Will have the opportunity to network virtually and meet 30+ top AI companies.

Ignite: What is an Ignite Talk?

Ignite is an innovative and fast-paced style used to deliver a concise presentation.

During an Ignite Talk, presenters discuss their research using 20 image-centric slides which automatically advance every 15 seconds.

The result is a fun and engaging five-minute presentation.

You can see all our speakers and full agenda here

Get our official conference app
For feature details, visit Whova