Submit to Speak
Summit Details
- June 13 → Virtual session day (talks + workshops)
- June 16-17 → In-person talks at CIBC Square
- June 18 → In-person workshops at MaRS
- Deadline to submit → April 30th
TMLS is working with community members to re-imagine a collaborative “connected community”.
We’re working to empower our community members and to propel successful AI applications and AI research on both local and global stages.
Our community members are developing AI/ML effectively and responsibly across all industries.
Conference Agenda Tracks
Data Preparation and Processing
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Foundational talks with no prerequisites. Covers basic data cleaning, transformation, and pipeline automation.
Example Talk: Intro to Data Cleaning: Best Practices & Pitfalls
Level: 200 – Intermediate
Description: Covers practical data processing techniques, including handling missing data, feature engineering, and scaling pipelines. May introduce ML-oriented preprocessing.
Example Talk: Scaling Data Processing with Apache Spark or Other Distributed Computing Tools, Deep Learning Use Cases
Level: 300 – Advanced
Description: Focuses on optimizing pipelines, integrating ML models in preprocessing, and handling high-dimensional/multimodal data.
Example Talk: LLM-Based Data Augmentation & Synthetic Data Generation, Optimizations, and Cost Management
Level: 400 – Expert
Description: Explores cutting-edge techniques in data processing for LLMs, OCR, and multimodal AI. Assumes deep expertise.
Example Talk: Building Robust Data Pipelines for Multimodal AI: Text, Image, and Audio
Vertical Enterprise AI Agents in Production
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introductory talk with no prerequisites. A high school student should be able to follow along, and many attendees in higher tracks could give this talk. It offers an introduction to Vertical AI agents that builds the foundation for advanced concepts. Could be a business case study.
Example Talk: Introduction to agent orchestration
Level: 200 – Intermediate
Description: An intermediate talk with light prerequisites that could be acquired during the course of the conference. A light intro may be given in this talk. Should assume attendees have built or used a basic agent, and should not spend more than 20% of the talk on an intro to agents. Should include key takeaways with real-life examples. Talks using a specific framework should show a demo in action.
Example Talk: Pros and cons of multi-agent vs. single-agent orchestration
Level: 300 – Advanced
Description: An advanced talk aimed at sharing best practices. These talks should get people excited to attend the conference. This talk should present advanced material that has novelty or handles a complex case study. Attendees would find limited information on this topic elsewhere on the internet.
Example Talk: Evaluations comparing DSPy-optimized customer support agents vs. hand-developed prompts
Level: 400 – Expert
Description: An expert-level talk that requires significant prerequisites. This talk is designed for peers with expertise, and senior attendees should look forward to these talks.
Example Talk: Technical design decisions when choosing abstraction patterns for LangChain
Gen AI Deployments In Regulated Industries
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introductory talk with no prerequisites. A high school student should be able to follow along, and many attendees in higher tracks could give this talk. Focus on simpler case studies or an introduction to regulation.
Example Talk: Intro to GDPR-compliant GenAI
Level: 200 – Intermediate
Description: An intermediate talk with light prerequisites that could be acquired during the course of the conference. A light intro may be given in this talk.
Example Talk: LLM Customer Support in mobile banking
Level: 300 – Advanced
Description: An advanced talk aimed at sharing best practices. These talks should get people excited to attend the conference.
Example Talk: How OSFI regulates LLM usage
Level: 400 – Expert
Description: An expert-level talk that requires significant prerequisites. This talk is designed for peers with expertise, and senior attendees should look forward to these talks.
Example Talk: Deploying LLMs with differential privacy – a technical deep dive
AI For Productivity Enhancements
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introductory talk with no prerequisites. A high school student should be able to follow along, and many attendees in higher tracks could give this talk. Focus on simpler case studies or an introduction to the topic.
Example Talk: Intro to GDPR-compliant GenAI
Recommended Ratio: 30%
Level: 200 – Intermediate
Description: An intermediate talk with light prerequisites that could be acquired during the course of the conference. A light intro may be given in this talk.
Example Talk: LLM Customer Support in mobile banking
Recommended Ratio: 30%
Level: 300 – Advanced
Description: An advanced talk aimed at sharing best practices. These talks should get people excited to attend the conference.
Example Talk: How OSFI regulates LLM usage
Recommended Ratio: 30%
Level: 400 – Expert
Description: An expert-level talk that requires significant prerequisites. This talk is designed for peers with expertise, and senior attendees should look forward to these talks.
Example Talk: Deploying LLMs with differential privacy – a technical deep dive
Recommended Ratio: 10%
MLOps For Smaller Teams
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introductory talk with no prerequisites. A high school student should be able to follow along, and many attendees in higher tracks could give this talk.
Example Talk: Intro to MLOps for small teams
Level: 200 – Intermediate
Description: An intermediate talk with light prerequisites that could be acquired during the course of the conference. A light intro may be given in this talk.
Example Talk: Building an effective data storage pipeline with limited resources
Level: 300 – Advanced
Description: An advanced talk aimed at sharing best practices. These talks should get people excited to attend the conference.
Example Talk: Automating evaluation and monitoring of your LLM in production
Level: 400 – Expert
Description: An expert-level talk that requires significant prerequisites. This talk is designed for peers with expertise, and senior attendees should look forward to these talks.
Example Talk: Cost effective implementation of federated learning for computer vision
AI Ethics And Governance Within The Organization
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introduce fundamental concepts such as why ethical AI matters and how people influence AI development and deployment. Explore the scope of AI governance, discuss AI regulatory and compliance challenges, and cover what features and functionality are required of the tools that support AI Governance processes.
Example Talk:
- AI Literacy & Culture: creating a shared understanding of AI, AI Risks and AI Governance concepts.
- What does AI literacy mean, and how do we assess it?
- Why ethical AI matters. Understanding AI fairness and implications of bias.
- How people influence AI development.
- Understanding AI bias and how to recognize it.
- Regulatory and compliance challenges that drive the need for AI Governance.
- Regulatory landscape (including discussion about standards): EU AI Act, GDPR, US regulations, ISO and NIST
- AI Governance support tools - features and functionality
Level: 200 – Intermediate
Description: Dive deeper into how to actualize organizational values when building or using AI.
Build a foundational understanding of AI governance and introduce vital disciplines that underpin AI governance.
Explore explainability, output evaluation, and the assessment of AI system performance from quantitative and qualitative perspectives.
Example Talk:
- How we define and communicate AI values within our organization.
- AI Governance Maturity is a journey and ours has just begun.
- AI Risk Management fundamentals.
- User experience for useful AI
- AI Governance primer. What is AI governance? An introduction to principles, risks, and responsibilities.
- AI Privacy and Security - A summary
- AI Product Management
- Data Governance for compliant AI - avoiding privacy-breaking “garbage in, garbage out”
- AI Governance by Design
- AI System Explainability
- AI Quantitative Performance Beyond Accuracy
- How do we evolve our performance and governance understanding as new techniques pop up? (e.g. Agentic…)
- Taking AI back to school, designing performance rubrics
Level: 300 – Advanced
Description: Explore the implications of AI in human decision-making.
Dive into practical implementation strategies for ensuring responsible AI development.
AI auditing practices and their role within a broader AI governance framework.
Example Talk:
- What should we call our AI practices and compliance? “Governed? Responsible?”
- Why has it been so hard?!
- AI governance in high-stakes domains: healthcare, finance, and law enforcement.
- AI-augmented decision-making.
- AI Governance frameworks and operating models
- How AI Governance fits into Enterprise Governance and why it must.
- Who should own AI Governance? Business? Product manager? IT?
- AI Governance and AI product lifecycle.
- AI Governance roles, responsibilities, and accountabilities.
- Defining processes and controls to implement AI Governance.
- Responsible AI Development Toolkit - Principles in Action
Level: 400 – Expert
Description: Shaping the future of AI ethics and governance in your organization’s context.
Best practices of AI Governance implementation.
AI Governance tools vendor landscape
Example Talk:
- Lessons learned - how we built AI that is governed, safe, etc. What worked, what didn’t, and why (from an organizational alignment and cultural perspective). (This could be a fireside chat or a roundtable.)
- What can we learn from our successes and failures?
- Defining policies framework to address AI governance across the organization
- Role of organizational change management in the adoption of AI governance
- Auditing practices of AI systems
- AI Governance support tools (Holistic AI, Watson Governance, and others)
Agent Zero-to-Hero
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Talks/workshops/etc. that discuss what agents are. A hands-off introduction to the idea of Agentic AI.
Attendees could follow without already understanding what agents “are”.
Examples: ReAct explainers, etc.
Example Talk: “What are Agents, anyway?”
Level: 200 – Intermediate
Description: Talks/workshops/etc. that dive into workflows or specific agentic case studies. Agents in “Prod”.
Example Talk: “How Telus is Leveraging Agents in Production”
Level: 300 – Advanced
Description: Specific Agent Frameworks, Agent Evaluation, SWE-Bench style agentic flows, Multi-Model Agents, Agents Leveraging Reasoning Models, etc.
Example Talk: “Evaluating End to End Agent Traces with XYZ”
Level: 400 – Expert
Description: Multi-Agent Workflows, Alternate Agent formulations (Graph Agents, etc.)
Example Talk: “Creating Deep Research with XYZ”
Multimodal LLMs
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Basic workings of multimodal LLMs
Example Talk: How does a multimodal LLM work? – GPT-4’s multimodal capabilities, basic image & text tasks, or real-world applications
Level: 200 – Intermediate
Description: Efficient Training of Vision-Language Models + Audio and Text LLMs
Example Talk: Joint training of multiple modalities, how to increase alignment across modalities, GPT Voice Mode
Level: 300 – Advanced
Description: Advanced Reasoning in Multimodal Models
Example Talk: SpatialVLM’s spatial reasoning dataset and chain-of-thought prompting
Level: 400 – Expert
Description: Unified architectures and joint optimizations in post-training
Example Talk: Reducing language-only bias of LLMs – MDPO: Conditional Preference Optimization for Multimodal Large Language Models
Hardware Platforms
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner: Foundational LLM acceleration concepts
Description:
- Quantization: Reducing model size
- Hardware Acceleration: architectures of hardware accelerators
Example Talk:
- Quantizing Neural Networks for Efficient AI Inference
- Introduction to AI Accelerator architectures
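To give prospective speakers a feel for the depth expected at this level, the core idea behind “Quantization: Reducing model size” can be sketched in a few lines of plain Python. This is a toy illustration (function names and example weights are made up for this sketch, and real systems add calibration, per-channel scales, and hardware-specific formats), not a production technique:

```python
def quantize_int8(weights):
    """Map float weights to int8 codes using one symmetric per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard against all-zero input
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

# Each weight is stored in 1 byte instead of 4, at a small accuracy cost.
codes, scale = quantize_int8([0.5, -1.27, 0.0, 1.0])
approx = dequantize(codes, scale)
```

A 100-level talk would walk through this kind of sketch and its trade-offs; higher levels in this track cover the advanced quantization and sparsity techniques listed below.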
Level: 200 – Intermediate: Practical LLM optimization and hybrid systems
Description:
- Hybrid LLM
- Inference optimization techniques
Example Talk:
- Hybrid LLM (using both integrated GPU and NPU on edge devices)
- Advanced quantization and optimization techniques for LLMs
- Sparsity in training and inference
Level: 300 – Advanced: Cutting-edge architectures and scaling
Description:
- New architectures and performance comparisons
Example Talk:
- AMD ROCm and MI300
- Cerebras WSE-3: Training Trillion-Parameter LLMs
- d-Matrix DIMC: Ultra-Low Latency for Interactive AI
- Groq LPUs: 10x Faster Inference at 1/10th the Energy
- Tenstorrent’s ASICs for LLM training.
Level: 400 – Expert: Frontier research and visionary ideas
Description:
- Advanced techniques and future trends
Example Talk:
- Mixture-of-Experts, multimodality
- Ultra long context in LLMs
Inference Scaling
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Track Goal
The aim for the Inference Scaling track is to shed light on the journey of “taking ML-powered applications from 1 to 100” – showcasing both the established patterns and cutting-edge innovations that enable machine learning to operate reliably at massive scale. Whether it’s serving millions of users, handling billions of requests, or optimizing for cost and performance, this track explores what it takes to build robust inference systems in the real world.
Target Audience
This track is for anyone tackling ML deployment challenges or for those who want to optimize / scale their current system. The focus is on providing practical insights.
Track Rubric
Level: 100 – Beginner
Description: Introductory talks exploring business, operational, and infrastructural challenges of scaling inference. No technical prerequisites required.
Example Talk:
- The Hidden Costs of Inference: Budgeting and Managing ML Infrastructure at Scale.
- Building and Managing ML Platform Teams: Organizational Strategies for Scaling Inference
- From Lab to Production: Business Considerations When Deploying ML at Scale
- Lessons Learned from developing search solutions at scale
Potential Speaker Persona:
- ML/AI Product Managers
- Engineering Directors
- MLOps Team Leads
- Technical Program Managers
Suggested Session Type: Panel Discussion
Level: 200 – Intermediate
Description: Talks exploring practical scaling approaches using established tools and architectures. Focus on implementations with popular open-source components and standard design patterns.
Example Talk:
- Building Scalable Resilient ML Pipelines with Kafka, Redis and Ray
- Horizontal Scaling Patterns for Inference: Load Balancers, Service Meshes, and Auto-scaling Groups
- How to deploy efficient Hybrid Search at Scale
- Containerization Strategies for ML Workloads: Kubernetes, Docker Swarm, and Nomad
- Deploying LLMs at scale using vLLM
Potential Speaker Persona:
- Senior ML Engineers
- MLOps Engineers
- Infrastructure Engineers
- DevOps Specialists
- ML Platform Engineers
Level: 300 – Advanced
Description: Advanced talks showcasing systems where at least one component involves a novel innovation or significant optimization beyond standard implementations. These innovations may span modeling, infrastructure, pipeline design, or system architecture.
Example Talk:
- Novel Model: Macro Graph Neural Networks: Scaling GNNs to Billion-Node Graphs for Real-time Recommendations
- Novel Algo: Faster, more flexible byte-pair tokenizer from GitHub
- Novel Infra: Building a Vector Store for Billion-Scale Embeddings (CUDA-Accelerated ANN Search)
- Novel System Arch: Event-driven Inference Architecture: Predictive Model Pipeline Optimization for High-throughput Time Series Processing
- Novel optimizations: Deploying LLMs at scale with custom optimizations like distillation, pruning, 2D parallelism, chunked prefill, a custom inference server, etc.
Potential Speaker Persona:
- Principal/Staff ML Engineers
- Research Engineers
- ML Architects
- Performance Engineers
Level: 400 – Expert
Description: Expert-level discussions featuring systems with multiple novel components or single breakthroughs of exceptional significance. These talks should represent the cutting edge of inference scaling research and engineering.
Example Talk:
- Novel Model: Mamba-MoE: Combining State Space Models with Mixture-of-Experts
- Novel Model Component: Optimized Attention (Multi-Head Latent Attention, Ring Attention) Mechanisms
- Novel System Arch: Scaling test-time compute
- Novel optimization: Kernel optimizations for specialized ML hardware accelerators (e.g., LinkedIn’s Liger Kernel)
- Novel optimization: Efficient optimizers (Adam, AdaFactor, AdaLayer, etc.)
Potential Speaker Persona:
- Distinguished Engineers
- Principal Researchers
- Chief Scientists
- Technical Fellows
- Research Leads
Guidelines for Talks
Broad ML Scope, Unified by Inference
While LLMs are a hot topic, this track is not limited to them. We should aim for diversity in our talks on scaling inference across various ML applications like forecasting, recommendation systems, computer vision, real-time fraud detection, and beyond. The unifying theme is inference at scale, regardless of the ML domain.
Overlapping Topics, But a Clear Focus
Some talks may naturally intersect with other tracks—such as How to deploy AI agents in production, which could fit into an agent-focused track. However, the defining element of our track is the journey to scale—whether it’s the technical innovations, engineering decisions, or operational strategies that made it possible.
Real-World Case Studies Over Theoretical Discussions
Talks should be grounded in personal, real-world experiences rather than broad generalizations or hypothetical concepts. We prefer case studies that present:
• Clear, validated results showing system performance before and after scaling—supported by research papers, blog posts, open-source projects, or internal evaluations.
• Deep analysis of previous system limitations, the decision-making process, and the trade-offs considered.
• Practical takeaways—insights, lessons learned, and concrete recommendations that attendees can apply in their own work.
Substance Over Sales Pitches
Speakers from tooling companies and compute providers are welcome, but talks should go beyond just a product demo. If the talk feels like an extended sales pitch, it’s not the right fit for this track.
Future Trends
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introductory talk with no prerequisites. A high school student should be able to follow along, and many attendees in higher tracks could give this talk.
Example Talk: Intro to roles in the ML and AI Space
Level: 200 – Intermediate
Description: An intermediate talk with light prerequisites drawn from career experience. The talk should be accessible to most audiences but especially impactful for those in the relevant situation.
Example Talk:
Should I become an ML manager?
5 lessons learned from being a staff engineer
Preparing for an ML interview
Level: 300 – Advanced
Description: A talk focused on specific career situations and how to navigate them. Certain situations might be harder to learn from without first-hand experience.
Example Talk:
Transitioning from Academia to Industry
5 differences between AI and standard product management
Level: 400 – Expert
Description: An expert-level talk that requires significant prerequisites. This talk is designed for peers with expertise, and senior attendees should look forward to these talks.
Example Talk: Deploying LLMs with differential privacy – a technical deep dive
Recommended Ratio: 10%
Careers
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introductory talk with no prerequisites. A high school student should be able to follow along, and many attendees in higher tracks could give this talk. Focus on simpler case studies or an introduction to the field.
Example Talk: Intro to GDPR-compliant GenAI
Level: 200 – Intermediate
Description: An intermediate talk with light prerequisites drawn from career experience. The talk should be accessible to most audiences but especially impactful for those in the relevant situation.
Example Talk:
Should I become an ML manager?
5 lessons learned from being a staff engineer
Preparing for an ML interview
Level: 300 – Advanced
Description: A talk focused on specific career situations and how to navigate them. Certain situations might be harder to learn from without first-hand experience.
Example Talk:
Transitioning from Academia to Industry
5 differences between AI and standard product management
Level: 400 – Expert
Description: An expert-level talk that requires significant prerequisites. This talk is designed for peers with expertise, and senior attendees should look forward to these talks.
Example Talk: Deploying LLMs with differential privacy – a technical deep dive
Recommended Ratio: 10%
Executive Track
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introductory talk with no prerequisites. A high school student should be able to follow along, and many attendees in higher tracks could give this talk. Focus on simpler case studies or an introduction to regulation.
Example Talk: Intro to GDPR-compliant GenAI
Level: 200 – Intermediate
Description: An intermediate talk with light prerequisites that could be acquired during the course of the conference. A light intro may be given in this talk.
Example Talk: LLM Customer Support in mobile banking
Level: 300 – Advanced
Description: An advanced talk aimed at sharing best practices. These talks should get people excited to attend the conference.
Example Talk: How OSFI regulates LLM usage
Level: 400 – Expert
Description: An expert-level talk that requires significant prerequisites. This talk is designed for peers with expertise, and senior attendees should look forward to these talks.
Example Talk: Deploying LLMs with differential privacy – a technical deep dive
Track Overview
The Executive Track is designed to provide business leaders with actionable insights on AI implementation, strategy, and governance. Sessions are tailored to different expertise levels, ensuring value for executives at any stage of their AI journey.
Themes & Subthemes
100 Level: Introduction to AI & Business Impact
• What is AI? - Essential concepts, terminology, and technologies.
• AI vs. Traditional Automation - Key differences and complementary strengths.
• Common AI Use Cases Across Industries - Practical applications with proven ROI.
• Ethical AI & Business Risk Management - Identifying and mitigating potential issues.
200 Level: AI Strategy & Competitive Advantage
• Aligning AI with Business Strategy - Frameworks for integration and prioritization.
• AI-Driven Decision Making - Enhancing executive decision processes with AI insights.
• AI & Customer Experience Transformation - Creating value through personalization and efficiency.
• Building AI-Ready Organizations - Developing talent, culture, and infrastructure.
300 Level: AI Implementation & Scaling
• Measuring AI ROI & Performance Metrics - Frameworks for quantifying business impact.
• Overcoming AI Adoption Challenges - Strategies for addressing technical and organizational barriers.
• Scaling AI Solutions Enterprise-Wide - Moving from pilots to production systems.
• Case Studies of Successful AI Transformations - Learning from industry leaders across sectors.
400 Level: AI Governance, Risk, and Future Trends
• AI Governance & Risk Management - Building robust oversight mechanisms.
• AI Regulation & Compliance Strategies - Navigating the evolving regulatory landscape.
• The Future of AI in Enterprise Strategy - Preparing for emerging technologies and capabilities.
• AI Ethics, Bias, and Security Challenges - Advanced approaches to responsible AI deployment.
Speaker Selection Criteria
• Demonstrated expertise in AI strategy, implementation, or governance.
• Experience at executive or senior leadership level.
• Ability to communicate complex concepts in business-relevant terms.
• Preference for speakers with cross-industry experience or notable case studies.
Expected Outcomes for Attendees
• Beginner Level: Understanding of AI fundamentals and relevant business applications
• Intermediate Level: Strategic frameworks for AI implementation and organizational alignment
• Advanced Level: Practical approaches to scaling AI and measuring business impact
• Expert Level: Sophisticated strategies for AI governance and future-proofing
Interactive Components
We recommend incorporating the following interactive elements:
• Executive roundtables for peer-to-peer learning.
• AI demonstration showcases featuring enterprise-ready solutions.
• Networking sessions specifically for business leaders.
• Interactive workshops for practical strategy development.
Traditional ML
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introductory talk with no prerequisites. A high school student should be able to follow along, and many attendees in higher tracks could give this talk.
Example Talk:
Intro to forecasting
Regression in the real world
Level: 200 – Intermediate
Description: An intermediate talk with light prerequisites that could be acquired during the course of the conference. A light intro may be given in this talk.
Example Talk:
Intro to ModernBERT
Recommendation Systems on a budget
Level: 300 – Advanced
Description: An advanced talk aimed at sharing best practices. These talks should get people excited to attend the conference.
Example Talk:
Forecasting in a world of virality
Comparing traditional OCR to Gen AI methods
Level: 400 – Expert
Description: An expert-level talk that requires significant prerequisites. This talk is designed for peers with expertise, and senior attendees should look forward to these talks.
Example Talk: Open questions in tabular data
Open-Source Model Finetuning (Workshop Track)
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Introductory talk with no prerequisites beyond general coding ability. A high school student should be able to follow along, and many attendees in higher tracks could give this talk.
Example Talk: Intro to finetuning Llama
Level: 200 – Intermediate
Description: An intermediate talk with light prerequisites that could be acquired during the course of the conference. A light intro may be given in this talk.
Example Talk: Comparing SFT to DPO in LLM finetuning
Level: 300 – Advanced
Description: An advanced talk aimed at sharing best practices. These talks should get people excited to attend the workshops. Can assume learners have significant prerequisites and have read a corresponding paper.
Example Talk: Training a LLM from scratch
Level: 400 – Expert
Description: An expert-level talk designed for peers with significant expertise and prerequisites.
Example Talk: The traps of fine-tuning an MoE
Advanced RAG (Workshop Track)
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Level: 100 – Beginner
Description: Advanced Text Retrieval Strategies, RAG in Production case studies
Example Talk: “How Telus is Leveraging RAG in Production”
Recommended Ratio: 30%
Level: 200 – Intermediate
Description: Advanced Text Retrieval Strategies, RAG in Production case studies
Example Talk: “How Telus is Leveraging RAG in Production”
Level: 300 – Advanced
Description: Multilingual RAG, Real-Time RAG, RAG with Agents, fresher RAG techniques (KV-cache goodness can fit here), Intro to Graph RAG, Evaluating RAG
Example Talk: “International RAG: How to Design One System for Everyone”
Level: 400 – Expert
Description: Graph RAG Case Studies, Multimodal Retrieval, RAG at Scale
Example Talk:
Negative Results
If you’re submitting to speak in this track, review the example rubric below for context before selecting your talk’s technical level (100-400).
Track Goal
The path to progress in AI is paved not just with successes but with what’s learned from what didn’t work. The goal of the Negative Results Track is to encourage speakers, particularly experts in the field, to share the untold stories of failure in the AI community, ranging from prosaic setbacks to spectacular catastrophes. This track seeks diverse submissions that challenge conventional thinking, spark debate, and ultimately help our community more efficiently build resilient, responsible, and beneficial AI systems.
Target Audience
Standing on the shoulders of giants means not repeating the mistakes they made, but giants rarely share their fumbles. This track offers rarely shared insights that anyone in the industry will be eager to learn from.
Level: 100 – Beginner
Description: Introductory talks exploring common pitfalls and learning experiences in AI development.
No prerequisite knowledge.
Example Talk: How to Break into AI by Learning to Fail Fast with Acme.edu
Level: 200 – Intermediate
Description: Talks diving into specific challenges and failures from various AI subfields.
Will require a light primer or some prerequisite knowledge.
Example Talk: Intro to Evaluation Strategies & Pitfalls to Avoid: GenAI Models in Production at Acme Inc
Level: 300 – Advanced
Description: Analyses of significant AI failures requiring deep dives into complex AI systems or business domains.
Will require a moderate primer and some prerequisite knowledge of AI, domain, or business concepts.
Example Talk: Behind-the-Scenes Challenges Building and Deploying Multi-agent Systems at Acme Inc
Level: 400 – Expert
Description: Talks exploring cutting-edge research or application, e.g. topics of AI safety, robustness, and failure prevention at scale.
Designed for senior, technical peers with extensive prerequisite knowledge.
Example Talk: Mitigating Adversarial Attacks in the Wild: Experiences from Acme AI R&D Lab
Submission Guidelines
- Real-world > Theory: Talks must share genuine experiences and perspectives based on personal, real-world scenarios and events, not hypotheticals, broad generalizations, or observations. Think case studies.
- Practical Insights: Go beyond simply recounting failures. Analyze the underlying reasons, the decision-making process, and the lessons learned. Provide takeaways and recommendations for the audience to apply in their own work.
- Honesty and Transparency: Encourage open and honest discussions about failures, fostering a community culture of learning from mistakes.
- Diverse Perspectives: Submissions are welcome from various domains and backgrounds (e.g., research, applied engineering, business).
- Focus on Failure: Talks may end with a happy ending or a silver lining, but the majority of the talk contents should align with the spirit of the track and share lessons learned from failure.