
Over the past decade of hosting sessions at TMLS, from traditional ML to deep learning, LLMs, and now agentic systems, the most valuable lessons have rarely come from what worked perfectly.
They have come from systems that had to be kept alive:
- models that drifted quietly after launch
- platforms that became expensive under real usage
- pipelines that broke in ways no experiment predicted
- tradeoffs that only surfaced months into production
Yet much of what gets amplified on conference stages is still shaped upstream, by trends, novelty, or what presents cleanly, rather than by long-term operational experience.
The 10th Annual Toronto Machine Learning Summit (TMLS 2026) is deliberately structured to counter that pattern.
The Gap Between Experimentation and Judgment
Most AI practitioners recognize the gap immediately.
Experimentation rewards clarity and speed.
Production rewards durability, judgment, and restraint.
But many conferences blur that distinction. As a result:
- early-stage ideas dominate agendas
- operational tradeoffs get compressed or skipped
- the people carrying production responsibility have limited influence over what’s shared
This isn’t a content quality problem. It’s a governance problem.
The Role of the TMLS Steering Committee
TMLS is shaped by a practitioner-led Steering Committee: an operational review body made up of respected practitioners from within our community.
Committee members are experienced practitioners who:
- review speaker submissions in detail
- evaluate relevance to real production environments
- pressure-test ideas against constraints like scale, cost, reliability, and governance
- identify recurring patterns across years of deployed systems
This process exists to answer one core question consistently:
Does this reflect insight earned by operating real systems, or merely by describing them?
That distinction guides what ultimately earns a place on the TMLS stage.
Why Production Experience Matters in Curation
The hardest ML lessons don’t announce themselves early.
They emerge over time:
- after systems are stressed
- after ownership transfers
- after initial success fades and maintenance begins
These lessons are often uncomfortable, incomplete, or context-heavy, which is exactly why they need experienced judgment to surface and frame them responsibly.
The steering committee exists to ensure:
- production-earned insight isn’t crowded out by surface-level success stories
- tradeoffs and constraints are treated as first-class knowledge
- the agenda reflects what actually advances practice, not just awareness
This is pattern recognition built over years, not a one-off selection exercise.
Who This Model Is For
The steering committee model is designed for people whose experience translates into judgment across practice, research, and leadership, including:
- Senior applied ML engineers with production ownership
- ML platform owners accountable for reliability, cost, or long-term operation
- Infra and MLOps leads supporting deployed systems at scale
- Researchers whose work informs real systems and decisions beyond prototypes
- AI and data leaders responsible for technical direction, risk, and sustained impact
If you’ve lived with systems after launch, whether by building them, maintaining them, researching their limits, or being accountable for their outcomes, this model should feel familiar and necessary.
Learn More About Participating
TMLS asks practitioners to do more than submit or attend.
It invites them to help decide what deserves a stage.
Steering Committee members contribute by reviewing submissions, shaping the program, and, yes, helping amplify work they believe the community should see.
If you’re curious about how the Steering Committee works (its role, responsibilities, and expectations), you can explore the details and assess whether participation aligns with your experience.