Canada’s Summit for Applied AI
Bringing together the researchers, practitioners, and leaders putting AI into practice across Canada.
REGISTRATION NOW OPEN
JUNE 16-19 / CIBC SQUARE, TORONTO
Why Attend
Now in its 10th year, TMLS hosts cutting-edge research, hands-on workshops, and vetted industry case studies, all reviewed by the Committee.
We emphasize community, learning, and accessibility.
60+ Speakers
4-Day Summit
400+ Attendees
Preview our Sessions
TRADITIONAL ML
Dynamic Models: Testing, Governance and Implementation
Frederic Marier / CIBC
GEN AI DEPLOYMENTS
Leveraging Cost-effective GenAI to Enable Compliance While Boosting Efficiency
Pierre-Luc Vaudry / National Bank of Canada
MLOPS FOR SMALLER TEAMS
Getting Your Custom AI Inferencing Pipeline Started with Vector’s AI Deployment Bootcamp Reference Implementation
AI ETHICS AND GOVERNANCE
Guide to Responsible Governance of GenAI in Organizations
Lucas Hartman / Western University
Shabnam Hassani / Vector Institute for Artificial Intelligence
Featured Speakers
Dawn Song
ABOUT THE SPEAKER:
Dawn Song is a Professor in Computer Science at UC Berkeley and Co-Director of the Berkeley Center for Responsible Decentralized Intelligence. Her research interests lie in AI safety and security, agentic AI, deep learning, security and privacy, and decentralization technology. She is the recipient of numerous awards including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, the ACM SIGSAC Outstanding Innovation Award, and more than 10 Test-of-Time Awards and Best Paper Awards from top conferences in computer security and deep learning. She has been recognized as Most Influential Scholar (AMiner Award) for being the most cited scholar in computer security. She is an ACM Fellow, an IEEE Fellow, and an elected member of the American Academy of Arts and Sciences. She obtained her Ph.D. from UC Berkeley. She is also a serial entrepreneur and has been named to the Female Founder 100 List by Inc. and the Wired25 List of Innovators.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
TBA
WHAT YOU’LL LEARN:
TBA
Ion Stoica
ABOUT THE SPEAKER:
Ion Stoica is a Professor in the EECS Department and holds the Xu Bao Chancellor Chair at the University of California, Berkeley. He is the Director of the Sky Computing Lab and the Executive Chairman of Databricks and Anyscale. His current research focuses on AI systems and cloud computing, and his work includes numerous open-source projects such as vLLM, SGLang, Chatbot Arena, SkyPilot, Ray, and Apache Spark. He is a Member of the National Academy of Engineering, an Honorary Member of the Romanian Academy, and an ACM Fellow. He has also co-founded several companies, including LMArena (2025), Anyscale (2019), Databricks (2013), and Conviva (2006).
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
TBA
WHAT YOU’LL LEARN:
TBA
Manuela Veloso
ABOUT THE SPEAKER:
From 2018 to 2026, Manuela Veloso was the founder and Head of JPMorganChase AI Research. She is Herbert A. Simon University Professor Emerita at Carnegie Mellon University, where she served as faculty in the Computer Science Department and later as Head of the Machine Learning Department.
Veloso has a licenciatura degree in Electrical Engineering and an M.Sc. in Electrical and Computer Engineering from Instituto Superior Técnico, Lisbon, an M.A. in Computer Science from Boston University, and a Ph.D. in Computer Science from Carnegie Mellon University. Veloso has Doctorate Honoris Causa degrees from the Örebro University, Sweden, the Instituto Universitário de Lisboa (ISCTE), Portugal, the Université de Bordeaux, France, and the Universidade Católica of Portugal.
She served as president of the Association for the Advancement of Artificial Intelligence (AAAI), and she is co-founder and a Past President of the RoboCup Federation. She is a fellow of main professional organizations in her area, namely AAAI, IEEE, AAAS, and ACM. She is the recipient of the ACM/SIGART Autonomous Agents Research Award, the Einstein Chair of the Chinese Academy of Sciences, an NSF Career Award, and the Allen Newell Medal for Excellence in Research. Veloso is a member of the National Academy of Engineering with a citation “for contributions to artificial intelligence and its applications in robotics and the financial services industry.” She is also a member of the Academy of Sciences of Portugal.
Her research interests are in AI, including Autonomous Robots, Multiagent Systems, Continual Learning Agents, and AI in Finance. For further details, see www.cs.cmu.edu/~mmv.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
I will talk about AI agents, and multiagent systems, in particular. I will focus on the agent’s perception as the robust processing and sharing of information, the agent’s cognition as their planning and memory-based reasoning abilities, and the agent’s action as the capabilities to execute in their environment. While AI has the potential to assist humans with many tasks, the future aims at a seamless integration of humans and AI with AI agents able to collaborate and continuously learn. The talk will include examples of robot and digital agents.
WHAT YOU’LL LEARN:
AI agents have limitations; they rely on other agents and on humans to improve their performance over time.
Freddy Lecue
ABOUT THE SPEAKER:
Freddy Lecue is a Managing Director and Head of Frontier AI Model Methodology at Wells Fargo, where he architects and scales Generative AI, agentic AI, and advanced machine learning models for enterprise production, while balancing performance, latency, cost, and risk.
He leads the firm’s AI research agenda, elevates modeling standards through targeted training, and establishes best-practice frameworks to enhance robustness, scalability, and model validation. Freddy also drives AI-enabled transformation across the end-to-end model lifecycle, including development, documentation, testing, and validation.
Prior to Wells Fargo, he held senior AI leadership roles at JPMorgan Chase, Thales Canada, Accenture Ireland, and IBM Ireland. He holds a Ph.D. in Computer Science and is based in New York City.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
TBA
WHAT YOU’LL LEARN:
Armando Benitez
ABOUT THE SPEAKER:
Armando Benitez is the Chief Data & Analytics Officer (CDAO) and Head of AI at BMO Capital Markets. He leads a team of engineers, strategists, and AI professionals who create end-to-end solutions at the intersection of Finance and Technology.
As CDAO, Armando shapes the strategic vision for data and analytics, integrating AI into business processes to drive innovation and improve decision-making. His leadership promotes data-driven insights and aligns technological initiatives with business goals.
Armando joined BMO’s ETF desk in 2016 after working on data products for fraud detection and recommender systems at Paytm. With a background in High Energy Physics, he brings a unique perspective to the team.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
AI agents are moving rapidly from experimental prototypes to production systems embedded in critical business workflows. In regulated environments such as capital markets, deploying agents requires more than model performance. It requires governance, reliability, human oversight, and a clear path to measurable value.
WHAT YOU’LL LEARN:
We will discuss architectural patterns, governance frameworks, and operational lessons learned from deploying agents that interact with real data, real clients, and real risk.
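As a toy illustration of the human-oversight requirement described above, the sketch below routes agent actions through an approval gate based on a risk score. The threshold, the `AgentAction` fields, and the routing labels are all hypothetical examples, not BMO's actual governance framework.

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD = 0.7  # hypothetical; a real value comes from risk/compliance policy

@dataclass
class AgentAction:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (high risk), from an upstream risk model

def route_action(action: AgentAction) -> str:
    """Auto-execute low-risk steps; escalate everything else to a human reviewer."""
    if action.risk_score >= APPROVAL_THRESHOLD:
        return "escalate_to_human"
    return "auto_execute"

print(route_action(AgentAction("refresh market data cache", 0.1)))       # auto_execute
print(route_action(AgentAction("send client trade confirmation", 0.9)))  # escalate_to_human
```

In a regulated deployment the escalation branch would feed a review queue with an audit trail, so every high-risk action has a named human approver.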
Muhammad Mamdani
ABOUT THE SPEAKER:
Dr. Mamdani is Clinical Lead – Artificial Intelligence at Ontario Health and Director of the University of Toronto Temerty Faculty of Medicine Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM). Previously, Dr. Mamdani was Vice President of Data Science and Advanced Analytics at Unity Health Toronto where his team deployed over 50 AI solutions to improve patient outcomes and hospital efficiency. Dr. Mamdani is also Professor in the Department of Medicine of the Temerty Faculty of Medicine, the Leslie Dan Faculty of Pharmacy, and the Institute of Health Policy, Management and Evaluation of the Dalla Lana School of Public Health. He is also an Affiliate Scientist at IC/ES and a Faculty Affiliate of the Vector Institute. In 2024, Dr. Mamdani’s team received the national Solventum Health Care Innovation Team Award by the Canadian College of Health Leaders. Also in 2024, Dr. Mamdani was named international AI Leader of the Year by AIMed. Previously, Dr. Mamdani was named among Canada’s Top 40 under 40. He has published over 600 studies in peer-reviewed medical journals. Dr. Mamdani obtained a Doctor of Pharmacy degree (PharmD) from the University of Michigan (Ann Arbor) and completed a fellowship in pharmacoeconomics and outcomes research at the Detroit Medical Center. During his fellowship, Dr. Mamdani obtained a Master of Arts degree in Economics from Wayne State University with a concentration in econometric theory. He then completed a Master of Public Health degree from Harvard University with a concentration in quantitative methods.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
Artificial intelligence has the potential to transform healthcare yet its adoption has been slow. This presentation will review the potential for AI in healthcare using real world examples and discuss the challenges in its adoption.
WHAT YOU’LL LEARN:
TBA
Steven Waslander
ABOUT THE SPEAKER:
Prof. Steven Waslander is a leading authority on autonomous robotics, including self-driving cars and multirotor drones. He received his B.Sc.E. in 1998 from Queen’s University, and his M.S. in 2002 and Ph.D. in 2007, both in Aeronautics and Astronautics from Stanford University. He was recruited to the University of Waterloo from Stanford in 2008, where he led the Autonomoose project, the first self-driving car to be tested on public roads by a Canadian university. In 2018, he joined the University of Toronto Institute for Aerospace Studies (UTIAS) and founded the Toronto Robotics and Artificial Intelligence Laboratory (TRAILab).
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
Agentic reasoning for robots is rapidly becoming a reality, allowing flexible natural language interaction with human operators and enabling a wide range of navigation, object handling and recall tasks in a variety of settings. In this talk, Prof. Waslander will discuss the ongoing efforts in his lab to make useful agentic robots for the warehouse and outdoor settings, by integrating open world perception with agentic reasoning for reliable open world navigation, and by adding multi-faceted memory – spatial, descriptive and visual – to enable experience recall for temporal question answering. Together, these advances allow a wide variety of spatial, semantic, functional and temporal tasks to be completed by robots without any fine-tuning to specific domains.
WHAT YOU’LL LEARN:
Scaffolding around the agent is needed to make spatial intelligence possible; there is a big gap between mainstream LLM/MLLM uses and robotics, with much left to explore.
Travis DePuy
ABOUT THE SPEAKER:
Travis is an expert solutionizer who likes long walks in the park and tinkering with interesting technology.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
Fine-tuning feels like the natural next step when your model isn’t performing — but it’s often the wrong one. Before committing to a training run, it’s worth asking: have you fully exhausted what you can achieve without touching the weights?
In this talk, we’ll break down the tradeoffs between prompt optimization and fine-tuning — when each approach earns its cost, and what the signals look like in practice. We’ll make it concrete using Weights & Biases Models and Weave, walking through a real evaluation workflow that tracks experiments, surfaces behavioral differences, and helps you measure whether a change actually moved the needle.
There’s no universal answer to which approach wins. But there is a better way to find out — and it starts with having the right evals in place before you make the call.
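As a rough illustration of having the right evals in place, the sketch below scores two prompt variants on a tiny labeled set before any fine-tuning decision. Everything here is hypothetical: `call_model` is a stub standing in for a real LLM call, the eval set is invented, and this is plain Python rather than the Weights & Biases Models/Weave APIs the talk will use.

```python
def call_model(prompt: str, example: str) -> str:
    # Stub standing in for an LLM endpoint. For illustration only: the more
    # specific prompt handles one extra phrasing that the terse prompt misses.
    if "Answer" in prompt:
        return "positive" if ("great" in example or "works" in example) else "negative"
    return "positive" if "great" in example else "negative"

EVAL_SET = [
    ("great product", "positive"),
    ("works as expected", "positive"),
    ("arrived broken", "negative"),
    ("stopped functioning after a day", "negative"),
]

def accuracy(prompt: str) -> float:
    """Score one prompt variant against the labeled eval set."""
    hits = sum(call_model(prompt, text) == label for text, label in EVAL_SET)
    return hits / len(EVAL_SET)

baseline = accuracy("Classify the sentiment:")
candidate = accuracy("You are a sentiment rater. Answer 'positive' or 'negative':")
print(f"baseline={baseline:.2f} candidate={candidate:.2f}")  # baseline=0.75 candidate=1.00
```

If the candidate prompt closes the gap on a representative eval set, a fine-tuning run may not be worth its cost; if it plateaus, that is the signal to consider touching the weights.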
WHAT YOU’LL LEARN:
Alet Blanken
ABOUT THE SPEAKER:
Alet Blanken is Vice President of AI Engineering at Workday, where she leads the strategy, development, and deployment of Generative AI solutions that transform analytics across Looker, BigQuery, and large-scale databases. With over 15 years of experience building and leading high-performing engineering teams at Google Cloud, Amazon Web Services, and ACI Worldwide, she operates at the intersection of Generative AI and data analytics to deliver scalable, secure, and production-ready systems. Her work spans LLMs, retrieval-augmented generation (RAG), anomaly detection, and predictive modeling to unlock actionable insights and automate complex analytical workflows. Alet holds degrees in Information Technology and Industrial Psychology, along with a PMP and AWS Solutions Architect certifications, and brings a rare blend of deep technical expertise and human-centered leadership to the TMLS stage.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
Every software company claims to be becoming an AI company. In practice, most are re‑running the wrong playbook: treating AI like another infrastructure migration instead of the current shift in how products are designed, shipped, and operated. In this talk, Alet Blanken, VP of AI Engineering at Workday, shares a practitioner’s playbook for that transition, grounded in Workday’s journey building agentic systems in HR and finance at scale. She’ll cover why AI is not analogous to on‑prem → SaaS, how product design must start from the first demo and traffic patterns, and why code is now the cheapest part of the stack. Attendees will see how Workday structures its architecture around durable systems of record and action, fast iteration loops on real usage data, and a culture that treats reliability, latency, and trust as first‑class metrics. The goal is to leave with a realistic picture of what it takes for a software company to truly operate as an AI company.
WHAT YOU’LL LEARN:
Ketan Umare
ABOUT THE SPEAKER:
Ketan Umare is Co-Founder and CEO of Union.ai, an AI development infrastructure company helping organizations build, deploy, and scale production AI. Union.ai provides a single platform that unifies infra-aware orchestration, model training, inference, and compliance, enabling teams to escape pilot purgatory and ship AI faster.
Ketan is also a leading contributor to Flyte, the open-source, Kubernetes-native AI/ML orchestrator used by 3,500+ companies. He led the original engineering team behind Flyte, building it to power dynamic, large-scale, and fault-tolerant AI workflows. Today, Union builds on that foundation to help enterprises operationalize mission-critical AI systems with lower costs, faster iteration cycles, and production-grade reliability.
Prior to founding Union, Ketan held senior engineering leadership roles at Amazon, Oracle, and Lyft, where he worked on large-scale distributed systems and data platforms.
In his spare time, he enjoys spending time with his two daughters and exploring the outdoors.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
Agent demos are easy; durable production agents are not. While tools like Claude Code and OpenClaw simplify prototyping, teams still need to manage the code, context, tools, and infrastructure that make agents work in real environments. This talk breaks down the orchestration stack behind production agents: how to make them observable, debuggable, and durable, and how to design for recovery when failures happen across reasoning, tool use, networking, and execution. Drawing from real-world engineering experience, the session will outline practical patterns for building self-healing agent systems that can operate reliably in production.
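One recovery pattern the abstract alludes to, retrying a failed step from a checkpoint rather than restarting the whole workflow, can be sketched as below. This is a generic illustration under stated assumptions, not Union.ai's or Flyte's actual implementation; names like `run_step_with_recovery` are invented for the example.

```python
import time

def run_step_with_recovery(step_fn, state, max_retries=3, base_delay=0.1):
    """Run one agent step; on failure, retry from the checkpointed input state
    with exponential backoff instead of restarting the whole workflow."""
    checkpoint = dict(state)  # a real system would persist this durably
    for attempt in range(max_retries):
        try:
            return step_fn(dict(checkpoint))  # each attempt restarts from the checkpoint
        except Exception:
            time.sleep(base_delay * 2 ** attempt)  # back off, then retry
    raise RuntimeError("step failed after retries; surface to an operator")

# Usage: a flaky tool that fails twice with a transient error, then succeeds.
calls = {"n": 0}

def flaky_tool(state):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    state["result"] = "ok"
    return state

result = run_step_with_recovery(flaky_tool, {"task": "fetch"})
print(result)  # {'task': 'fetch', 'result': 'ok'}
```

The key design choice is that each retry starts from a copy of the checkpoint, so a half-completed attempt cannot corrupt the state the next attempt sees.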
WHAT YOU’LL LEARN:
Hanieh Arjmand
ABOUT THE SPEAKER:
Hannah Arjmand is a Lead AI Engineer with a Ph.D. in Biomedical Engineering from the University of Toronto. She leads the development of LLM systems in regulated industries, with a focus on post-training and evaluation. Her track record spans healthcare AI and enterprise insurance applications, and includes a filed patent in multimodal AI and peer-reviewed publications. Hannah is a regular presenter at applied AI conferences.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
Deploying large language models (LLMs) in regulated decision-support settings presents a unique evaluation challenge: models follow explicit, multi-step instructions that produce domain-specific outputs, not simple classifications. Standard benchmarks do not capture whether a model correctly applies prescribed reasoning steps, produces recommendations from task-specific taxonomies, or maintains consistency with established decision criteria. When transitioning between production models, evaluation must assess both correctness against ground truth and relative output quality, often with limited labeled data.
We propose a multi-stage evaluation framework developed during a production model transition from a dense Model A to Model B. Our system generates structured assessments across multiple task domains, where each decision task has distinct instruction sets, output formats, and recommendation categories.
The framework addresses three core problems.

First, instruction-aware label extraction. Since model outputs are free-text narratives rather than structured labels, we use a secondary LLM classifier with task-specific prompts derived from the same instructions given to the primary model, ensuring extracted labels align with the intended recommendation taxonomy. We show that naive mappings misrepresent model accuracy and that aligning extraction categories to prompt instructions improved measurement fidelity.

Second, complementary evaluation under label scarcity. We combine offline accuracy on expert-labeled data with pairwise LLM-as-judge comparisons on unlabeled production data, providing both absolute and relative quality signals.

Third, training data evolution during model transitions. Each model interprets instructions through its own learned style, structuring outputs differently, emphasizing different aspects of the prompt, and producing distinct narrative patterns even when given identical instructions. When the new model’s outputs are used to generate training data for future iterations, these stylistic differences propagate into the ground truth. Annotators reviewing outputs must recalibrate to the new model’s conventions, and labels created under Model A may not transfer cleanly to Model B. We found that switching models requires regenerating outputs for annotator review and updating training data to reflect the new model’s instruction-following behavior, rather than assuming compatibility with existing annotations.
We identify several limitations of LLM-as-judge evaluation that practitioners should account for. The judge exhibited verbosity bias, preferring longer, more detailed outputs regardless of correctness, which risks rewarding over-generation over precision. The judge also showed limited domain calibration: it could identify structural and stylistic differences between outputs but struggled to assess whether a specific recommendation was appropriate given the underlying data, a judgment that requires domain expertise the judge model lacks. Finally, the judge’s quality preferences did not always align with recommendation accuracy. In one task domain, the judge preferred Model B’s outputs 64.3% of the time, yet Model A had higher accuracy on the overall recommendation task, highlighting that perceived quality and decision correctness are distinct dimensions that require separate measurement.
Across several hundred labeled samples spanning multiple decision types, our framework revealed performance differences obscured by earlier approaches, including a failure mode where one model defaulted to a single prediction class on 96% of inputs for one task, visible only after correcting the label taxonomy. We discuss implications for practitioners evaluating LLMs in instruction-heavy, domain-specific production settings where ground truth is scarce and automated judges are imperfect proxies for expert assessment.
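The pairwise LLM-as-judge comparison described above can be sketched as follows. The `judge` stub below has a deliberate verbosity bias, mirroring the failure mode noted in the abstract; querying the judge in both presentation orders and counting only consistent votes is one common mitigation for position effects, not necessarily the authors' exact protocol.

```python
def judge(output_a: str, output_b: str) -> str:
    # Stub judge with a deliberate verbosity bias: it prefers the longer
    # output regardless of correctness, as the abstract warns real judges do.
    return "A" if len(output_a) >= len(output_b) else "B"

def pairwise_preference(model_a_out: str, model_b_out: str) -> str:
    """Query the judge in both presentation orders; count a win only when
    the two votes agree, otherwise record a tie."""
    first = judge(model_a_out, model_b_out)   # Model A shown in position A
    second = judge(model_b_out, model_a_out)  # orders swapped
    if first == "A" and second == "B":
        return "model_a"
    if first == "B" and second == "A":
        return "model_b"
    return "tie"  # inconsistent (position-driven) votes are discarded

print(pairwise_preference("a long, detailed narrative output", "terse"))  # model_a
print(pairwise_preference("same", "same"))                                # tie
```

Note that order-swapping only removes position bias; the verbosity bias baked into this stub survives it, which is why the abstract treats judge preference and recommendation accuracy as separate measurements.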
WHAT YOU’LL LEARN:
Deepkamal Gill
ABOUT THE SPEAKER:
Deepkamal Kaur Gill is a Senior Applied AI Scientist at Vanguard, where she builds production-grade LLM systems for high-stakes financial applications. Her work spans data generation, post-training, and evaluation, with a focus on building reliable, low-latency AI systems under real-world constraints.
Deepkamal holds a Master’s in Computer Science from the University of Toronto and is an active contributor to the AI community through research, mentorship, and initiatives supporting women in technology. At TMLS, she brings a practitioner’s perspective on what it truly takes to scale LLMs in production.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
While recent advances in LLMs emphasize improved model capabilities, many systems fail to scale in real-world production settings. Beyond a certain point, adding GPUs or data yields diminishing returns: training stops scaling efficiently, hardware remains underutilized, and inference latency is dominated by system constraints rather than compute. These failures are often silent, poorly documented, and difficult to diagnose in distributed environments.
In this talk, we share lessons from building enterprise-scale domain LLM systems, focusing on the system-level bottlenecks that limit scaling in practice. We examine failure modes across distributed training and inference—including communication overhead, pipeline imbalance, numerical instability during training as well as memory-bound decoding, KV cache growth, and throughput–latency tradeoffs at inference—and show how they manifest in production systems.
Rather than introducing new modeling techniques, this session presents a practical, symptom-driven approach to debugging: identifying failure patterns, tracing their root causes, and applying targeted mitigations. The key takeaway is that scaling LLMs is fundamentally a systems problem, and attendees will leave with a concrete framework to diagnose bottlenecks and make better design decisions when moving from prototype to production.
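One of the inference-side bottlenecks mentioned above, KV cache growth, can be made concrete with a back-of-envelope sizing formula: 2 tensors (K and V) per layer per KV head, times head dimension, times bytes per element, per token. The model shape below is illustrative (roughly a 7B-class dense transformer in fp16), not any specific system from the talk.

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Estimate KV cache size: the 2 accounts for storing both K and V."""
    per_token = 2 * layers * kv_heads * head_dim * dtype_bytes
    return per_token * seq_len * batch

# Illustrative shape: 32 layers, 32 KV heads, head_dim 128, fp16 (2 bytes).
gb = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128, seq_len=4096, batch=8) / 2**30
print(f"~{gb:.1f} GiB of KV cache")  # ~16.0 GiB: grows linearly with batch and context
```

A few lines of arithmetic like this often explain why decoding is memory-bound long before it is compute-bound, and why grouped-query attention (fewer KV heads) is such a common mitigation.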
WHAT YOU’LL LEARN:
Afsaneh Fazly
ABOUT THE SPEAKER:
Afsaneh Fazly holds a PhD in AI from the University of Toronto and brings over two decades of experience advancing intelligent systems across academia, industry, and startups. She currently serves as AI Research Director at RBC Borealis. Her work spans foundation models, language and multimodal intelligence, and applied machine learning, with a focus on translating research into real-world impact. She has contributed extensively to the AI research community through publications and patents, and has led and mentored large multidisciplinary teams of engineers, scientists, and researchers. Her career reflects a consistent ability to bridge scientific rigor with large-scale system design and deployment.
TALK TITLE:
TRACK:
SUB TOPIC:
ABSTRACT:
For the past two decades, enterprise software has largely followed the SaaS model: applications organize work through dashboards, forms, and APIs, while humans interpret information and coordinate tasks across multiple tools. Advances in large language models and agentic systems are beginning to change that structure. AI agents can now interpret user intent, retrieve context across systems, call tools, and execute multi-step workflows. As a result, software is gradually shifting from static interfaces toward systems that can actively perform work.
This shift does not mean SaaS disappears. Instead, applications increasingly become execution layers that agents interact with, while reasoning and workflow orchestration move into an agentic control layer above them. In legal workflows, for example, an agent can review large volumes of contracts, extract key clauses, compare them to internal policies, and surface potential risks for human review. In construction and engineering projects, agents can analyze plans, specifications, and contracts together to identify inconsistencies or obligations before they become costly issues in the field. The central question is therefore not whether AI replaces SaaS, but where durable advantage moves when software systems can reason over context and dynamically orchestrate work.
This talk explores how the emerging agentic stack is reshaping the software landscape and what it means for organizations building or deploying AI-driven products. It is intended for founders, product leaders, engineers, and executives who want to understand how AI agents are likely to transform software design, product strategy, and competitive advantage.
WHAT YOU’LL LEARN:
Several practical lessons emerged from the work that led to this talk.
First, organizations should start with workflows rather than models. The greatest value often comes from augmenting complex, multi-step processes rather than introducing isolated AI features.
Second, agentic systems are most effective when built on top of existing infrastructure. Rather than replacing current systems, AI can orchestrate tasks across them, allowing organizations to unlock value without rebuilding their entire stack.
Finally, a strong understanding of the science behind LLMs and agentic frameworks helps leaders make better architectural decisions. Understanding how models reason, retrieve context, and interact with tools makes it easier to design systems that are reliable, scalable, and aligned with real business needs.
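The second lesson, orchestrating across existing systems rather than replacing them, can be sketched as a minimal control loop over stubbed SaaS APIs. All tool names and return shapes here are hypothetical illustrations; in a full system, an LLM planner would emit the plan from a user request.

```python
def crm_lookup(customer_id):
    # Stub for an existing SaaS system of record (e.g., a CRM API).
    return {"id": customer_id, "tier": "enterprise"}

def billing_history(customer_id):
    # Stub for a second existing system; the agent orchestrates across both.
    return [{"invoice": 1, "paid": True}]

TOOLS = {"crm_lookup": crm_lookup, "billing_history": billing_history}

def run_plan(plan):
    """Execute a planned sequence of (tool, argument) calls, accumulating
    the retrieved context for the agent's next reasoning step."""
    context = {}
    for tool_name, arg in plan:
        context[tool_name] = TOOLS[tool_name](arg)
    return context

# A planner-produced sequence of calls across two existing systems:
context = run_plan([("crm_lookup", "c42"), ("billing_history", "c42")])
print(context["crm_lookup"]["tier"])  # enterprise
```

The point of the pattern is that the agentic layer adds reasoning and sequencing on top of the existing execution layers, without rebuilding either system of record.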
Tickets
Choose Your Pass
Join Canada’s summit for applied AI with ticket options for in-person and virtual attendance.
$647.18
incl. CA$23.73 Fee / incl. CA$74.45 Tax
Sales end on Jun 18, 2026
General Admission
Each ticket includes:
- Access to virtual sessions on June 16th
- Access to register for in-person workshops on June 19th (limited capacity, first come, first served)
- Access to in-person talks on June 17th & 18th
- Access to pre-summit & post-summit parties
- Access to event app Whova
- Access to post-summit in-person videos
$530.94
incl. CA$20.86 Fee / incl. CA$61.08 Tax
Sales end on Jun 18, 2026
Startup/Student/ Academic Admission
Each ticket includes:
- Access to virtual sessions on June 16th
- Access to register for in-person workshops on June 19th (limited capacity, first come, first served)
- Access to in-person talks on June 17th & 18th
- Access to pre-summit & post-summit parties
- Access to event app Whova
- Access to post-summit in-person videos
$589.27
Sales end on Jun 18, 2026
Group Tickets 3+ ppl
Each ticket includes:
- Access to virtual sessions on June 16th
- Access to register for in-person workshops on June 19th (limited capacity, first come, first served)
- Access to in-person talks on June 17th & 18th
- Access to pre-summit & post-summit parties
- Access to event app Whova
- Access to post-summit in-person videos
$200.30
incl. CA$20.86 Fee / incl. CA$61.08 Tax
Sales end on Jun 18, 2026
General Admission (Virtual / Live Stream)
Each ticket includes:
- Access to virtual workshops on June 16th
- Remote access to live streaming in-person talks June 17-18
- Access to ALL post-summit videos (3-4 weeks post event)
Partners
Backed by Teams Building the Future of Applied AI
TMLS is supported by organizations that build the tools, platforms, and infrastructure shaping applied AI in Canada.
Community Partner
Latest Articles
TRADITIONAL ML
The TMLS Agentic Hackathon: Five Days to Ship Something Real
TRADITIONAL ML
Your Prompt, Our Print: Design the Official TMLS 10th Anniversary T-Shirt
TRADITIONAL ML
The Biggest Constraint Facing the TMLS 2026 Committee, And What It Reveals About Evals Pt. 3
TRADITIONAL ML
The Biggest Constraint Facing the TMLS 2026 Committee, And What It Reveals About Evals Pt. 2
TRADITIONAL ML
Unspoken, Unmeasured, Undeniable: The Lived Experiences of Women in Data & AI and Our Hope for Designing a Better Future
TRADITIONAL ML
The Biggest Constraint Facing the TMLS 2026 Committee, and What It Reveals About Evals (Pt. 1.)
FAQ
Who should attend TMLS?
Where will TMLS 2026 be held?
Can I speak at TMLS?
Where should I stay for the event?