
Toronto Machine Learning Summit on NLP

September 12th to 13th, 2022 – 10:00 AM to 8:30 PM

A Uniquely Interactive Experience

Join us for the NLP community's annual gathering.

9 speakers will explore applications of NLP from both business and technical perspectives, plus 4 bonus hands-on virtual workshops.

The micro-summit includes:

  • 9 speakers (in-person) at RBC Waterpark Place
  • 4 workshops (virtual)
  • Access 6 hours of live-streamed content (incl. recordings)
  • Talks for beginners/intermediate & advanced
  • Case Studies, Executive Track – Business Alignment & Advanced Technical Research
  • Q+A with Speakers
  • Channels to share your work with community

 

Join this new initiative to help push the AI community forward.

We’re Hosting

Breakout Sessions
(All Levels)
Discussion Groups
Workshops
Virtual Platform

Chair

Suhas Pai

Chief Technology Officer, Bedrock AI

Speakers

Royal Sequiera

Research Scientist, LG Electronics Toronto AI Lab

Talk: Generalization Through Interactive Environments

Denys Linkov

ML Team Lead, Voiceflow

Talk: Three Courses of Real Time NLP

Ian Yu

Junior Machine Learning Engineer, Groupby Inc.

Talk: Our Quest to Build a Global Privacy Protection Standard

Co-Presenter: Hessie Jones

Hessie Jones

Venture Partner, MATR Ventures

Talk: Our Quest to Build a Global Privacy Protection Standard

Co-Presenter: Ian Yu

Suhas Pai

Chief Technology Officer, Bedrock AI

Talk: NLP in Finance

Co-Presenters: Dr. Ehsan Amjadian, Dr. Patricia Arocena

Dr. Patricia Arocena

Head of Innovation Labs, RBC

Talk: NLP in Finance

Co-Presenters: Dr. Ehsan Amjadian, Suhas Pai

Dr. Ehsan Amjadian

Head of Data Science, RBC

Talk: NLP in Finance

Co-Presenters: Dr. Patricia Arocena, Suhas Pai

Talk: An Efficient Deep Enterprise Search Engine on Private Cloud

Co-Presenter: Syed Salman Ali

Syed Salman Ali

Data Science Lead, RBC

Talk: An Efficient Deep Enterprise Search Engine on Private Cloud

Co-Presenter: Dr. Ehsan Amjadian

Jekaterina Novikova

Director of ML, Winterlight Labs

Talk: Interpretability and Robustness of Transformer Models in Healthcare

Annie En-Shiun Lee

Assistant Professor (Teaching Stream), Computer Science, University of Toronto

Talk: Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation?

Karthik Ramakrishnan

President & Co-Founder, Armilla AI

Talk: Quality Assurance of NLP Systems

Co-Presenter: Rahm Hafiz

Rahm Hafiz

Co-Founder, Armilla AI

Talk: Quality Assurance of NLP Systems

Co-Presenter: Karthik Ramakrishnan

Workshop Facilitators

Brendan M McKenna

ML Field Engineer, ContinualAI

Workshop: Operationalizing State of the Art Language Models

Annie En-Shiun Lee

Assistant Professor (Teaching Stream), Computer Science, University of Toronto

Workshop: Pre-Trained Multilingual Sequence-to-Sequence Models for NMT: Tips, Tricks and Challenges

Amanda Milberg

Data Scientist, Dataiku

Workshop: Natural Language Processing in Plain English

More to be announced

Platinum Sponsor
Gold Sponsors
Silver Sponsors
Bronze Sponsors
Community Partners
Royal Sequiera

Research Scientist, LG Electronics Toronto AI Lab

I am a Research Scientist at LG Electronics Toronto AI Lab. Previously I worked at Ada Support Inc. as a Sr. Research Scientist, and at Microsoft Research India as a Research Fellow. I obtained my Master's degree from the University of Waterloo.
My research interests include Question Answering, Information Retrieval, and Procedural Learning. Apart from research, I actively mentor students from India and East Africa. In 2018, I founded Sushiksha, a mentoring program that mentors hundreds of engineering and medical students in India. In my free time, I like to read, go woodworking, and learn new languages.

Talk: Generalization Through Interactive Environments

Abstract: Sequential decision-making tasks cannot be studied under i.i.d. assumptions, since predictions at the current time step affect the environment, in turn changing subsequent predictions. In Reinforcement Learning (RL), generalization is often studied via environment overfitting, where the learning algorithm itself becomes overspecialized to its environment. The research community has studied generalization in text-based games by creating environments aimed at training learning algorithms.
This talk will focus on why generalization is hard in RL, and how using language as a latent feature can help improve generalization abilities.

What You’ll Learn: In this talk, you will learn about sequential decision making tasks and the difficulty of benchmarking generalization in such settings

Denys Linkov

ML Team Lead, Voiceflow

Denys leads the machine learning team at Voiceflow, focused on building real-time NLP offerings. Prior to Voiceflow he was a Senior Cloud Architect at RBC, and he is actively involved in the Toronto ML community as a mentor and discussion group lead.

Talk: Three Courses of Real Time NLP

Abstract: When building a real-time collaboration tool, users don't want to wait for insights, so our ML models need to keep up! In this talk we'll cover how we built our real-time NLP systems at Voiceflow to support different user latency requirements as we build new features. These include models that take 10 ms, 100 ms, and 10 s+ to run, all served through one interface. Each of our users' data is different, and with 100,000 teams on Voiceflow, we needed to implement a variety of supervised and unsupervised techniques to handle real-time training. We'll wrap up with how we built our MLOps platform to enable fast iterations, A/B testing, and efficient monitoring.

What You’ll Learn:
– Building real time ML systems leveraging an event driven architecture
– Balancing performance with cost
– Rapidly iterating on existing models
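The tiered-latency idea in the abstract (models in roughly 10 ms, 100 ms, and 10 s+ classes, served through one interface) can be sketched as a dispatcher that routes each request to the most capable model whose tier fits the caller's latency budget. The model functions and tier values below are hypothetical stand-ins, not Voiceflow's actual API:

```python
# Hypothetical stand-ins for models with different latency profiles.
def keyword_model(text):        # ~10 ms class: cheap heuristic
    return {"intent": "greeting" if "hi" in text.lower() else "other"}

def embedding_model(text):      # ~100 ms class: vector lookup (stubbed)
    return {"intent": "question" if text.endswith("?") else "statement"}

def generative_model(text):     # ~10 s class: full generation (stubbed)
    return {"summary": text[:40]}

# One interface, multiple latency tiers (budget in milliseconds).
TIERS = [(10, keyword_model), (100, embedding_model), (10_000, generative_model)]

def predict(text, budget_ms):
    """Route to the most capable model whose tier fits the budget."""
    chosen = None
    for tier_ms, model in TIERS:
        if tier_ms <= budget_ms:
            chosen = model          # keep upgrading while budget allows
    return chosen(text) if chosen else keyword_model(text)

print(predict("hi there", budget_ms=10))        # fast path
print(predict("what is NLP?", budget_ms=150))   # embedding path
```

A real system would measure model latencies rather than hard-coding them, but the single-entry-point shape is the same.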

Ian Yu

Junior Machine Learning Engineer, Groupby Inc.

Ian Yu is a data-centric NLP practitioner and business-oriented consultant. He works as a Junior Machine Learning Engineer at Groupby Inc, an eCommerce solution startup, and Data Science Consultant at StraticaX, a boutique consulting firm. He was a contributor to the PII Detection effort for BLOOM of Big Science.

Talk: Our Quest to Build a Global Privacy Protection Standard
Co-Presenter: Hessie Jones

Abstract: Today we are plagued with our information strewn all over the web. Every time we create an account for applications, social media, or our favorite retail sites, we open up avenues to a never-ending footprint that bears witness to our intentions, our motivations, and our behaviors, all linked to our identities.
The advent of AI/ML opened up opportunities to contextualize the web, and every piece of data in an effort to surface understanding and insights in a more efficient manner to drive better decisions for businesses and governments.
The fallout of this opportunity has also seen the emergence of industries that have profited from the use of personal information: data brokers, advertising platforms, and emerging technology leveraging biometric, facial recognition, location and sensor data. When the biggest companies that collect, aggregate, and contextualize data fail to remedy or change the way they do things, more vulnerabilities are unleashed.
We developed a collective across industries and disciplines (legal, machine learning, business, policy, etc.) in an effort to develop the policy framework and the NLP specifications for detection, decisioning, and transformation.
The ultimate goal is to build a language-independent specification and a set of guidelines and recommendations that can be shared to underserved communities globally (based on their language specifications) so they too can help minimize risks of PII exposure.
This session is a story about our journey and how we’re attempting to tackle this issue, to build standards that can be applied globally.

What You’ll Learn: You’ll learn about data privacy, its increasing pervasiveness across industries, and the difficulty of managing it amidst legislation. You’ll also hear what this collective has done to develop rules for defining sensitive information and for effectively remediating data when it is reused for analysis, research, etc.

Hessie Jones

Venture Partner, MATR Ventures

Hessie Jones is a Privacy Technologist, Venture Partner, Strategist, Journalist, and Author. She currently works in venture capital and startup acceleration, with 20 years in start-up tech: data targeting, profile and behavioural analytics, AI, and more recently data privacy and security. Hessie is also a writer at Forbes and GritDaily, a former editorial associate at Towards Data Science, a co-founding member of MyData Canada, a member of the Women in AI Ethics Collective, a board member with Technology for Good Canada, and a technology mentor and start-up advisor.

Talk: Our Quest to Build a Global Privacy Protection Standard
Co-Presenter: Ian Yu


Suhas Pai

Chief Technology Officer, Bedrock AI

Suhas Pai is the co-founder and CTO of Bedrock AI, a Y Combinator-backed fintech startup from Toronto. He heads machine learning research and development at Bedrock AI, where he focuses on representation learning, text ranking, and semantic parsing. He is also the co-chair of the Privacy Working Group at Big Science, the group that built the world’s largest open-source multilingual language model. He leads the NLP group at Aggregate Intellect, where he conducts weekly NLP seminars covering recently published papers. Previously, he was a Senior Software Engineer in the field of information security at IBM in the Netherlands.

Talk: NLP in Finance
Co-Presenters: Dr. Ehsan Amjadian, Dr. Patricia Arocena

Abstract: Applying modern natural language methods and technologies in finance has its own unique challenges. Additionally, financial institutions and FinTech companies have a problem set that is sometimes shared with other domains and sometimes unique to the industry. In this panel you will gain valuable insights into how recent advances in AI, and specifically Natural Language Processing, are shaping the future of the financial industry. We will cover areas related to Privacy, Large Language Models, Compute Infrastructure, and Innovation within the Financial Services sector and FinTech.

What You’ll Learn: How recent advances in AI, and specifically Natural Language Processing, are shaping the future of the financial industry, covering Privacy, Large Language Models, Compute Infrastructure, and Innovation within the Financial Services sector and FinTech.

Dr. Patricia Arocena

Head of Innovation Labs, RBC

Patricia Arocena is a Research Director and Head of the Innovation Labs North America at Royal Bank of Canada (RBC), working within the Innovation and Technology organization. She is responsible for understanding emerging technologies in the Generative AI space and helping drive their adoption across the bank. Prior to joining RBC, Patricia held leadership innovation positions at Tier-1 research institutions in Canada, PWC, and other banks where she helped create Data and AI-powered solutions for the Financial Services industry. She earned her PhD in Computer Science and MEng in Computer Engineering from the University of Toronto and has been published in numerous scientific journals. Patricia lives in Toronto and is an avid gardener when there is no snow on the ground.

Talk: NLP in Finance
Co-Presenters: Dr. Ehsan Amjadian, Suhas Pai


Dr. Ehsan Amjadian

Head of Data Science, RBC

Dr. Ehsan Amjadian earned his Ph.D. in Deep Learning & Natural Language Processing from Carleton University, Canada. He is published in a variety of additional Artificial Intelligence and Computer Science domains including Recommender Engines, Information Extraction, Computer Vision, and Cybersecurity. Dr. Amjadian is currently the Head of Data Science at the Royal Bank of Canada (RBC), where he has led numerous advanced AI products from ideation to production and has filed multiple patents in the areas of Data Protection, Finance & Climate, and Computer Vision applications to Satellite Images.

Talk: An Efficient Deep Enterprise Search Engine on Private Cloud
Co-Presenter: Syed Salman Ali

Abstract: This talk will address building a high performance and deep search engine in two industrial settings. One where an existing search engine needs to be consulted and the other standalone. We will discuss how to achieve sub-second search in highly economical settings without model compression.
Similarity search using natural language queries finds application in a variety of enterprise settings. From finding relevant websites based on a phrase to finding stored images or videos, the importance of such a common need cannot be overstated. Beyond today’s massive amount of data to search through, a critical part of the challenge of finding the most relevant answers is that the algorithm must be intelligent enough to understand the intent behind a query, not dissimilar to the way humans perceive the semantics of text. The problem of building a high-quality search engine therefore effectively decomposes into two subproblems:
1) how to effectively capture the semantics of a textual query in a fixed-size continuous vector representation, and
2) how to efficiently locate relevant data among an abundance of datapoints. In this talk we walk you through a progressive set of solutions to these problems.
By the end of this talk, the audience will have a better understanding of how real-world search engines work in various industrial scenarios and what makes them capable of retrieving relevant data efficiently given natural language queries, despite ever-growing model sizes.

What You’ll Learn:
– How modern search engines work
– How to build an end-to-end deep search engine
– How to deploy deep search engines on low-compute or capped-compute environments
– How to deploy deep search engines on private cloud
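The two subproblems named in the abstract (embed text into a fixed-size vector, then find nearest neighbours) can be illustrated with a deliberately tiny bag-of-words embedder and cosine similarity. This is a sketch only: a production engine would use a trained neural encoder and an approximate-nearest-neighbour index, and the vocabulary below is made up for the example:

```python
import math
from collections import Counter

VOCAB = ["search", "engine", "image", "video", "query", "cloud"]

def embed(text):
    """Subproblem 1: map text to a fixed-size vector (toy bag-of-words)."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in VOCAB]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def search(query, documents):
    """Subproblem 2: rank documents by similarity to the query vector."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = ["video search engine", "cloud storage pricing", "image query service"]
print(search("search engine for video", docs)[0])  # most relevant document first
```

Swapping `embed` for a sentence encoder and the `sorted` scan for an ANN index is what turns this sketch into the sub-second systems the talk discusses.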

Talk: NLP in Finance
Co-Presenters: Dr. Patricia Arocena, Suhas Pai


Syed Salman Ali

Data Science Lead, RBC

Salman Ali is the Data Science Lead at the Royal Bank of Canada, leading various cutting-edge engineering initiatives. He received his MASc from the University of Regina in Electronic Systems Engineering. Throughout the past decade he has worked in a variety of machine learning engineering and natural language engineering roles. These include tasks in Neural Machine Translation, Information Retrieval, and various Natural Language Understanding subtasks. In addition to NLP and various fields in machine learning, he has in-depth expertise in Brain Computer Interface.

Talk: An Efficient Deep Enterprise Search Engine on Private Cloud
Co-Presenter: Dr. Ehsan Amjadian


Jekaterina Novikova

Director of ML, Winterlight Labs

Jekaterina Novikova is a researcher with an established international profile at the intersection of Language Technology and Machine Learning, for interdisciplinary applications including Healthcare, Natural Language Generation, Spoken Dialogue Systems, and Human-Robot Interaction. As Director of Machine Learning at Winterlight Labs, she leads the company’s research efforts and manages a team of research scientists and ML engineers. Jekaterina has been recognized with the “Industry Icon” and “30 Influential Women Advancing AI in Canada” awards, and her work has received best-paper nominations at multiple conferences.

Talk: Interpretability and Robustness of Transformer Models in Healthcare

Abstract: Understanding robustness of BERT models when they are used in healthcare settings is important for both developing better models and for understanding their capabilities and limitations. In this talk, I will speak about the robustness and sensitivity of BERT models predicting Alzheimer’s disease from text. I will also show how behavioural tests can be used to improve interpretability and generalizability of BERT models detecting depression.

What You’ll Learn: You will learn about the importance of model interpretability in healthcare settings and how to evaluate it in order to improve generalizability and robustness.

Annie En-Shiun Lee

Assistant Professor (Teaching Stream), Computer Science, University of Toronto

Annie En-Shiun Lee is an Assistant Professor (Teaching Stream) in the Computer Science Department at the University of Toronto. She received her PhD from the University of Waterloo in 2014 under the supervision of Professors Andrew K. C. Wong and Daniel Stashuk from the Centre for Pattern Analysis and Machine Intelligence. She has also been a visiting researcher at the Fields Institute (invited by Nancy Reid) and CUHK (invited by K. S. Leung and M. H. Wong), as well as a research scientist at VerticalScope and Stradigi AI.

Talk: Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation?

Abstract: What can pre-trained multilingual sequence-to-sequence models like mBART contribute to translating low-resource languages? We conduct a thorough empirical experiment in 10 languages to ascertain this, considering five factors:
(1) the amount of fine-tuning data, (2) the noise in the fine-tuning data,
(3) the amount of pre-training data in the model,
(4) the impact of domain mismatch, and
(5) language typology. In addition to yielding several heuristics, the experiments form a framework for evaluating the data sensitivities of machine translation systems. While mBART is robust to domain differences, its translations for unseen and typologically distant languages remain below 3.0 BLEU. In answer to our title’s question, mBART is not a low-resource panacea; we therefore encourage shifting the emphasis from new models to new data.

What You’ll Learn: What can pre-trained multilingual sequence-to-sequence models like mBART contribute to translating low-resource languages? We try to answer this through empirical experiments on 10 different languages.
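The BLEU figures quoted in the abstract can be made concrete with a simplified sentence-level BLEU: the geometric mean of clipped n-gram precisions up to order 4, times a brevity penalty. This is a teaching sketch, not the metric used in the paper; real evaluations use tooling such as SacreBLEU with smoothing:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty. Returns 0 if any precision is 0."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c[g], r[g]) for g in c)   # clipped matches
        total = max(sum(c.values()), 1)
        if overlap == 0:
            return 0.0
        log_precisions.append(math.log(overlap / total))
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(log_precisions) / max_n)

print(round(100 * bleu("the cat sat on the mat", "the cat sat on the mat"), 1))  # 100.0
print(round(100 * bleu("the cat sat on a mat", "the cat sat on the mat"), 1))
```

On a 0–100 scale, the "below 3.0 BLEU" result in the abstract means translations for unseen, typologically distant languages share almost no n-grams with the references.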

Workshop: Pre-Trained Multilingual Sequence-to-Sequence Models for NMT: Tips, Tricks and Challenges

Abstract: Neural Machine Translation (NMT) has seen a tremendous spurt of growth in less than ten years, and has already entered a mature phase. Pre-trained multilingual sequence-to-sequence (PMSS) models, such as mBART and mT5, are pre-trained on large general data, then fine-tuned to deliver impressive results for natural language inference, question answering, text simplification and neural machine translation. This tutorial presents
1) An Introduction to Sequence-to-Sequence Pre-trained Models,
2) How to adapt pre-trained models for NMT,
3) Tips and Tricks for NMT training and evaluation,
4) Challenges/Problems faced when using these models. This tutorial will be useful for those interested in NMT, from a research as well as industry point of view.

What You’ll Learn: This tutorial will give an overview of Pre-trained Sequence-to-Sequence Multilingual Models, tips, tricks and frameworks that can be used to adapt these models for NMT, the challenges faced while using these models and how to overcome them.

Karthik Ramakrishnan

President & Co-Founder, Armilla AI

Karthik is the President and Co-Founder of Armilla AI, an automated testing and quality assurance platform for ML systems. Previously he was an executive at Element AI and Deloitte Canada, and he is a serial entrepreneur.

Talk: Quality Assurance of NLP Systems
Co-Presenter: Rahm Hafiz

Abstract: Current testing and validation practices for evaluating the robustness of NLP systems rely primarily on traditional performance metrics. While important, these techniques reflect a narrow interpretation of “robustness”, glossing over critical issues such as fairness, explainability, or business requirements, which are typically use-case specific. Real-world performance often shows large degradation compared to internal testing, and models ship without fully considering bias or fairness issues.
Accordingly, this talk helps bridge the gap between current testing practices and new approaches.
Specifically, we explore applying a battery of automated bias, fairness, explainability, performance and data quality tests to a wide range of unstructured use cases, including large transformer-based NLP models, to identify unexpected scenarios, adversarial examples and edge cases where distinct areas of a model are performing poorly.

What You’ll Learn: Best practices to ensure the robustness and quality of your NLP systems and models.

Rahm Hafiz

Co-Founder, Armilla AI

Talk: Quality Assurance of NLP Systems
Co-Presenter: Karthik Ramakrishnan


Brendan M McKenna

ML Field Engineer, ContinualAI

Brendan is an ML Field Engineer at Continual.ai, the operational AI platform for the modern data stack. Brendan has been helping customers across industries architect and implement ML solutions for half a decade. Before Continual, he was a Solutions Engineer at Cloudera and Oracle. He was also on the founding team at the autonomous toy company Bots Alive, which was acquired by Dash Robotics in 2017.

Workshop: Operationalizing State of the Art Language Models

Abstract: The remarkable efficiency and accuracy improvements achieved by Transformer models are an impressive leap forward. Researchers and practitioners are racing to discover new applications across different industries. Pre-trained models released for public use by organizations such as OpenAI, Google, Meta, and others represent untapped potential for many organizations. Yet massive operational barriers stand in the way of data science teams as they attempt to capitalize on the latest and greatest in NLP.
In this talk, we’ll describe the operational barriers hindering teams from taking state of the art models from the lab to production and how modern operational AI platforms allow businesses to take advantage of the exploding NLP ecosystem. We’ll also roll up our sleeves and step through an example of using BERT or GPT-2 to classify customer complaints into product categories.

What You’ll Learn: How to operationalize pre-trained transformer models for text classification
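The workshop's "pre-trained model plus task head" pattern can be sketched without downloading a checkpoint. Below, a frozen stand-in encoder (a deterministic pseudo-embedding, emphatically not BERT) feeds a small trainable logistic-regression head; the complaint texts and categories are invented for the example, and in practice the encoder would be a Hugging Face model's pooled output:

```python
import hashlib
import numpy as np

def frozen_encoder(text, dim=16):
    """Stand-in for a pretrained encoder's pooled output: a deterministic
    pseudo-embedding derived from a hash of the text."""
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).normal(size=dim)

# Tiny labelled set: complaint text -> product category (0=card, 1=mortgage).
texts = ["card was charged twice", "card declined abroad",
         "mortgage rate went up", "mortgage payment misapplied"]
labels = np.array([0.0, 0.0, 1.0, 1.0])

X = np.stack([frozen_encoder(t) for t in texts])

# Trainable classification head: logistic regression via gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid
    grad = p - labels
    w -= 0.1 * X.T @ grad / len(texts)
    b -= 0.1 * grad.mean()

def classify(text):
    p = 1 / (1 + np.exp(-(frozen_encoder(text) @ w + b)))
    return "mortgage" if p > 0.5 else "card"

print(classify("card was charged twice"))
```

The operational barriers the talk covers appear exactly where this sketch cheats: serving the real encoder, versioning it, and retraining the head as data drifts.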

Amanda Milberg

Data Scientist, Dataiku

Amanda is a Data Scientist at Dataiku with a strong interest in NLP and AI / Machine Learning business solutions. She has previous academic and professional experience with Java, Python, C, Neo4j, SQL, HTML / CSS, Dash, and JavaScript and a bachelor’s degree in Computer Science and Mathematics from Colgate University. Amanda has a proven track record of assisting large institutions in business transformation efforts in the advanced analytics space and an innate ability to explain deep technical concepts to a broad audience. This enables both business and technical individuals to digest and understand complex topics.

Workshop: Natural Language Processing in Plain English

Abstract: This session will provide an overview of natural language processing in plain English. We will cover useful text pre-processing techniques, as well as common difficulties a machine faces when attempting to transcribe and interpret human language. We will then highlight the advanced NLP techniques developed in response to these challenges, as well as cutting-edge technologies in the field such as Word2Vec and BERT. Finally, we will highlight common NLP use cases across industries and where there are opportunities to add textual analysis to your organization.

What You’ll Learn: The objective of the session is to educate users on how to harness insights from unstructured data and how this untapped data can enhance business processes and automate decision making.
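The pre-processing techniques the workshop mentions (lowercasing, tokenization, stop-word removal) fit in a few lines of plain Python; the stop-word list here is a toy subset, and libraries such as NLTK or spaCy provide production-grade versions:

```python
import re

# Toy stop-word list for illustration; real lists are much longer.
STOP_WORDS = {"the", "a", "an", "is", "are", "and", "or", "to", "of"}

def preprocess(text):
    """Minimal NLP pre-processing: lowercase, tokenize on word
    characters, then drop stop words."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("The cat is sitting on the mat."))
# ['cat', 'sitting', 'on', 'mat']
```

Steps like these are typically the first stage before the Word2Vec- or BERT-style representations the session goes on to cover.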

Sign Up for TMLS 2023 News Updates


2023 Technical Background

Expert
19.2%
Advanced
49.8%
Intermediate
24.1%
Beginner
6.9%


Business Leaders: C-Level Executives, Project Managers, and Product Owners will get to explore best practices, methodologies, principles, and practices for achieving ROI.

Engineers, Researchers, Data Practitioners: Will get a better understanding of the challenges, solutions, and ideas being offered via breakouts & workshops on Natural Language Processing, Neural Nets, Reinforcement Learning, Generative Adversarial Networks (GANs), Evolution Strategies, AutoML, and more.

Job Seekers: Will have the opportunity to network virtually and meet 30+ top AI companies.

What is an Ignite Talk?

Ignite is an innovative and fast-paced style used to deliver a concise presentation.

During an Ignite Talk, presenters discuss their research using 20 image-centric slides which automatically advance every 15 seconds.

The result is a fun and engaging five-minute presentation.

You can see all our speakers and full agenda here

Get our official conference app
For feature details, visit Whova