Join us for the Finance & Insurance Machine Learning Community’s Annual Gathering
8 speakers will explore applications of Machine Learning from both business and technical perspectives.
Attendees will have opportunities to meet both academic researchers and industry practitioners active in the financial sector and to gain new perspectives from each other’s work.
The Micro-Summit includes:
Join this new initiative to help push the AI community forward.
Co-Founder and CEO, SR.ai
CTO, Lydia.ai
Head of I3 Investments, Guardian Capital LP
Talk: Eye of the Tiger: The Transformative Impact of AI on Investment Management
Senior Portfolio Manager and Engineer, Guardian Capital LP
Talk: Eye of the Tiger: The Transformative Impact of AI on Investment Management
Co-Presenter: Srikanth Iyer
Co-Founder and CEO, Superwise
Talk: The Uncommon Monitoring Challenges Unique to Fintech
Head of Global Solutions Architecture, Arize AI
Talk: Best Practices in ML Observability for Fraud
Machine Learning Researcher, Lydia.ai
Talk: What We Have Learned From Predicting Health Status Using Wearables?
Co-Presenter: Elham Karami
Senior Data Scientist, Lydia.ai
Talk: What We Have Learned From Predicting Health Status Using Wearables?
Co-Presenter: Hanieh Arjmand
Applied Data Scientist, Lydia.ai
Talk: How to Create Systematically Sampled Datasets that Solve Business Problems
Senior Data Scientist, Lydia.ai
Talk: How to Create Systematically Sampled Datasets that Solve Business Problems
Co-Presenter: Spark Tseung
Senior Data Scientist, Scotiabank
Talk: Feature Selection Using Causal Inference for Predicting Macroeconomic Factors
Co-Presenter: Baris Kaya
Director, Data Science – Lead Data Scientist, Global Banking & Markets, Scotiabank
Talk: Feature Selection Using Causal Inference for Predicting Macroeconomic Factors
Co-Presenter: Nima Safaei
Evangelist, ClearML
Talk: The Importance of Experiment Tracking and Data Traceability
Director of Product Management, Munich Re
Talk: Privacy Law 101 for ML in Insurance
Co-Presenter: Margaret Pak
Partner, Walker Sorensen LLP
Talk: Privacy Law 101 for ML in Insurance
Co-Presenter: Michael Maunder
Global Head, Customer Sustainability / ESG, Rockwell Automation
Talk: The As-yet Untapped Multi-Trillion Dollar Opportunity: How AI/ML can help Secure the Future of the Human Race
Global Solutions Director, Financial Services & Insurance, Dataiku
Workshop: GLM for Insurance Claims in an Agile Analytics Platform
Co-Presenter: David Behar
Senior Data Scientist, Dataiku
Workshop: GLM for Insurance Claims in an Agile Analytics Platform
Co-Presenter: John McCambridge
Associate Applied Machine Learning Specialist, Vector Institute
Workshop: Deep Learning for Time Series Forecasting with Applications in Finance
Co-Presenter: Yan Zhang
Lead Data Scientist, BMO
Workshop: Deep Learning for Time Series Forecasting with Applications in Finance
Co-Presenter: John Jewell
Insurance Technology Analyst and Consultant, Celent
Workshop: ML Trends, Opportunities, and Challenges in Insurance
Lead Data Scientist, SR.ai
Paul has a background in Physics Engineering and Finance. He spent 3.5 years with Deloitte Canada’s Omnia AI, where he led the NLP practice and built Omnia AI’s NLP Accelerator. He is now the Lead Data Scientist at SR.ai, a startup using NLP to build tools for investors to analyse companies’ ESG impact.
Talk: Integrating NLP in Finance
Abstract: While finance relies heavily on structured numerical data, many finance practitioners still spend a significant portion of their time dealing with unstructured text data. Natural Language Processing (NLP), a subfield of Artificial Intelligence that focuses on human language, can help automate these processes and even unlock new sources of data.
You may be wondering how to integrate NLP into your own processes.
This presentation will show you the different steps, elements and skill sets required to build and use NLP models for real applications.
What You’ll Learn: This presentation will show you the different steps, elements and skill sets required to build and use NLP models for real applications.
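For readers who want a concrete starting point, here is a minimal text-classification sketch in Python using scikit-learn. It is illustrative only: the texts, labels, and ESG-relevance task are hypothetical placeholders, not the approach presented in this talk.

```python
# Illustrative only: a minimal text-classification pipeline, not the talk's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy corpus: snippets of company disclosures with hypothetical ESG-relevance labels.
texts = [
    "We reduced scope 1 emissions by 12% year over year.",
    "Quarterly revenue grew 8% driven by strong demand.",
    "The board adopted a new supplier human-rights policy.",
    "Operating margin expanded by 150 basis points.",
]
labels = [1, 0, 1, 0]  # 1 = ESG-relevant, 0 = not (hypothetical)

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

print(model.predict(["The company published its first climate transition plan."]))
```

In a real deployment this bag-of-words baseline would typically be replaced or complemented by domain-specific language models, which is part of the skill-set discussion the talk covers.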
Professor, University of Toronto
Talk: Math, Technology and Social Science
Abstract: The 21st century will see the merger of science and the humanities, with technology as the catalyst. This talk reviews recent developments in this area, where the environment, math, finance, economics, and regulation are all contributing to innovation.
What You’ll Learn: You will learn how technology and math are working together with social science and regulation.
Head of I3 Investments, Guardian Capital LP
Srikanth (Sri) Iyer is Lead Portfolio Manager and Managing Director, and Head of i3 Investments™ for Guardian Capital LP (GCLP). He joined GCLP in 2001 to lead the development and implementation of GCLP’s proprietary systematic strategies. This subsequently led to the creation of the i3 Investments™ team, which today manages a set of Global, International, US and Canadian-based solutions.
These solutions employ a differentiated process that combines relative and intrinsic approaches, data science, and artificial intelligence across long-only and alternative strategies. As the primary lead of the team, Sri applies his 25-plus years of experience managing quantitative investments and risk management to guide the overall development and implementation of systematic strategies for the firm.
Talk: Eye of the Tiger: The Transformative Impact of AI on Investment Management
Co-Presenter: Adam Cilio
Abstract: In investment management, early price discovery is the cornerstone of alpha, and the analytical techniques made available through Machine Learning (ML) and Artificial Intelligence (AI) have revolutionized this process. However, despite an ever-increasing amount of readily available structured and unstructured data, finance and investment management remains a domain fraught with noise. As a result, the application of ML to investment management has its own set of challenges, different from those in other areas. The solution is not always a more precise model, or simply more data. What is needed is the real-world experience of domain experts, data scientists, and engineers. It is the synergy between AI and Human Intelligence (HI) that can set the ground rules for the future of investing.
Senior Portfolio Manager and Engineer, Guardian Capital LP
Talk: Eye of the Tiger: The Transformative Impact of AI on Investment Management
Co-Presenter: Srikanth Iyer
Abstract: In investment management, early price discovery is the cornerstone of alpha, and the analytical techniques made available through Machine Learning (ML) and Artificial Intelligence (AI) have revolutionized this process. However, despite an ever-increasing amount of readily available structured and unstructured data, finance and investment management remains a domain fraught with noise. As a result, the application of ML to investment management has its own set of challenges, different from those in other areas. The solution is not always a more precise model, or simply more data. What is needed is the real-world experience of domain experts, data scientists, and engineers. It is the synergy between AI and Human Intelligence (HI) that can set the ground rules for the future of investing.
Co-Founder and CEO, Superwise
Oren is the co-founder and CEO of Superwise, the leading platform for model observability. With over 15 years of experience leading the development, deployment, and scaling of ML products, Oren is an expert ML practitioner specializing in MLOps tools and practices. Previously, Oren managed machine learning activities at Intel’s ML center and operated a machine learning boutique consulting agency helping leading tech companies such as Sisense, Gong, AT&T, and others, to build their machine learning-based products and infrastructure.
Talk: The Uncommon Monitoring Challenges Unique to Fintech
Abstract: Fintech is one of the fastest industries to adopt ML and achieve high-scale operational AI. This fintech-ML boom also brings some uncommon monitoring challenges unique to the industry. In this session, I’ll walk you through the challenges of unbalanced use cases, adversarial attacks, and delayed feedback for fintech companies and share our best practices on how to overcome them.
What You’ll Learn:
The challenges of monitoring for:
– Unbalanced use cases
– Adversarial attacks
– Delayed feedback
– FinTech use cases and the strategies to put in place to safeguard ML in production
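As one concrete example of a proxy signal when feedback is delayed, the sketch below computes a Population Stability Index (PSI) between training and production score distributions. This is a generic, illustrative snippet with synthetic data and an assumed 0.2 review threshold; it is not Superwise’s methodology.

```python
# Illustrative only: PSI as a common drift proxy when labels arrive late.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a production score distribution against a reference distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch out-of-range scores
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, 50_000)              # reference (training) scores
prod_scores = rng.beta(2.5, 4.5, 10_000)           # this week's production scores
print(f"PSI = {psi(train_scores, prod_scores):.3f}  (rule of thumb: > 0.2 warrants review)")
```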
Head of Global Solutions Architecture, Arize AI
Gabe Barcelos is a founding engineer at Arize AI, a machine learning observability company, specializing in ML frameworks and data systems. Prior to Arize, he led foundational data pipeline initiatives and an industry-recognized customer service team at Adobe, TubeMogul, and Saildrone. From autonomous research drones to digital advertising bidding and analytics systems, Gabe strives to infuse a data-driven focus and customer-centric mindset into programs he oversees. In his free time, you’ll find Gabe cooking, skiing, or exploring new trails with his wife and dog (who’s a very good girl). He holds a bachelor’s degree in chemical engineering from UC Berkeley.
Talk: Best Practices in ML Observability for Fraud
What You’ll Learn: You’ll learn effective ways to measure model performance in your fraud model and how to use proxy metrics, such as model drift, in the event of delayed actuals. You’ll also learn how to use performance tracing techniques to identify areas where your model is underperforming and how to actively improve your model. Lastly, you’ll learn how to use key explainability metrics to help root cause problem areas with an increased understanding of prediction impact for each feature.
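A minimal illustration of the performance-tracing idea mentioned above: slicing fraud-model predictions by segment to find where recall degrades. Column names and segments are hypothetical, and this is a generic pandas/scikit-learn sketch rather than Arize’s tooling.

```python
# Illustrative only: tracing model performance across segments to find weak spots.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

# Hypothetical scored transactions joined with (possibly delayed) fraud labels.
df = pd.DataFrame({
    "segment": ["card_present", "card_present", "online", "online", "online", "wire"],
    "y_true":  [0, 1, 1, 0, 1, 0],
    "y_pred":  [0, 1, 0, 0, 1, 1],
})

report = (
    df.groupby("segment")
      .apply(lambda g: pd.Series({
          "n": len(g),
          "precision": precision_score(g.y_true, g.y_pred, zero_division=0),
          "recall":    recall_score(g.y_true, g.y_pred, zero_division=0),
      }))
)
print(report)  # low-recall segments are candidates for targeted retraining
```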
Machine Learning Researcher, Lydia.ai
Hanieh Arjmand is a Machine Learning Researcher at Lydia.ai where she focuses on discovering and applying the best machine learning techniques to healthcare and insurance problems to help insurers use machine learning to protect more people.
Talk: What We Have Learned From Predicting Health Status Using Wearables?
Co-Presenter: Elham Karami
Abstract: Covid-19 has accelerated the adoption and integration of wearable data into hospitals’ patient care programs in order to monitor health conditions and disease progression/prognosis. Many insurance companies have followed this trend and are using wearable-based health tracking for risk analysis and incentive programs. More importantly, using wearables in insurance could lead to a more customer-centric model with continuous risk assessment and personalized disease prevention programs, and allow insurers to expand insurability for customers with pre-existing conditions who would otherwise be declined.
Although utilizing wearable data in insurance is promising, data obtained from wearable devices usually lacks standardization (e.g., data types, units, calibration, accuracy), and significant effort is needed for standardized measurement and data transformation, as well as for model validation. In this talk, we will go through our experience of overcoming some of these challenges.
What You’ll Learn: In this talk, you will be introduced to the challenges we have faced when using wearable data to predict health status, and you will learn about our solutions to some of these challenges.
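To make the standardization challenge concrete, here is a small, purely illustrative pandas sketch that harmonizes two hypothetical wearable feeds (different units and sampling rates) onto a common daily grain; it does not reflect Lydia.ai’s actual pipeline.

```python
# Illustrative only: harmonizing wearable feeds from different devices.
import pandas as pd

# Device A reports distance in metres every 15 minutes; device B in kilometres hourly.
device_a = pd.DataFrame({
    "ts": pd.date_range("2023-01-01", periods=4, freq="15min"),
    "distance_m": [400, 350, 0, 520],
})
device_b = pd.DataFrame({
    "ts": pd.date_range("2023-01-01", periods=2, freq="60min"),
    "distance_km": [1.2, 0.8],
})

# Convert both sources to kilometres, then resample to a shared daily grain.
a_daily = device_a.set_index("ts")["distance_m"].div(1000).resample("D").sum()
b_daily = device_b.set_index("ts")["distance_km"].resample("D").sum()

daily = pd.concat({"device_a": a_daily, "device_b": b_daily}, axis=1)
print(daily)  # one standardized row per day, ready for feature engineering
```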
Senior Data Scientist, Lydia.ai
Talk: What We Have Learned From Predicting Health Status Using Wearables?
Co-Presenter: Hanieh Arjmand
Abstract: Covid-19 has accelerated the adoption and integration of wearable data into hospitals’ patient care programs in order to monitor health conditions and disease progression/prognosis. Many insurance companies have followed this trend and are using wearable-based health tracking for risk analysis and incentive programs. More importantly, using wearables in insurance could lead to a more customer-centric model with continuous risk assessment and personalized disease prevention programs, and allow insurers to expand insurability for customers with pre-existing conditions who would otherwise be declined.
Although utilizing wearable data in insurance is promising, data obtained from wearable devices usually lacks standardization (e.g., data types, units, calibration, accuracy), and significant effort is needed for standardized measurement and data transformation, as well as for model validation. In this talk, we will go through our experience of overcoming some of these challenges.
What You’ll Learn: In this talk, you will be introduced to the challenges we have faced when using wearable data to predict health status, and you will learn about our solutions to some of these challenges.
Applied Data Scientist, Lydia.ai
Spark Tseung is an Applied Data Scientist at Lydia.ai where he focuses on building frameworks for actuarial and underwriting validation to help insurers use machine learning to protect more people.
Spark is working towards his PhD in Statistics and specializes in the application of machine learning methods in Property & Casualty loss modelling and risk selection.
Spark is a Fellow of the Society of Actuaries and Chartered Enterprise Risk Analyst.
Talk: How to Create Systematically Sampled Datasets that Solve Business Problems
Co-Presenter: Hadi Moghadas
Abstract: With the increasing size, variety, and availability of data, data scientists have more opportunities than ever to leverage data-hungry models for solving real problems. These opportunities come with the challenge of building datasets that are reliable, transparent, and reproducible out of messy and variable data. In addition, datasets need to be relevant to the business problems at hand while remaining flexible enough to adapt to other use cases. In this talk, we will provide a case study of designing, implementing, testing, and utilizing a systematic process for sampling from large electronic health record databases, with application to insurance risk scoring. We will discuss how business logic, technical requirements, and end users’ needs are all incorporated in this process, and how our lessons learned can be extended to other machine learning problems.
What You’ll Learn: We will provide a case study of designing, implementing, testing, and utilizing a systematic process for sampling from large electronic health record databases, with application to insurance risk scoring. We will discuss how business logic, technical requirements, and end users’ needs are all incorporated in this process, and how our lessons learned can be extended to other machine learning problems.
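As a purely illustrative sketch of reproducible, stratified sampling of the kind the abstract describes, the snippet below draws a fixed fraction from each stratum with a fixed seed; the table, columns, and strata are hypothetical and not Lydia.ai’s schema.

```python
# Illustrative only: a reproducible, stratified sampling step.
import pandas as pd

def build_sample(records: pd.DataFrame, strata: list[str], frac: float, seed: int = 42) -> pd.DataFrame:
    """Draw the same fraction from every stratum so rare groups stay represented,
    with a fixed seed so the dataset can be rebuilt exactly."""
    return (
        records.groupby(strata, group_keys=False)
               .sample(frac=frac, random_state=seed)
               .reset_index(drop=True)
    )

# Hypothetical EHR-style table.
ehr = pd.DataFrame({
    "age_band":  ["18-34", "18-34", "35-54", "35-54", "55+", "55+"] * 100,
    "condition": ["none", "diabetes", "none", "cardiac", "none", "diabetes"] * 100,
    "risk_score": range(600),
})
sample = build_sample(ehr, strata=["age_band", "condition"], frac=0.1)
print(sample.groupby(["age_band", "condition"]).size())
```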
Senior Data Scientist, Lydia.ai
Hadi is a senior data scientist at Lydia.ai where he focuses on the discovery and application of AI solutions to the complex challenges of healthcare and insurance. He has extensive expertise in medical engineering, electrical engineering and computer science. Prior to joining Lydia.ai, Hadi was a postdoctoral research fellow at the University of Toronto’s Sunnybrook Hospital. His research focused on personalized medicine based on forecasting treatment outcomes. Before moving to Canada, Hadi was a university professor and co-founder of a successful tech startup in Iran. Hadi holds a BSc in Medical Engineering, an MSc and a Ph.D. in Electrical Engineering. Currently, he is a senior member of the IEEE and mentors young researchers and engineers.
Talk: How to Create Systematically Sampled Datasets that Solve Business Problems
Co-Presenter: Spark Tseung
Abstract: With the increasing size, variety, and availability of data, data scientists have more opportunities than ever to leverage data-hungry models for solving real problems. These opportunities come with the challenge of building datasets that are reliable, transparent, and reproducible out of messy and variable data. In addition, datasets need to be relevant to the business problems at hand while remaining flexible enough to adapt to other use cases. In this talk, we will provide a case study of designing, implementing, testing, and utilizing a systematic process for sampling from large electronic health record databases, with application to insurance risk scoring. We will discuss how business logic, technical requirements, and end users’ needs are all incorporated in this process, and how our lessons learned can be extended to other machine learning problems.
What You’ll Learn: We will provide a case study of designing, implementing, testing, and utilizing a systematic process for sampling from large electronic health record databases, with application to insurance risk scoring. We will discuss how business logic, technical requirements, and end users’ needs are all incorporated in this process, and how our lessons learned can be extended to other machine learning problems.
Senior Data Scientist, Scotiabank
Nima has a Ph.D. in systems and industrial engineering with a background in applied mathematics. He held a postdoctoral position at the C-MORE Lab (Center for Maintenance Optimization & Reliability Engineering) at the University of Toronto, Canada, working on machine learning and Operations Research (ML/OR) projects in collaboration with various industry and service sectors. He was with the Department of Maintenance Support and Planning at Bombardier Aerospace, with a focus on ML/OR methods for reliability/survival analysis, maintenance, and airline operations optimization. Nima is currently a senior data scientist with the Data Science & Analytics (DSA) lab at Scotiabank in Toronto, Canada. He has more than 40 peer-reviewed articles and book chapters published in top-tier journals, as well as one published patent. He has also been invited to present his findings at top ML conferences such as GRAPH+AI 2020, NVIDIA GTC 2020/2021, ICML 2021, and TMLS 2021.
Talk: Feature Selection Using Causal Inference for Predicting Macroeconomic Factors
Co-Presenter: Baris Kaya
Abstract: Forecasting macroeconomic factors, e.g., inflation and the unemployment rate, is fundamental to monetary policy. In practice, however, these factors are affected by many exogenous variables, and therefore forecasting such time series faces competing goals: accuracy and theoretical consistency. In such a situation, feature selection becomes an indispensable step when a machine learning model is employed. As per the Law of Parsimony, or Occam’s Razor, the best explanation to a problem is the one that involves the fewest possible features. Theoretical consistency is conceptually tied to explainability, which is assessed through tests of prediction sensitivity to the scope of the features. Causal Inference (CI) is a vital tool for producing insightful explainability. In this work, a CI method is applied for feature selection to forecast the employment rate using a classical supervised model. The impact of the CI method on the performance of the forecasting model is described using statistical metrics. In addition, the effect of CI-based feature selection on the explainability of the model will be discussed.
What You’ll Learn: The close relationship between Causal Inference and Explainability
Director, Data Science – Lead Data Scientist, Global Banking & Markets, Scotiabank
Talk: Feature Selection Using Causal Inference for Predicting Macroeconomic Factors
Co-Presenter: Nima Safaei
Abstract: Forecasting macroeconomic factors, e.g., inflation and the unemployment rate, is fundamental to monetary policy. In practice, however, these factors are affected by many exogenous variables, and therefore forecasting such time series faces competing goals: accuracy and theoretical consistency. In such a situation, feature selection becomes an indispensable step when a machine learning model is employed. As per the Law of Parsimony, or Occam’s Razor, the best explanation to a problem is the one that involves the fewest possible features. Theoretical consistency is conceptually tied to explainability, which is assessed through tests of prediction sensitivity to the scope of the features. Causal Inference (CI) is a vital tool for producing insightful explainability. In this work, a CI method is applied for feature selection to forecast the employment rate using a classical supervised model. The impact of the CI method on the performance of the forecasting model is described using statistical metrics. In addition, the effect of CI-based feature selection on the explainability of the model will be discussed.
What You’ll Learn: The close relationship between Causal Inference and Explainability
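The abstract does not specify which causal-inference method is used, so the sketch below uses Granger-causality testing (via statsmodels) as one generic, illustrative screen for candidate macroeconomic features on synthetic data; it is not the presenters’ method.

```python
# Illustrative only: Granger-causality screening as a simple causal-style feature filter.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 200
jobless_claims = rng.normal(size=n).cumsum()
employment = np.roll(jobless_claims, 3) * -0.5 + rng.normal(scale=0.5, size=n)  # lagged dependence
noise_series = rng.normal(size=n).cumsum()                                      # unrelated series

candidates = {"jobless_claims": jobless_claims, "noise_series": noise_series}
selected = []
for name, series in candidates.items():
    # Difference to reduce non-stationarity before testing.
    data = pd.DataFrame({"target": employment, "feature": series}).diff().dropna()
    res = grangercausalitytests(data[["target", "feature"]], maxlag=4, verbose=False)
    p = min(r[0]["ssr_ftest"][1] for r in res.values())   # best p-value across lags
    if p < 0.05:
        selected.append(name)

print("Features kept:", selected)
```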
CEO, Aporia
Liran Hason is the Co-Founder and CEO of Aporia, a full-stack ML observability platform that empowers businesses to trust their AI and use it responsibly. Prior to founding Aporia, Liran was an ML Architect at Microsoft-acquired Adallom, and later an investor at Vertex Ventures. Liran created Aporia after seeing first-hand the effects of AI without guardrails.
Talk: Responsible AI in Finance
Abstract: How can we leverage our AI to make business-critical decisions while ensuring it is used responsibly and ethically? To build trust in AI and ensure positive outcomes, it is time to move beyond defining Responsible AI and begin putting these principles into practice. In this session, Liran will discuss how financial institutions can reap the rewards of AI while remaining compliant and fair for their customers and society.
What You’ll Learn:
– A brief overview of Responsible AI & its challenges in Finance
– 4 Core Pillars of RAI
– How to Actually Put RAI into Practice with 4 Practical Quick Wins for MLOps teams working with Finance-related use cases
Evangelist, ClearML
Victor started out as a Machine Learning engineer and is currently spreading the word about the importance of MLOps to anyone who’s willing to listen.
Talk: The Importance of Experiment Tracking and Data Traceability
Abstract: Data scientists are usually not trained to go further than their analyses; however, in order to reach a more mature AI infrastructure that can support more models in production, additional steps have to be taken. Experiment management and data versioning are very important first steps toward the “MLOps” way of working. Done properly, they can serve as a foundation for more advanced systems, such as pipelines, remote workers, and advanced automation. When data scientists can include this way of working in their day-to-day, they have a very powerful tool for raising the success rate of their models and analyses.
What You’ll Learn: Learn the importance of experiment management and data versioning in any data analysis and AI workflow. Learn some of the advantages they bring to the table and how easy and painless it can be to add them to your current workflow. Learn how applying these principles can lead to more complex systems that fall under the umbrella of “MLOps”.
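For readers new to experiment tracking, here is a minimal, illustrative ClearML sketch: registering a run, connecting hyperparameters, and reporting a metric. Project, task, and parameter names are placeholders; see the ClearML documentation for your own setup.

```python
# Illustrative only: minimal experiment tracking with ClearML; names are placeholders.
from clearml import Task

# Registers this run so code version, environment, and outputs are captured.
task = Task.init(project_name="credit-risk", task_name="baseline-logreg")

# Connecting a dict makes hyperparameters editable and comparable across runs.
params = {"C": 1.0, "max_iter": 200, "train_split": 0.8}
task.connect(params)

# ... train a model here ...
auc = 0.87  # placeholder metric

# Scalars reported per iteration become comparable curves in the experiment UI.
task.get_logger().report_scalar(title="validation", series="auc", value=auc, iteration=0)
task.close()
```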
Director of Product Management, Munich Re
Michael Maunder started with Munich Re in 2009, using his Health Science background to assess the risk of Life & Health insurance applications. His role has evolved from traditional underwriting to being responsible for identifying high-impact partnerships & opportunities in the Fintech and Insurtech space, and collaborating with actuaries, doctors, data scientists and underwriters to bring new products & concepts to market. Outside of Munich Re, he is the President of the Underwriters Association of Toronto and a digital consultant for On The Risk magazine.
Talk: Privacy Law 101 for ML in Insurance
Co-Presenter: Margaret Pak
Abstract: How are privacy law, machine learning and the Life & Health insurance industry connected? We will provide real-world examples of using digital data, AI and ML in insurance and discuss some challenges and solutions to innovating in the L&H insurance space.
What You’ll Learn: Innovation in the Life & Health insurance industry; privacy law with respect to consent when implementing digital data and ML techniques to build new products and practices.
Partner, Walker Sorensen LLP
Margaret Pak is a partner at Walker Sorensen and practices corporate and commercial law. Margaret’s practice focuses on insurance related matters and she has advised Board members, executives and founders of domestic and international companies on insurance regulatory law, corporate law and specific transactions. Margaret also has particular experience in privacy law, and has advised extensively on digital privacy and data governance issues that involve PIPEDA, CASL and the Freedom of Information Act.
Talk: Privacy Law 101 for ML in Insurance
Co-Presenter: Michael Maunder
Abstract: How are privacy law, machine learning and the Life & Health insurance industry connected? We will provide real-world examples of using digital data, AI and ML in insurance and discuss some challenges and solutions to innovating in the L&H insurance space.
What You’ll Learn: Innovation in the Life & Health insurance industry; privacy law with respect to consent when implementing digital data and ML techniques to build new products and practices.
Global Head, Customer Sustainability / ESG, Rockwell Automation
Andrea leads intrapreneurial teams in scaling new technology solutions, entering new markets, and identifying high-value strategic investment opportunities.
Andrea is Head of Customer Sustainability at Rockwell Automation, the world’s largest industrial automation company, with responsibility for advancing innovation in sustainability for Rockwell’s customers, which include Fortune 100 companies in energy and manufacturing, representing millions of employees and hundreds of billions of dollars in annual revenues. Andrea is a passionate evangelist for the role AI/ML can play in dramatically improving the sustainability of the industrial sector.
Across her 18 years’ experience in leading technology innovation, in a career spanning Europe, Asia, and the Americas, Andrea has held multiple senior executive roles focused on applying advanced technologies to solve the sustainability challenge. She has served as co-founder and entrepreneur in smart grid consulting, global lead in the world’s largest engineering services firm in the energy sector, and senior director at a major utility.
In addition to her Fulbright doctorate in sustainable energy systems, Andrea holds a B.A. and M.Sci. in Aeronautical and Aerospace Engineering from Madrid Polytechnic, and a certification in Digital Business Strategy from the MIT Sloan School of Management.
Talk: The As-yet Untapped Multi-Trillion Dollar Opportunity: How AI/ML can help Secure the Future of the Human Race
Abstract: Recently, BlackRock’s CEO announced that his firm (which holds over USD $10 Trillion in assets under management, or AUM) now has a core goal of investing with environmental sustainability in mind. Goldman Sachs, with USD $2.5 Trillion AUM, has now made “sustainable finance” core to its business. And Canada’s former central bank governor, Mark Carney, who now leads “Transition Investing” at the USD $690 Billion AUM fund Brookfield Asset Management, recently led the formation of a coalition of the world’s largest banks and fund managers, representing an incredible $130 Trillion in AUM, to commit to addressing climate change.
The stakes are high: Unimaginable sums of money, and the future of humanity, are in play. Where does AI/ML come in? Sustainability is incredibly complex, involving billions of moving parts, decisions, industrial processes, the energy keeping the lights on and allowing us to call in remotely for this conference, and all our global supply chains for food, materials, and fuel. It starts at the edge, where exabytes of data are flowing from real time sensors and controls in factories and power plants, which aggregate up to the top-level decision makers in companies, which aggregate up to the massive funds that hold portfolios of those companies, and to government regulators and policy makers.
Where to begin? In this session, we’ll explore the top 3 needs and opportunities for ML/AI to catalyze change toward more sustainable companies, economies, and societies.
What You’ll Learn:
– The most significant market trends in the multi-Trillion dollar shift to the ESG economy
– The primary industrial sectors involved in the ESG transition
– 3 key opportunities for AI/ML to create and capture huge value in the ESG transition
Global Solutions Director, Financial Services & Insurance, Dataiku
John has extensive experience as a manager, consultant, developer, and designer at the intersection of financial services, technology, and analytics. As Dataiku’s Global Director for Financial Services and Insurance Solutions, he ensures the firm offers powerful solutions and deep empathy to all of its clients, allowing it to operate as a thought partner.
Workshop: GLM for Insurance Claims in an Agile Analytics Platform
Co-Presenter: David Behar
Abstract: As the insurance competitive landscape intensifies with the entry of new digital native players, the emergence of new risks, and growing consumer volatility, reinforcing efficiency and improving scalability in pricing strategies becomes increasingly vital for insurers.
Leveraging Generalized Linear Models (GLMs) for consumer claims modeling is a common market practice with a deep, rich, and proven track record. In this session, we will walk through how actuaries can benefit from training GLMs in an enterprise-grade, agile, and fully featured data science and analytics platform: leveraging a powerful visual environment, conducting extensive Exploratory Data Analysis, and pushing models to production through a simple API deployment interface.
What You’ll Learn:
Some topics covered in this session include:
Enabling real-time scoring by easily deploying finalized models for use in other internal or external systems
Running real-time experiments on model results using a powerful, interactive modeling application
Rapidly developing detailed analytic insights using powerful exploratory data tools
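To ground the GLM discussion, the sketch below fits a Poisson frequency GLM with an exposure offset using statsmodels on synthetic policy data. This is the standard actuarial formulation in a generic Python form, not Dataiku’s solution.

```python
# Illustrative only: Poisson claim-frequency GLM with an exposure offset on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5_000
policies = pd.DataFrame({
    "vehicle_age": rng.integers(0, 15, n),
    "region":      rng.choice(["urban", "rural"], n),
    "exposure":    rng.uniform(0.25, 1.0, n),       # policy-years in force
})
lam = policies.exposure * np.exp(-2.0 + 0.05 * policies.vehicle_age
                                 + 0.3 * (policies.region == "urban"))
policies["claims"] = rng.poisson(lam)

model = smf.glm(
    "claims ~ vehicle_age + region",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()
print(model.summary())   # coefficients exponentiate to multiplicative relativities
```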
Senior Data Scientist, Dataiku
David is a data scientist with a strong background in financial engineering and statistics. With experience in ad tech, banking, and financial markets, he developed state-of-the-art risk management tools, a real-time volatility calibration engine, and automated options trading algorithms. At Dataiku, he is in charge of creating Solutions for the Financial Services and Insurance sector, ranging from fraud to risk modeling and marketing, to help customers increase the value they get out of their data.
Workshop: GLM for Insurance Claims in an Agile Analytics Platform
Co-Presenter: John McCambridge
Abstract: As the insurance competitive landscape intensifies with the entry of new digital native players, the emergence of new risks, and growing consumer volatility, reinforcing efficiency and improving scalability in pricing strategies becomes increasingly vital for insurers.
Leveraging Generalized Linear Models (GLMs) for consumer claims modeling is a common market practice with a deep, rich, and proven track record. In this session, we will walk through how actuaries can benefit from training GLMs in an enterprise-grade, agile, and fully featured data science and analytics platform: leveraging a powerful visual environment, conducting extensive Exploratory Data Analysis, and pushing models to production through a simple API deployment interface.
What You’ll Learn:
Some topics covered in this session include:
Enabling real-time scoring by easily deploying finalized models for use in other internal or external systems
Running real-time experiments on model results using a powerful, interactive modeling application
Rapidly developing detailed analytic insights using powerful exploratory data tools
Associate Applied Machine Learning Specialist, Vector Institute
John Jewell is an Applied Machine Learning Specialist at Vector Institute and Graduate Researcher at Western University. In both research and industry settings, John has worked in areas of machine learning including Computer Vision, Time Series Forecasting and Privacy Enhancing Technologies. At Vector Institute, John is working on helping organizations adopt and productionalize recent advances in machine learning for use cases in health, finance, manufacturing and retail.
Workshop: Deep Learning for Time Series Forecasting with Applications in Finance
Co-Presenter: Yan Zhang
Abstract: With the advent of big data, an increasing amount of information is being captured by organizations. Often this data forms a time series – a sequence of data points indexed by time. Forecasting involves predicting future values given past observations of a time series. Common applications of time series forecasting include predicting future sales, patient outcomes, asset prices and resourcing needs. Recent deep learning-based approaches to time series forecasting have obtained state of the art performance on a variety of benchmarks. This workshop provides an overview of these methods along with applications in finance.
What You’ll Learn:
– Introduction to time series analysis
– Methods for time series forecasting (Prophet, NBEATS, DeepAR, Autoformer)
– Cross validation strategies and evaluation metrics
– Reference implementations of finance use cases
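As a minimal, generic illustration of deep-learning forecasting (not one of the workshop’s reference implementations of Prophet, NBEATS, DeepAR, or Autoformer), the sketch below trains a small LSTM in PyTorch to predict the next step of a synthetic series.

```python
# Illustrative only: a minimal LSTM one-step-ahead forecaster on synthetic data.
import torch
import torch.nn as nn

# Synthetic series: a noisy sine wave standing in for, e.g., a daily price series.
series = torch.sin(torch.linspace(0, 20, 500)) + 0.1 * torch.randn(500)

window = 24
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, window, hidden)
        return self.head(out[:, -1])   # predict the next step from the last state

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()

print("final MSE:", float(loss))
```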
Lead Data Scientist, BMO
Yan Zhang has 10+ years of experience in data science and quantitative analysis. She leads several BMO initiatives in price optimization, volume forecasting, attrition analysis, and text mining.
Workshop: Deep Learning for Time Series Forecasting with Applications in Finance
Co-Presenter: John Jewell
Abstract: With the advent of big data, an increasing amount of information is being captured by organizations. Often this data forms a time series – a sequence of data points indexed by time. Forecasting involves predicting future values given past observations of a time series. Common applications of time series forecasting include predicting future sales, patient outcomes, asset prices and resourcing needs. Recent deep learning-based approaches to time series forecasting have obtained state of the art performance on a variety of benchmarks. This workshop provides an overview of these methods along with applications in finance.
What You’ll Learn:
– Introduction to time series analysis
– Methods for time series forecasting (Prophet, NBEATS, DeepAR, Autoformer)
– Cross validation strategies and evaluation metrics
– Reference implementations of finance use cases
Insurance Technology Analyst and Consultant, Celent
Max is an analyst with Celent’s insurance practice and is based in Singapore. Max’s research concentrates on APAC insurance technology markets, with a specific focus on data science, digital innovation, process automation, and emerging technology in core systems. Through consulting engagements and analyst access sessions, he has provided perspective and recommendations on business and technology trends, both globally and within APAC. Max is a speaker and panel moderator at leading insurance industry events such as InsureTech Connect Asia and Finovate Asia.
Workshop: ML Trends, Opportunities, and Challenges in Insurance
Abstract: In this session, we will explore insurance industry trends and the industry’s appetite for machine learning adoption. We will look at opportunities and challenges through case studies of insurers and ML use cases from technology providers across the globe. This will be complemented by conceptual workflows and strategies for how the insurance industry can adopt ML into the value chain and form the next-generation insurer, driven by data, analytics, and ML. We will also look into the impact of utilizing data and AI in insurance, and how emerging technology can be brought into legacy insurance workflows.
What You’ll Learn: Viewers will gain insights into the insurance industry value chain, its structure, and the systems commonly found in an insurance company. They will understand how data, analytics, and ML play a role in the industry’s continuous innovation and modernization efforts.
Business Leaders: C-Level Executives, Project Managers, and Product Owners will get to explore best practices, methodologies, principles, and practices for achieving ROI.
Engineers, Researchers, Data Practitioners: Will get a better understanding of the challenges, solutions, and ideas being offered via breakouts & workshops on Natural Language Processing, Neural Nets, Reinforcement Learning, Generative Adversarial Networks (GANs), Evolution Strategies, AutoML, and more.
Job Seekers: Will have the opportunity to network virtually and meet 30+ top AI companies.
What is an Ignite Talk?
Ignite is an innovative and fast-paced style used to deliver a concise presentation.
During an Ignite Talk, presenters discuss their research using 20 image-centric slides which automatically advance every 15 seconds.
The result is a fun and engaging five-minute presentation.
You can see all our speakers and the full agenda here.