Learn from the Testing Experts

4th June, 2025

AUSTIN

>> Home

>> Register

>> Programme Schedule

Keynote Speaker

Lakshmi Vidya Peri

Board of Directors
TMMi America

Talk: PRODUCT QUALITY IN THE AGE OF AI

This talk will cover how product quality was measured traditionally, and how we are evolving with AI toward operational excellence, higher product quality, and earlier time to market. It will discuss AI-driven systems, the importance of simulations, data governance, AI governance, the metrics we might measure in future, Explainable AI, and how we can transition into the future of agentic AI usage.

Featured Speakers

Artem Bondar

Test Engineer, Educator, Content Creator
Bondar Academy

API TESTING WITH PLAYWRIGHT: FAST AND SCALABLE SCRIPTING APPROACH

Playwright is known as the No. 1 trending framework for UI automation in the industry, beating Selenium and Cypress in 2024 in speed of adoption. What many people don't know is that this framework can also be used for API testing, and it can be a powerful all-in-one tool when the right architectural approach is followed in framework design. In this presentation, I will show how API testing in Playwright can be organized to make it scalable.

Takeaways from this talk

People will learn how to approach the test design using the Playwright framework for API testing for maximum scripting efficiency.
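
The talk's layered approach can be roughed out in any language. Below is a minimal Python sketch of the pattern (not Playwright's actual API; all names are illustrative): tests call a thin domain client, the client calls an injectable transport, so the HTTP layer — Playwright's request context in a real suite — can be swapped without touching test code.

```python
# Hypothetical sketch of a layered API-test client. In a real Playwright
# suite the transport would wrap APIRequestContext; here a stub stands in
# so the structure runs anywhere.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Response:
    status: int
    body: dict

# A transport is any callable (method, url, payload) -> Response.
Transport = Callable[[str, str, dict], Response]

class UsersClient:
    """Domain-level client: tests call create_user(), never raw HTTP."""
    def __init__(self, transport: Transport, base_url: str):
        self.transport = transport
        self.base_url = base_url

    def create_user(self, name: str) -> Response:
        return self.transport("POST", f"{self.base_url}/users", {"name": name})

def stub_transport(method: str, url: str, payload: dict) -> Response:
    # Stand-in for a real HTTP call; echoes the payload back.
    return Response(status=201, body={"id": 1, **payload})

client = UsersClient(stub_transport, "https://api.example.test")
resp = client.create_user("Ada")
assert resp.status == 201 and resp.body["name"] == "Ada"
```

Because tests depend only on the domain client, adding endpoints scales linearly and a transport change touches one layer.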

Himanshu Pathak

QA Engineering Lead
Meta

AUTOMATED TESTING USING ARTIFICIAL INTELLIGENCE (AI) AND MACHINE LEARNING (ML)

Automated testing is a cornerstone of modern software development, ensuring quality and reliability while accelerating delivery timelines. With the advent of Artificial Intelligence (AI) and Machine Learning (ML), automated testing is undergoing a transformative shift. This paper explores how AI and ML are reshaping automated testing, their benefits, key applications, challenges, and future prospects.

Takeaways from this talk

Automated testing using AI and ML boosts efficiency, accuracy, and coverage while reducing costs. Key applications include software, mobile, web, and IoT testing. However, challenges like data quality, bias, and security must be addressed. Future directions include explainable AI, edge AI, autonomous testing, and human-AI collaboration. Effective implementation requires specialized talent and skills, and a strategic approach to integration and deployment.

Vignesh Govindarajan Ravichandran

Assistant Vice President – Quality Engineering
Wellington Management LLP

MASTER DATA PIPELINE TESTING FOR AI AND NLP

Data pipelines are ubiquitous in handling unstructured data for Generative AI and NLP-based projects. This talk will help the audience master testing techniques for data pipelines, challenges encountered in a real fintech project, tools available in the market, and DevOps adoption to continuously improve the process.

Takeaways from this talk

  • Introduction to Data Pipelines
  • Test strategy for continuous testing of pipelines and best practices – pre-deployment, monitoring, and feedback
  • Challenges with testing AI and NLP – use of unstructured data as input and inherent challenges like test data bias, false positives, model performance evaluation, language complexity, and data quality issues
  • Tools and frameworks – General framework for automation and tools used for CI/CD, application monitoring and error handling
  • Continuous monitoring and improvement – How to monitor production, learn and improve the process

Raghavender Reddy Vanam

Senior QA Automation Engineer
Reinsurance Group of America

PIONEERING INTELLIGENT TEST AUTOMATION FOR MODERN APPLICATIONS

The presentation addresses the challenge of enabling frequent deployments while balancing speed and quality. Solutions include integrating test automation with a process-focused approach, implementing self-healing automation tests, building components with minimal maintenance and maximum reusability, and utilizing risk-based testing.

Takeaways from this talk

Test automation that includes data-driven generative AI and keyword-driven testing, dynamic element handling, self-healing scripts, integration with reporting tools, and AI-driven test case generation.
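
The self-healing idea can be sketched without any particular tool: a locator carries ranked fallback selectors, and when the primary one stops matching, the test heals itself by trying the alternates and logging the substitution for later maintenance. The "DOM" below is a plain dict so the sketch runs without a browser; selector names are invented.

```python
# Rough sketch of a self-healing locator. Real implementations attach this
# logic to a driver (Selenium, Playwright, etc.); here a dict stands in
# for the page so the idea runs standalone.
def find_with_healing(dom: dict, selectors: list[str]) -> tuple[str, list[str]]:
    """Return (matched element, healing log). Raises if nothing matches."""
    log = []
    for sel in selectors:
        if sel in dom:
            return dom[sel], log
        log.append(f"selector failed, trying next: {sel}")
    raise LookupError("all selectors failed: " + ", ".join(selectors))

# The app renamed #submit-btn to #checkout-btn between releases; the test
# still passes and the log tells maintainers what to update.
dom = {"#checkout-btn": "<button>Pay</button>"}
element, log = find_with_healing(dom, ["#submit-btn", "#checkout-btn"])
assert element == "<button>Pay</button>"
assert log == ["selector failed, trying next: #submit-btn"]
```

The healing log is what keeps this from hiding real regressions: surfaced in reporting, each healed selector becomes a maintenance task rather than a silent patch.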

Kailash Thiyagarajan

Senior Machine Learning Engineer
Apple

THE ROLE OF LARGE LANGUAGE MODELS IN SOFTWARE TESTING

Large Language Models (LLMs) have emerged as powerful tools in software testing, offering automation capabilities that can enhance test generation, debugging, program repair, and system validation. Traditional software testing is resource-intensive, requiring significant manual effort. However, LLMs provide a scalable, AI-driven approach to improve efficiency and effectiveness.

This talk will explore how LLMs can be leveraged in different phases of software testing, including:

  • Unit test case generation for automatic creation of meaningful tests.
  • Test oracle design to validate expected behavior.
  • Bug detection and debugging for automated fault localization and fixes.
  • System test input generation for broader test coverage.
  • Integration with traditional CI/CD workflows to streamline testing pipelines.

While LLMs offer significant benefits, challenges such as test coverage limitations, false positives, and model interpretability remain. We will discuss potential solutions and opportunities for improving LLM-based testing methodologies to make software quality assurance more efficient and reliable.
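
One concrete shape of the unit-test-generation phase, with the false-positive problem addressed by a screening step, might look like the sketch below. The model call is stubbed so the flow runs offline; a real pipeline would call an LLM API at that point, and all names here are illustrative.

```python
# Hedged sketch of LLM-driven unit-test generation: build a prompt around the
# function under test, ask a model for assertions, and keep only generated
# tests that actually pass -- a crude screen against hallucinated assertions.
def add(a, b):
    return a + b

FUNC_SOURCE = "def add(a, b):\n    return a + b"

def build_prompt(source: str) -> str:
    return "Write one assert per line testing this Python function:\n" + source

def stub_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; the middle assertion is wrong on
    # purpose, to show the screening step rejecting it.
    return "assert add(2, 3) == 5\nassert add(2, 2) == 5\nassert add(-1, 1) == 0"

def generate_and_screen(func) -> list[str]:
    kept = []
    for line in stub_llm(build_prompt(FUNC_SOURCE)).splitlines():
        try:
            exec(line, {"add": func})   # run the candidate assertion
            kept.append(line)
        except AssertionError:
            pass                        # discard incorrect generated tests
    return kept

tests = generate_and_screen(add)
assert tests == ["assert add(2, 3) == 5", "assert add(-1, 1) == 0"]
```

Screening against execution only filters assertions that fail on a correct implementation; the harder oracle problem — generated tests that pass but check the wrong behavior — is exactly the open challenge the talk discusses.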

Takeaways from this talk

By the end of this session, attendees will:

  • Understand the role of LLMs in software testing – from generating test cases to automating debugging.
  • Learn about key LLM-based testing techniques, including zero-shot and few-shot learning for different testing phases.
  • Explore real-world challenges and limitations, such as coverage issues, oracle accuracy, and false positives in bug detection.
  • Discover best practices for integrating LLMs into existing software testing pipelines and CI/CD workflows.
  • Gain insights into future research directions, including improving prompt engineering, leveraging multimodal LLMs, and combining AI with traditional testing techniques.

Aaron Evans

QA Architect
One Shore

TESTING STRATEGY & TACTICS

An overview of how to tackle the challenges of software testing and quality assurance.

  • Identifying the challenges to testing that you face in your organization
  • Deciding how and where to deploy your valuable and limited resources
  • Choosing the right technologies for your specific problem and team
  • Knowing which problems to fight and when to

There is no one-size-fits-all solution, but I hope to give you a framework for building a QA strategy that crafts the testing solution best suited to your own needs.

Takeaways from this talk

You should be able to create a high-level strategy for testing that fits your organization’s needs and develop a framework for choosing the right tactics (including tools, processes, and coverage) to meet your team’s challenges and capabilities.

Including such questions as:

  • When and what to automate
  • How to choose the right tools & technologies for your team
  • Discovering your known testing gaps and challenges
  • Identifying risks and planning for other contingencies
  • Leveraging the unique & diverse skills of your people to get the most impact
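
One way to make "identifying risks" and "deploying limited resources" concrete is a simple risk score: failure likelihood times business impact, with an automation budget spent on the highest-risk areas first. The sketch below uses invented numbers purely for illustration.

```python
# Illustrative risk-based test selection: rank areas by likelihood x impact
# and spend a limited automation budget on the top of the list. All figures
# are hypothetical.
areas = [
    {"name": "payments",  "likelihood": 0.4, "impact": 9},
    {"name": "reporting", "likelihood": 0.2, "impact": 3},
    {"name": "login",     "likelihood": 0.1, "impact": 8},
]

def prioritize(areas: list[dict], budget: int) -> list[str]:
    """Return the names of the top-`budget` areas by risk score."""
    ranked = sorted(areas, key=lambda a: a["likelihood"] * a["impact"],
                    reverse=True)
    return [a["name"] for a in ranked[:budget]]

# payments scores 3.6, login 0.8, reporting 0.6.
assert prioritize(areas, 2) == ["payments", "login"]
```

The numbers matter less than the conversation they force: likelihood comes from defect history, impact from the business, and the ranking makes the trade-off explicit rather than implicit.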

Panel Discussion Speakers

Swathy V

Vice President – Engineering
Freedom Mortgage

As VP of Engineering at Freedom Mortgage, I drive engineering transformation, delivering innovative, customer-centric solutions. With 20+ years in fintech, mortgage, and IoT, I excel in DevSecOps, AI, cloud applications, and agile delivery. A servant leader, I inspire teams through vision, collaboration, and a culture of innovation and excellence.

LeAnn Wang

IT Director, Quality and Testing
Emerson

Progressive IT experience across industries including Oil & Gas, Sales, Auto, Health Insurance, Software, and more. SAFe 5.0 certified Program Consultant, Agile expert, Scrum Master, Product Owner, and Manager. Skilled in Jira, Azure DevOps, .NET, SQL, and SharePoint. Strong communicator with exceptional leadership and client-centric expertise.

Vladimir Baibus

Director of Quality Assurance
Homecare Homebase

Vladimir Baibus is a Quality Engineering Leader with expertise in QA, automation frameworks, and shift-left testing. Skilled in managing global teams, strategic planning, and driving product excellence across domains. Proficient in tools like Selenium, Appium, and Playwright, with experience in Agile, BDD, and TDD methodologies.

Preetham Sunilkumar

Vice President, Software Development
LPL Financial

Technology leader with expertise in quality engineering, software development, and architecture. Skilled in leading large-scale transformations, AI/ML-driven quality engineering, and agile practices. Proven track record in enterprise automation, test automation, and architecture governance, driving innovation, value realization, and team empowerment through data-driven decisions and best practices.
