Description

Discover how privacy-preserving techniques like analytical embedding enable modeling with sensitive data you can't see, offering powerful insights without compromising data security or compliance.

What if you could model with sensitive data that you can't see? This session introduces analytical embedding spaces—a technique that transforms raw data into compressed synthetic trends, allowing insights to be drawn from the data while keeping modelers segregated from the sensitive source. By modeling in this transformed space, we capture the signal without compromising the source!

We'll walk through a use case where healthcare data is off-limits but consumer data is available. By embedding both datasets and modeling with synthetic trends, we demonstrate that "sight unseen" modeling can outperform traditional methods, improving disease prediction without ever accessing the raw health data. This approach draws on federated learning, where insights are derived without direct access to sensitive information.

Designed for secure AI, this approach respects data boundaries and protects against reconstruction attacks. Embeddings act as a privacy-preserving interface, ensuring that even if adversaries gain access to the model, they cannot reverse-engineer the original inputs. This framework is particularly powerful in regulated industries, where data utility and compliance need to coexist. If you've ever been told, "You can't use that data," this session will show you how you can, just not the way you thought you could.
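The abstract doesn't spell out the embedding technique itself, but the general pattern (a data holder releases only learned embeddings, and the modeler trains on those, never touching the raw features) can be sketched as follows. This is a loose illustration on synthetic data, using PCA as a stand-in encoder; the encoder choice, dimensions, and dataset here are all assumptions, not the presenters' method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for sensitive data: a hidden low-dimensional
# structure drives both the raw features and the outcome.
rng = np.random.default_rng(0)
latent = rng.normal(size=(600, 3))                      # hidden structure
W = rng.normal(size=(3, 20))
X_raw = latent @ W + 0.1 * rng.normal(size=(600, 20))   # "sensitive" raw features
y = (latent[:, 0] > 0).astype(int)                      # toy outcome label

# Data-holder side: fit an encoder on the raw data and release
# only the compressed embeddings (PCA is an illustrative stand-in).
encoder = PCA(n_components=5).fit(X_raw)
Z = encoder.transform(X_raw)

# Modeler side: train and evaluate on embeddings alone, "sight unseen".
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, random_state=0)
clf = LogisticRegression().fit(Z_tr, y_tr)
print("embedding-space accuracy:", accuracy_score(y_te, clf.predict(Z_te)))
```

Because the embedding preserves the predictive structure while discarding the raw feature values, the modeler's workflow never needs the original data, which is the separation of concerns the session describes.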

Details

October 3, 2025

10:45 am - 11:30 am

Grand Ballroom


Level: Intermediate


Presenters

Devyani Biswal
Sr AI Strategy Consultant
IQVIA

Devyani Biswal is a Senior AI Strategy Consultant for IQVIA Applied AI Science, focused on advancing trustworthy AI in healthcare. Her work explores methods for modeling on sensitive data using techniques such as federated learning and privacy-enhancing technologies. With a background in statistics and differential privacy, she bridges the gap between theoretical research and applied machine learning.

Devyani leads innovation in AI/ML modeling methods to generate insights from federated data environments and drives the research agenda in AI science and trustworthy analytics. She regularly publishes in peer-reviewed venues, contributes to standards development, and has presented her research to international data protection authorities.

Beyond her technical contributions, Devyani authors a deep learning newsletter and co-created the Data & AI Strategy Canvas to help organizations align technical innovation with real-world impact. She is passionate about scaling trustworthy AI in regulated industries and enabling better insights without compromising individual privacy.