Surgan Jandial

I am a research associate at Adobe MDSR Labs, Noida. Prior to this, I earned my Bachelor's degree in Computer Science from the Indian Institute of Technology (IIT) Hyderabad in 2021, where I was fortunate to work with Prof. Vineeth N Balasubramanian.

My research interests are in making AI systems/pipelines:

  • Resource-Efficient for widespread access: efficient model training, efficient model selection, efficient model size, data efficiency via synthetic data;

  • Safe for long-term usage or deployment: model fairness, model security.

My experiences have convinced me that in our quest for efficiency, we often overlook safety. Thus, I strive to achieve both simultaneously, and I have begun this exploration by focusing on models that are both compact and fair.

When I am not working on any of the above, I am likely researching interesting Computer Vision, LLM, and VLM applications.

Email  /  List of Patents  /  Preprints  /  Some personal interests

Google Scholar  /  Linkedin  /  Twitter
Conferences
(* denotes equal contribution)
All Should Be Equal in the Eyes of Language Models: Counterfactually Aware Fair Text Generation
Pragyan Banerjee*,   Abhinav Java*,   Surgan Jandial*,   Simra Shahid*,   Shaz Furniturewala,   Balaji Krishnamurthy,   Sumit Bhatia

AAAI, 2024

Model Fairness / Large Language Models
Retro-KD: Past States for Regularizing Targets in Teacher-Student Learning
Surgan Jandial*,   Yash Khasbage*,   Arghya Pal,   Balaji Krishnamurthy,
Vineeth N Balasubramanian

CODS-COMAD, 2023 (Oral)

Knowledge Distillation / Model Compression

One-Shot Doc Snippet Detection: Powering Search in Document Beyond Text
Abhinav Java*,   Shripad Deshmukh*,   Milan Aggarwal,   Surgan Jandial,   Mausoom Sarkar,   Balaji Krishnamurthy

WACV, 2023

Applications / Computer Vision

Distilling the Undistillable: Learning from a Nasty Teacher
Surgan Jandial,   Yash Khasbage,   Arghya Pal,   Vineeth N Balasubramanian,  
Balaji Krishnamurthy

ECCV, 2022

Knowledge Distillation / Model Stealing / Model Security

SAC: Semantic Attention Composition for Text-Conditioned Image Retrieval
Surgan Jandial*,   Pinkesh Badjatiya*,   Pranit Chawla*,   Ayush Chopra*,   Mausoom Sarkar,   Balaji Krishnamurthy

WACV, 2022

Applications / Computer Vision / Vision Language Models

Retrospective Loss: Looking Back to Improve Training of Deep Neural Networks
Surgan Jandial*,   Ayush Chopra*,   Mausoom Sarkar,   Piyush Gupta,   Balaji Krishnamurthy,   Vineeth N Balasubramanian

KDD, 2020  

Efficient Model Training
SieveNet: A Unified Framework for Robust Image-Based Virtual Try-On
Surgan Jandial*,   Ayush Chopra*,   Kumar Ayush*,   Mayur Hemani,   Balaji Krishnamurthy,   Abhijeet Halwai

WACV, 2020
Also presented at Workshop on AI for Content Creation, CVPR 2020

Media Coverage:  Venturebeat / Beebom / WWD

Applications / Computer Vision
Workshops
(* denotes equal contribution)
Towards Fair Knowledge Distillation using Student Feedback
Abhinav Java*,   Surgan Jandial*,   Chirag Agarwal

Workshop on Efficient Systems for Foundation Models, ICML 2023
Under review at a Top-Tier ML Conference

Model Fairness / Knowledge Distillation / Vision Language Models
Gatha: Relational Loss for enhancing text-based style transfer
Surgan Jandial,   Shripad Deshmukh,   Abhinav Java,   Simra Shahid,   Balaji Krishnamurthy

6th Workshop on Computer Vision for Fashion, Art, and Design, CVPR 2023 (Oral)

Synthetic Data Generation / Vision Language Models
Self-supervised Autoencoder for Correlation-Preserving in Tabular GANs
Siddarth Ramesh*,   Surgan Jandial*,   Gauri Gupta*,   Piyush Gupta,   Balaji Krishnamurthy

Data-centric Machine Learning Research (DMLR) Workshop, ICML 2023

Synthetic Data Generation / Tabular Data
Contextual Alchemy: A Framework for Enhanced Readability through Cross-Domain Entity Alignment
Simra Shahid,   Nikitha Srikanth,   Surgan Jandial,   Balaji Krishnamurthy

Workshop on Machine Learning for Creativity and Design, NeurIPS 2023

Applications / Large Language Models
On Conditioning the Input Noise for Controlled Image Generation with Diffusion Models
Vedant Singh*,   Surgan Jandial*,   Ayush Chopra,   Siddarth Ramesh,   Balaji Krishnamurthy,   Vineeth N Balasubramanian

Workshop on AI for Content Creation, CVPR 2022

Synthetic Data Generation
Leveraging Style and Content features for Text Conditioned Image Retrieval
Pranit Chawla,   Surgan Jandial,   Pinkesh Badjatiya,   Ayush Chopra,   Mausoom Sarkar,   Balaji Krishnamurthy

Workshop on Computer Vision for Fashion, Art and Design, CVPR 2022

Applications / Computer Vision / Vision Language Models
AdvGAN++: Harnessing latent layers for adversary generation
Puneet Mangla*,   Surgan Jandial*,   Sakshi Varshney*,   Vineeth N Balasubramanian

Neural Architect Workshop, ICCV 2019

Robustness / Computer Vision
Robust Cloth Warping via Multi-Scale Patch Adversarial Loss for Virtual Try-On Framework
Kumar Ayush*,   Surgan Jandial*,   Ayush Chopra*,   Mayur Hemani,
Balaji Krishnamurthy

Workshop on Human Behaviour Understanding, ICCV 2019

Applications / Computer Vision
Powering Virtual Try-On via Auxiliary Human Segmentation Learning
Kumar Ayush*,   Surgan Jandial*,   Ayush Chopra*,   Balaji Krishnamurthy

Workshop on Computer Vision for Fashion, Art and Design, ICCV 2019

Applications / Computer Vision
Preprints
(* denotes equal contribution)
Leveraging style-based relations for text-conditioned style transfer
Surgan Jandial*, Silky Singh*, Simra Shahid*, Abhinav Java, Shripad Deshmukh

Under review at a Top-Tier ML Conference

Synthetic Data Generation / Style Transfer / Vision Language Models
Patents
  1. Issued Cloth Warping Using Multi-Scale Patch Adversarial Loss
                             Application granted on 06/08/2021. US Patent number 11080817
  2. Issued Accurately Generating Virtual Try-On Images Utilizing a Unified Neural Network Framework
                             Application granted on 08/03/2021. US Patent number 11030782
  3. Issued Text-Conditioned Image Search with Transformation, Aggregation, and Composition of Visio-Linguistic Features
                             Application granted on 08/08/2023. US Patent number 11720651
  4. Issued Model Training with Retrospective Loss
                             Application granted on 10/24/2023. US Patent number 11797823
  5. Filed Text-Conditioned Image Search Based on Dual-Disentangled Feature Composition
                         Filed at the US Patent Office on 1/28/2021
  6. Filed Regularizing Targets in Model Distillation Utilizing Past State Knowledge of Students
                         Filed at the US Patent Office on 8/9/2022
  7. Filed Diffusion Model Image Generation
                         Filed at the US Patent Office on 8/31/2022
  8. Filed Systems and Methods for Data Augmentation
                         Filed at the US Patent Office on 10/11/2022
  9. Filed Systems and Methods for Machine Learning Transferability
                         Filed at the US Patent Office on 3/3/2023
  10. Filed Form Structure Similarity Detection
                         Filed at the US Patent Office on 3/27/2023
  11. Filed Personalized Form Error Correction Propagation
                         Filed at the US Patent Office on 4/27/2023
  12. Filed Knowledge Distillation Using Contextual Semantic Noise
                         Filed at the US Patent Office on 2/22/2023
  13. Filed Systems and Methods for Generating Synthetic Tabular Data for Machine Learning and Other Applications
                         Filed at the US Patent Office on 4/3/2023
  14. Filed One-Shot Document Snippet Search
                         Filed at the US Patent Office on 6/30/2023
  15. Filed Generating Alternative Examples for Content
                         Filed at the US Patent Office on 11/3/2023
  16. In-Filing A Novel Method and Apparatus for Text-Guided Style Transfer
                               Internally approved at Adobe Inc. in June 2023 for filing
  17. In-Filing A Novel Framework for Bias Aware Distillation using Student Feedback
                               Internally approved at Adobe Inc. in December 2023 for filing
  18. Submitted A Novel Framework for Counterfactually Aware Fair Text Generation
                                   Awaiting internal approval at Adobe Inc.
  19. Submitted Mask-CLIPstyler: Localized text-based style transfer in images
                                   Awaiting internal approval at Adobe Inc.
