Mayank Singh

I am a second-year graduate student at The Robotics Institute (MSR), Carnegie Mellon University. I am advised by Prof. Katerina Fragkiadaki and collaborate with Prof. Shubham Tulsiani. My research interests are 3D computer vision, adversarial machine learning, generative models, and few-shot learning.

Before this, I worked at the Media and Data Science Research Labs, Adobe India, where I had the opportunity to collaborate with Prof. Vineeth N Balasubramanian. I graduated from the Indian Institute of Technology Kharagpur with a degree in Mathematics and Computing.

[Email]    [LinkedIn]    [Resume]    [Google Scholar]


Selected Publications

Analogy-Forming Transformers for Few-Shot 3D Parsing
Nikolaos Gkanatsios*, Mayank Singh*, Zhaoyuan Fang, Shubham Tulsiani, Katerina Fragkiadaki

We present Analogical Networks, a model that casts fine-grained 3D visual parsing as analogy-forming inference: instead of mapping input scenes to part labels, which is hard to adapt in a few-shot manner to novel inputs, our model retrieves related scenes from memory and their corresponding part structures, and predicts analogous part structures in the input object 3D point cloud, via an end-to-end learnable modulation mechanism.

Work accepted at ICLR 2023. [Paper] [Webpage] [Code]


Attributional Robustness Training using Input-Gradient Spatial Alignment
Mayank Singh*, Nupur Kumari*, Puneet Mangla, Abhishek Sinha, Vineeth N Balasubramanian, Balaji Krishnamurthy

We propose ART, a robust attribution training methodology that maximizes the alignment between an input and its attribution map. ART induces immunity to adversarial and common perturbations on standard vision datasets and achieves state-of-the-art performance in weakly supervised object localization on the CUB dataset.

Work accepted at ECCV 2020. [Paper] [Webpage] [Code]


Harnessing the Vulnerability of Latent Layers in Adversarially Trained Models
Nupur Kumari*, Mayank Singh*, Abhishek Sinha*, Harshitha Machiraju, Balaji Krishnamurthy, Vineeth N Balasubramanian

We analyze adversarially trained models for vulnerability to adversarial perturbations in their latent layers. The resulting algorithm achieves state-of-the-art adversarial accuracy against strong adversarial attacks.

Work accepted at IJCAI 2019. [Paper] [Code]


Data InStance Prior (DISP) in Generative Adversarial Networks
Puneet Mangla*, Nupur Kumari*, Mayank Singh*, Vineeth N Balasubramanian, Balaji Krishnamurthy

We propose a novel transfer learning technique for GANs in limited-data domains that leverages an informative data prior derived from self-supervised/supervised networks pretrained on a diverse source domain.

Work accepted at WACV 2022. [Paper]


LT-GAN: Self-Supervised GAN with Latent Transformation Detection
Parth Patel*, Nupur Kumari*, Mayank Singh*, Balaji Krishnamurthy

We propose a self-supervised approach (LT-GAN) that improves the generation quality and diversity of images by estimating the GAN-induced transformation, i.e., the transformation induced in the generated images by perturbing the latent space of the generator.

Work accepted at WACV 2021. [Paper]


Charting the Right Manifold: Manifold Mixup for Few-shot Learning
Puneet Mangla*, Mayank Singh*, Abhishek Sinha*, Nupur Kumari*, Vineeth N Balasubramanian, Balaji Krishnamurthy

We use self-supervision techniques such as rotation prediction and exemplar training, followed by manifold mixup, for few-shot classification tasks. The proposed approach beats the previous state-of-the-art accuracy on the mini-ImageNet, CUB, and CIFAR-FS datasets by 3-8%.

Work accepted at WACV 2020. [Paper] [Code]


On the Benefits of Models with Perceptually-Aligned Gradients
Gunjan Aggarwal*, Abhishek Sinha*, Nupur Kumari*, Mayank Singh*

In this paper, we leverage models with interpretable, perceptually-aligned features and show that adversarial training with a low max-perturbation bound can improve model performance on zero-shot and weakly supervised localization tasks.

Work accepted at the ICLR workshop Towards Trustworthy ML, 2020. [Paper]

* denotes equal contribution


Inspired by this website