Avik Pal
avikpal at cse dot iitk dot ac dot in

Currently I am working as a Research Intern at the University of Toronto and the Vector Institute, where I am advised by Prof. Sanja Fidler. I am a junior undergraduate at the Indian Institute of Technology Kanpur, majoring in Computer Science and Engineering. I am broadly interested in 3D Computer Vision and Differentiable Programming. Apart from these, I am deeply invested in contributing to the open-source Machine Learning ecosystem.

I am fortunate to have been advised and mentored by fantastic researchers and engineers during the course of my undergraduate studies. Currently I am working under the supervision of Prof. Vinay Namboodiri at the DelTA Lab, IIT Kanpur, where we are developing an algorithm for sketch-to-mesh synthesis.

In the past, I have worked on the development of a Differentiable Ray Tracer that uses Zygote, a source-to-source automatic differentiation framework. This ray tracer also acts as the backend for the differentiable Duckietown self-driving car simulator. I have also worked on Generative Adversarial Networks and other classical Computer Vision problems. In the summer of 2018, I successfully completed my Google Summer of Code project under JuliaLang (NumFOCUS), where I worked on the GPU backend of Flux and attained speedups of up to 17x in the convolution operations. Alongside that, I worked with the New York Office of IIT Kanpur and developed a recommender system for them, designed to tackle the cold-start problem.

BTech, Computer Science
IIT Kanpur
July 2017 - Present

GSoC Participant
NumFOCUS
Summer 2018

Visiting UG Researcher
MIT
Summer 2019

GSoC Participant
JuliaLang
Summer 2019

Research Intern
University of Toronto
Jan 2020 - Present

News
Research

I'm interested in computer vision, machine learning, image processing, inverse graphics, autonomous driving, and differentiable programming. Much of my research is about modelling existing machine learning problems as differentiable programming problems and making use of the implicit knowledge stored in differentiable systems to solve them. My publications are listed below.

Publications

TorchGAN: A Flexible Framework for GAN Training and Evaluation
Avik Pal*, Aniket Das*
Under Review at JMLR Machine Learning Open Source Software (MLOSS), 2019

This paper introduces a framework for training Generative Adversarial Networks. It provides zero-cost abstractions over PyTorch and hence allows rapid prototyping and research.

[preprint] [project page] [bibtex] [code]

RayTracer.jl: A Differentiable Renderer that supports Parameter Optimization for Scene Reconstruction
Avik Pal
Proceedings of the JuliaCon Conferences, 2019 (To Appear)

Photo-realistic renderers contain a vast amount of implicit knowledge. Differentiation allows such renderers to make use of gradients to learn the inverse mapping from an image to its parameter space.

[preprint] [bibtex] [code]

Fashionable Modelling with Flux
Mike Innes, Elliot Saba, Keno Fischer, Dhairya Gandhi, Marco Concetto Rudilosso, Neethu Mariya Joy, Tejan Karmali, Avik Pal, Viral Shah
Under Review at the Proceedings of the JuliaCon Conferences, 2019
NeurIPS Workshop on Systems for Machine Learning (MLSys), 2018

Flux is a deep learning framework built upon the foundation of the Julia language. It yields an environment that is simple, easily modifiable, and performant.

[preprint] [pdf] [bibtex] [code]

Talks

RayTracer.jl is a package designed for differentiable rendering. In this talk, I shall discuss the inverse graphics problem and how differentiable rendering can help solve it. Apart from this, we will see how differentiable rendering can be used in differentiable programming pipelines, along with neural networks, to solve classical deep learning problems.

[video] [slides]
Blog Posts
In this post, we extend the idea of differentiable programming to a complex self-driving car testbed and showcase the results we obtained.
In this post we explore how we have used Julia to re-think ML tooling from the ground up, and provide some insight into the work that modern ML tools need to do.
In this post I summarise the work done during my Google Summer of Code 2018 Project.
Software

RayTracer.jl
A fully differentiable ray tracer written in Julia. At its core, it is a simple ray tracer, with no special assumptions made for differentiability. We make use of the source-to-source AD framework Zygote to compute gradients with respect to scene parameters.
[source code] [documentation] [paper]
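The parameter-optimization idea behind the ray tracer can be sketched with a toy example. This is plain Python, not RayTracer.jl's API: the "renderer" here is a hypothetical stand-in that scales a fixed base image by a brightness parameter, and gradient descent on an image reconstruction loss recovers the true value, just as a differentiable renderer recovers scene parameters from a target image.

```python
# Toy illustration of inverse rendering via gradient descent.
# The real system differentiates a full ray tracer with Zygote;
# here the "renderer" is just pixel-wise scaling by a brightness scalar.

base = [0.2, 0.5, 0.9, 0.4]                   # stand-in for scene geometry
true_brightness = 2.0
target = [true_brightness * p for p in base]  # the "observed" image

def render(brightness):
    return [brightness * p for p in base]

def loss_and_grad(brightness):
    residuals = [i - t for i, t in zip(render(brightness), target)]
    loss = sum(r * r for r in residuals)
    # Analytic gradient of the squared error w.r.t. brightness;
    # an AD framework would produce this automatically.
    grad = sum(2 * r * p for r, p in zip(residuals, base))
    return loss, grad

brightness = 0.5                               # initial guess
lr = 0.1
for _ in range(200):
    _, grad = loss_and_grad(brightness)
    brightness -= lr * grad

print(round(brightness, 3))                    # converges toward 2.0
```

The same loop structure carries over to the real setting: only the renderer and the number of parameters change, while AD supplies the gradients.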

TorchGAN
A framework designed for research on Generative Adversarial Networks (GANs). It aims to provide a unified API to model variants of GANs. The model zoo contains examples of models built using torchgan.
[source code] [documentation] [paper]

Flux.jl
A deep learning framework written in pure Julia. I help maintain this package, with contributions to its CPU (NNlib.jl) and GPU (CuArrays.jl) backends. I have also helped in the development of the computer vision library Metalhead.jl.
[source code] [documentation] [paper]

Miscellaneous Stuff - Co-authors  /  Travel  /  Conferences  /  Intro ML Y19

Source stolen from here