Karnik Ram

I am a Research Associate in the Robotics Institute at Carnegie Mellon University, where I work on active robot perception using a novel controllable depth sensor with Prof. Srinivasa Narasimhan. Previously, I worked on a camera-less, low-power indoor navigation system and on assistive technology with Prof. Kris Kitani.

Earlier, I was an MS by Research student at IIIT Hyderabad, where I worked on improving the robustness of visual-inertial odometry algorithms in dynamic environments and on other robot perception problems with Prof. K. Madhava Krishna.

Even earlier, I was a carefree undergrad working on fun projects at SSN College of Engineering, Anna University.

Email | CV | GitHub | Twitter

What's new
  • April 2023: I will be starting as an ELLIS PhD student at TUM in Fall '23.
  • Oct 2022: Our work on assistive indoor navigation at CMU was featured in Meta Connect.
  • June 2022: Our work on using map priors for inertial odometry was accepted to IROS.
  • Oct 2021: I started as a research associate at CMU RI.
  • June 2021: Our work on RP-VIO was accepted to IROS.
Research

RP-VIO: Robust Plane-based Visual-Inertial Odometry for Dynamic Environments
Karnik Ram, Chaitanya Kharyal, Sudarshan Harithas, K. Madhava Krishna
International Conference on Intelligent Robots and Systems (IROS), 2021.

We present a monocular visual-inertial odometry (VIO) system that uses only planar features and their induced homographies, during both initialization and sliding-window estimation, for increased robustness and accuracy in dynamic environments. We evaluate on diverse sequences, including our own highly-dynamic simulated dataset, and show significant improvement over a state-of-the-art monocular VIO algorithm in dynamic environments.

Project page
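
For reference, a minimal sketch of the plane-induced homography the method builds on: for a plane n·X = d in the first camera frame, relative motion X2 = R X1 + t, and shared intrinsics K, points on the plane map between views as x2 ~ K (R + t nᵀ/d) K⁻¹ x1. The Python below is illustrative only and not part of the RP-VIO code.

    import numpy as np

    def plane_induced_homography(R, t, n, d, K):
        """Pixel-space homography induced by the plane n.X = d (first camera frame),
        given relative motion X2 = R @ X1 + t and shared intrinsics K."""
        H_euclidean = R + np.outer(t, n) / d           # 3x3 Euclidean homography
        return K @ H_euclidean @ np.linalg.inv(K)      # maps homogeneous pixels: x2 ~ H @ x1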

Learnable Spatio-Temporal Map Embeddings for Deep Inertial Localization
Dennis Melamed, Karnik Ram, Vivek Roy, Kris Kitani
International Conference on Intelligent Robots and Systems (IROS), 2022.

We propose a data-driven prior on possible user locations in a map by combining learned spatial map embeddings and temporal odometry embeddings. Our prior learns to encode which map regions are feasible locations for a user more accurately than previous hand-defined methods, and leads to a 49% improvement in inertial-only localization accuracy when used in a particle filter.

Project page
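
As a rough illustration of where such a prior sits in a particle filter: particles are propagated with noisy odometry and then reweighted by how feasible the map deems each location. This is a generic sketch; map_prior here is a stand-in for the learned spatio-temporal embedding score, not the actual model.

    import numpy as np

    def particle_filter_step(particles, weights, odometry, map_prior, motion_noise=0.1):
        """One predict-update step of an inertial particle filter with a map prior."""
        # Predict: apply the odometry increment with added noise
        particles = particles + odometry + np.random.normal(0.0, motion_noise, particles.shape)
        # Update: reweight each particle by how plausible its location is under the map
        weights = weights * np.array([map_prior(p) for p in particles])
        weights = weights / weights.sum()
        return particles, weights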

INFER: Intermediate Representations for Future Prediction
Shashank Srikanth, Junaid Ahmed Ansari, Karnik Ram, Sarthak Sharma, J. Krishna Murthy, K. Madhava Krishna
International Conference on Intelligent Robots and Systems (IROS), 2019.

We developed an autoregressive model to accurately predict future trajectories of traffic participants (vehicles). We demonstrate that using semantics provides a significant boost and allows the model to generalize to completely different datasets, collected across several cities and across countries where people drive on opposite sides of the road (left-hand vs right-hand driving).

Preprint | Video

CalibNet: Geometrically Supervised Extrinsic Calibration using 3D Spatial Transformer Networks
Ganesh Iyer, Karnik Ram, J. Krishna Murthy, K. Madhava Krishna
International Conference on Intelligent Robots and Systems (IROS), 2018.

We developed a self-supervised deep network, CalibNet, that automatically estimates the 6-DoF rigid body transformation between a 3D LiDAR and a 2D camera in real time. The network removes the need for calibration targets, significantly reducing calibration effort.

Preprint | Video

Projects

Smartphone-based Indoor Navigation
Vivek Roy, Karnik Ram, Kris Kitani  |  Summer 2022

Developed a turn-by-turn assistive indoor navigation app for iOS that combined three deep models for real-time localization: an LSTM for Bluetooth-based absolute position estimation, an LSTM for IMU-based relative position estimation, and an LSTM + U-Net for encoding floor map information. Data collected with Meta's Project Aria glasses were used to train the models.

Video demo | Presentation | Meta Connect feature

Automatic Calibration of Sensor Extrinsics
Karnik Ram  |  Summer 2018

Developed an end-to-end application with a graphical user interface for easily calibrating the extrinsics between range and visual sensors, as part of GSoC 2018. Automatic, target-less calibration algorithms based on plane matching and line matching were integrated into the app, allowing calibration to be performed in any generic scene without the need for specific targets.

Code | Video demo | Report

ARTPARK Robotics Challenge
Suraj Bonagiri, Viswanarayanan S, Sreeharsha P, Ashwin Rao, Karnik Ram  |  Summer 2021

Mentored and worked with a team in a national-level competition on a janitorial robot that autonomously navigates and cleans a washroom setup. The team was selected for the simulation and on-site rounds out of 136 teams and finished second overall.

Challenge page


Undergrad Projects

Direct Dense Image Registration
Karnik Ram, J. Krishna Murthy  |  Fall 2017

Estimated the camera motion between two frames by minimizing the photometric error between them. Implemented in MATLAB using a vanilla (approximate) Levenberg-Marquardt non-linear least-squares solver.

Results
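
The objective is easy to state: find the 6-DoF motion that minimizes the photometric error between the reference image and the warped current image. A toy Python version of the residual being minimized is below, assuming per-pixel depth is available for the reference frame; names and structure are illustrative, not the original MATLAB code.

    import numpy as np

    def photometric_residuals(I_ref, I_cur, depth_ref, K, R, t):
        """r = I_ref(x) - I_cur(w(x)): back-project each reference pixel with its depth,
        move it by (R, t), reproject into the current image, and compare intensities."""
        K_inv = np.linalg.inv(K)
        h, w = I_ref.shape
        residuals = []
        for v in range(h):
            for u in range(w):
                X = depth_ref[v, u] * (K_inv @ np.array([u, v, 1.0]))   # back-project
                x2 = K @ (R @ X + t)                                    # transform and project
                u2, v2 = int(x2[0] / x2[2]), int(x2[1] / x2[2])
                if 0 <= v2 < h and 0 <= u2 < w:
                    residuals.append(I_ref[v, u] - I_cur[v2, u2])       # nearest-neighbour lookup
        return np.array(residuals)
    # A Levenberg-Marquardt solver then minimizes sum(r**2) over the motion (R, t).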

Motion-based Camera-IMU Extrinsic Calibration
Karnik Ram, Kunal Chelani  |  Fall 2017

Implemented a pipeline to estimate the rigid body pose between an IMU and a camera by applying the Kabsch algorithm to their motion estimates. Based on Zachary Taylor and Juan Nieto's work on Motion-Based Calibration of Multimodal Sensor Arrays.

Report | Code
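
The rotation part of that alignment is the classic Kabsch problem: given matched motion vectors from the two sensors, find the rotation that best aligns them. A minimal sketch via SVD (variable names are mine, not from the project code):

    import numpy as np

    def kabsch(A, B):
        """Rotation R minimizing ||R @ A - B||_F for 3xN vector sets A and B
        (assumed zero-mean, as motion/direction vectors are)."""
        U, _, Vt = np.linalg.svd(B @ A.T)
        d = np.sign(np.linalg.det(U @ Vt))      # guard against a reflection solution
        return U @ np.diag([1.0, 1.0, d]) @ Vt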

Automated Stock Counting Using a Quadcopter
Karnik Ram, Harish S, Apeksha Avinash  |  Winter 2016

Developed a visual odometry module based on optic flow for the localization of a custom-built quadcopter and incorporated it into the PX4 navigation stack, enabling autonomous indoor navigation. All the computations were performed on-board, on an Odroid XU4. A stock counting module was implemented using ArUco markers.

Report | Code
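
The flow-based odometry idea, in short: for a downward-facing camera at a known altitude over a roughly planar floor, the mean pixel flow between frames scaled by altitude over focal length gives the lateral displacement. A rough OpenCV sketch; parameter values are illustrative, not the ones used on the quadcopter.

    import cv2
    import numpy as np

    def lateral_displacement(prev_gray, cur_gray, altitude, focal_px):
        """Estimate lateral camera translation (metres) between two downward-facing frames."""
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
        good = status.ravel() == 1
        flow = (nxt[good] - pts[good]).reshape(-1, 2)   # per-feature pixel displacements
        return flow.mean(axis=0) * altitude / focal_px  # scale mean flow to metres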

Motion Capture using a Kinect
Karnik Ram  |  Fall 2016

Used SimpleOpenNI to track body joint angles and mapped them to a model in Blender. This was developed as part of a body posture tracking project during my internship at HTIC.

Code | Video

Person Tracking for a Drone
Karnik Ram  |  Summer 2016

Developed and evaluated a person tracking application for a drone using a CUDA-accelerated monocular HOG detector, and another using disparity maps generated from a custom stereo rig. Both were tested on a Jetson TX1. This was developed during my internship at Navstik Labs.

Visual Servoing of a Mobile Robot
Karnik Ram, Yash Oza  |  Spring 2016

Developed a simple navigation stack for controlling the motion of a differential-drive mobile robot using visual feedback from an overhead camera. The stack consisted of a color-based localization module and a PID controller for issuing steering commands to the motors.

Code | Video
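
For completeness, the shape of the steering loop: a PID controller on the heading error computed from the overhead camera. This is a generic sketch; the gains and names are illustrative, not the project's actual values.

    class PID:
        """Minimal PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # e.g. steering = PID(kp=1.2, ki=0.0, kd=0.1).step(heading_error, dt=0.05)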

Low-Cost Flight Controller for a Quadcopter
Karnik Ram, Harish S, Aadithya V, Ashwin V, Harshwardhan  |  Fall 2015

Built and programmed a flight controller for stabilizing a custom-built quadcopter, using the ATmega328. Utilized interrupt service routines, I2C communication, PWM, and PID rate control loops.

Code

Seglio
Karnik Ram, Shankar S, Prashanth TV  |  Summer 2015

Developed an Android app that let students easily share their course textbooks with each other. The app has close to a thousand installations, has been featured in a prominent weekly magazine, and placed in the top 10 of the Apps for Chennai contest.

Code | Store | Press | Press

The SSN App
Karnik Ram, Adithya J, Varun R, Muthu CT  |  Winter 2014

Ideated and developed an Android application to notify students and faculty about important events, announcements, and other campus-related information such as bus routes and dining menus. It has close to two thousand users today and is the official app of SSN.

Code | Store | Appreciation


Blog

Built using Hugo and Jon's styling
Profile picture courtesy: Sriram
Last updated: April, 2023