---
layout: default
description: I'm Rachit Bansal and I work on Natural Language Processing. More details inside!
---

i_am_rachit{: style="float: right; margin: 0px 20px; width: 180px;" name="fox"}

I am Rachit, an incoming PhD student at Harvard University. Broadly, I am interested in making language models useful, controllable, and accessible. I am also interested in robust evaluation and analysis.

Over the past few years, I took my first steps as a researcher thanks to some wonderful people and collaborations. Most recently, I was a pre-doctoral researcher at Google DeepMind, working on modularizing LLMs with Partha and Prateek. Before that, I pursued my bachelor's thesis research with Yonatan at the Technion in Israel, where I had a great time studying how intrinsic properties of a neural network are informative of its generalization behaviour. Earlier, I was a research intern at Adobe's Media and Data Science Research Lab, where I worked on commonsense reasoning for large language models.

I was fortunate to collaborate with Danish for more than two years to evaluate explanation methods in NLP[^1]. I also had an amazing time working with Naomi studying mode connectivity in loss surfaces of language models[^2].

I also spent a couple of wonderful summers in the Google Summer of Code program with the Cuneiform Digital Library Initiative (CDLI), where I was advised by Jacob and Niko.

## News and Timeline

### 2024

- **August** Starting my doctorate at Harvard University!
- **May** Presenting our work on composing large language models at ICLR 2024 in Vienna!

### 2023

- **May** Presenting our work on linear mode connectivity at ICLR 2023 in Kigali!

### 2022

- **September** My bachelor's thesis work, done at the Technion, was accepted at NeurIPS 2022!
- **August** Joining Google Research India as a pre-doctoral researcher.
- **June** Releasing the pre-print of our work analyzing linear mode connectivity and out-of-distribution behaviour, led by Jeevesh and mentored by Naomi.
- **May** Two papers on commonsense and factual reasoning done at Adobe MDSR accepted at NAACL 2022!
- **January** Starting my bachelor's thesis with Yonatan at the Technion, Israel!

### 2021

- **November** After a year-long review and revision process, our work with Danish evaluating model explanations was accepted at TACL.
- **July** Attending the 11th Lisbon Machine Learning Summer School (LXMLS 2021).
- **May** Our work with CDLI on machine translation and sequence labeling for extremely low-resource languages was accepted at ACL SRW 2021.
- **May** Starting as a Research Intern at Adobe's Media and Data Science Research (MDSR) lab.

### 2020

- **November** Started collaborating with Danish (LTI, CMU) on evaluating neural explanations for NLP.



[^1]: Started with a meek, awe-inspired email.
[^2]: Started with a message on MLC's Discord channel.