akshath raghav ravikiran
ECE @ Purdue | AI Engineer/Researcher

I’m a final-year student @ Purdue, majoring in Electrical & Computer Engineering.
I’m passionate about building robust, user-facing solutions that combine explainable learning algorithms, seamless system interoperability, and hardware–software co-optimized platforms.
I’m especially excited about AI accelerator microarchitecture, ML compilers, and data-center infrastructure. Feel free to reach out to me at araviki[at]purdue[dot]edu.
Fun lil' animation of the Fridrich (CFOP) speedsolving method
background
- S’25: Worked on the AI Hardware Team at SoCET, where I was responsible for several parameterizable IPs used in the AMP0 and AMP1 Tensor-Cores. Implemented Gustavson-based sparse matrix-multiplication benchmarks for the SoCET GPU team’s V3 architecture. Completed my capstone project (through ECE 49022) and won the Senior Design award for the BoilerNet system. Led the backend team for the Beck’s Hybrids-sponsored project through The Data Mine. Funded by HII to work on a custom FFT-accelerator ASIC.
- F’24: Joined the Purdue SoCET team as a Teaching Assistant. Joined the I-GUIDE team as a Research Assistant (TDM HDR Fellowship), where we worked on scaling an HPC workflow for distributed inference on Apache Spark (+Sedona) through GCP (accepted at the I-GUIDE Forum ‘25).
- Smr’24: Joined the Purdue SoCET group on the Digital Design team. Over the summer, I added the Zicond extension to the RISC-V core for the AFTx07 tape-out. Interned at VLSI System Design, where I implemented the TinySpeech family of speech-recognition models for their vsdsquadronmini board. I also wrote an ANSI-C inference engine that runs out of the box in 8-bit precision, with 91%+ accuracy for embedded inference.
- F’23 - S’24: Worked at the Duality Lab, where we re-engineered the MaskFormer segmentation model (funded by Google!) from its PyTorch-based artifact to TensorFlow for publication in the TF Model Garden. You can find our paper here and code here. I also generated figures for the PeaTMOSS paper (accepted at MSR’24).
- S’24: Led a project at the CVES group @ Purdue ECE, where our goal was to define and evaluate reproducibility within AI/ML projects. I wrote the codebase for our evaluation pipeline and statistically quantified the importance of its parameters.
- S’24: I was involved in multimodal language-model (LM) understanding projects at the e-lab. I built eugenie & grammarflow.
- S’23 - Smr’23: Interned at Ambee, where I deployed a worldwide fire-forecasting system (F3) into their API and wrote automated scripts for their environmental-data-focused data lakes (still in use). You can find the whitepaper here and access the API here.
- F’22 - S’23: Helped lead a project supervised by Prof. Yuan Wang (currently at Stanford), where we aimed to correlate lightning activity with wildfire spread. Wrote (big-)data-interfacing code for satellites across EUR/EUS/SAR and was responsible for packaging the data for use in a ConvLSTM model from DeepCube’s short-term forecasting work.
Find my reports here.
extras
In my free time, I enjoy photography, reading manga, speed typing (130+ wpm), and whittling.
I can speak English, Hindi, Tamil, and Kannada. Currently, I’m learning German and ASL.
I enjoy bowling, billiards, and golf. I follow the NBA (Warriors) and the IPL (RCB).
news
May 01, 2025 | Grateful to receive the “Senior Design” award for BoilerNet.
Aug 06, 2024 | Grateful to receive the Purdue OUR Scholars and DUIRI Scholarships. Excited to be starting as a research assistant on the NSF-funded I-GUIDE team.
Apr 30, 2024 | Our team’s report, “A Partial Replication of MaskFormer in TensorFlow on TPUs for the TensorFlow Model Garden,” is now available on arXiv! Find the code here, and the report here.
Apr 23, 2024 | Received the Outstanding Sophomore in VIP award for my work at the CVES group. Read about it here.
Mar 05, 2024 | Results for GrammarFlow updated to reflect strong parsing guarantees for LLM outputs. Tested with the Llama, Mistral, and Dolphin families. Read about it here.
latest posts
Aug 06, 2024 | [TL;DR] Energy-Based Transferability Estimation
Mar 23, 2024 | Set up Llama.cpp on university compute clusters 🦙