Jayanta Sadhu

  •  Hi, I'm Jayanta - PhD Fall'25 aspirant || MLE @ IQVIA || Researcher @ BUET || SDE @ IQVIA
  •  Passionate about Deep Learning and AI Systems, especially in the fields of Adversarial ML, Efficient ML/Fine-tuning, and AI Fairness.
  •  First author of 3 research papers.
  •  Graduated from CSE @ BUET
  •  Recipient of the RISE Research Grant from RISE @ BUET.
  •  CGPA: 3.85/4.00. Received the Dean's List Scholarship for academic excellence.


About Me

👋 Hi, I'm Jayanta. I'm a learner and an aspiring researcher. My interests lie in Natural Language Processing (NLP), AI Fairness, Trustworthy ML, and Efficient ML. My current career goal is to pursue a Ph.D. in my field of interest.

💼 I am currently a Machine Learning Engineer @ IQVIA.

📜 I've been the first author of 3 research papers on bias and fairness. Our paper titled "An Empirical Study on the Characteristics of Bias upon Context Length Variation for Bangla" has been accepted at the Findings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024). My other two works are on emotion-related and social biases in LLM responses for the Bangla language. Among these, the paper titled "An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models" has been accepted at the 5th Workshop on Gender Bias in Natural Language Processing at ACL 2024! Please refer to my research section for more details.

🏫 I am a graduate of the Department of Computer Science and Engineering (CSE) at Bangladesh University of Engineering and Technology (BUET). It is a top-ranked engineering university in Bangladesh and takes pride in producing some of the country's best engineers.

📚 My undergraduate thesis was on "Detecting Gender Bias in Bangla Language Models" under the supervision of Dr. Rifat Shahriyar. In this work, we presented a detailed analysis of the responses of Bangla language models in both static and contextual settings for the detection of gender bias. Our nuanced analysis for the Bangla context gave us insight into how the models respond to the inclusion of explicit or implicit gender.

🏆 During my undergraduate life at BUET, I regularly participated in and excelled at multiple competitions, including Dhaka AI 2020 and HackNSU 2020. I have also taken up leadership roles in departmental events, especially our department's annual event, BUET CSE Fest.

🎡 In my leisure time, I enjoy reading thriller novels, listening to music, and watching TV series and movies.

😃 Fun Fact: My personality type is INFP. I like to call myself an Empirical Skeptic.



Work Experience

  • IQVIA
    Machine Learning Engineer
    June 2024 - Present
    • Developing and managing end-to-end Machine Learning systems, focusing on fine-tuning LLMs on curated datasets with PEFT techniques such as LoRA for enhanced data analytics (a minimal fine-tuning sketch follows this entry).
    • Working with agentic Retrieval-Augmented Generation (RAG) systems to improve data retrieval and support data-specific answer generation.
    • Using Amazon SageMaker and Docker for model training and deployments.
    • Collaborating with a supportive team to foster growth and enhance engineering capabilities.
    Tags: Generative AI, LLM, Data Preparation, Feature Engineering, Multi-Agent Systems, Agentic RAG, Model Fine-tuning, PEFT/LoRA, SageMaker, Docker, PyTorch
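
Below is a minimal, illustrative sketch of the kind of PEFT/LoRA fine-tuning mentioned above, using the Hugging Face transformers, datasets, and peft libraries. The base model name, data file, and hyperparameters are placeholders, not the actual production setup.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face `transformers` + `peft`.
# The base model, data file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "meta-llama/Llama-2-7b-hf"              # placeholder base LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters; only the adapter weights are trained.
lora_cfg = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32,
                      lora_dropout=0.05, target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()                   # typically <1% of all weights

# A curated instruction dataset with a single "text" field (placeholder file).
data = load_dataset("json", data_files="curated_train.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")                    # saves only the LoRA adapter
```

The saved adapter can later be loaded on top of the frozen base model (e.g., with peft.PeftModel.from_pretrained) or merged into it for deployment, which keeps the storage and serving footprint small.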


Research & Publications

An Empirical Study of Gendered Stereotypes in Emotional Attributes for Bangla in Multilingual Large Language Models

Jayanta Sadhu, Maneesha Rani Saha (BUET), Dr. Rifat Shahriyar (Professor, BUET)

Accepted at the 5th Workshop on Gender Bias in Natural Language Processing at ACL 2024
April 2024 - July 2024

Details: In this study, we investigated gendered stereotypes in emotional attributes within multilingual large language models (LLMs) for Bangla. The study draws on historical patterns in Bangla-speaking regions, where women have often been associated with emotions such as empathy and guilt, while men have been linked to emotions such as anger and authority. We evaluated both closed- and open-source LLMs to identify gender biases in emotion attribution. The project included qualitative and quantitative analysis of LLM responses to emotion attribution tasks for gendered personas in Bangla and highlighted the influence of gendered role selection on these outcomes. We also developed and publicly shared datasets and code to support further research in Bangla NLP. Please refer to the paper link for more details. An illustrative sketch of such a query appears after this entry.

Tags: LLM, Emotion Attributes, Bangla Gender Bias, Inference
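
For readers unfamiliar with this style of analysis, here is a small, hypothetical sketch of an emotion-attribution query via the OpenAI Python client (v1+). The model name, prompt wording, and personas are placeholders shown in English for readability; they do not reproduce the paper's exact protocol, which used Bangla.

```python
# Hypothetical emotion-attribution probe: same situation, different gendered
# persona. A bias analysis aggregates such pairs over many situations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def attributed_emotion(persona: str, situation: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model which single emotion the persona would mainly feel."""
    prompt = (f"{persona} {situation} "
              "Which emotion would this person mainly feel? Answer with one word.")
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

situation = "A colleague took credit for their work during a meeting."
print(attributed_emotion("The person is a man.", situation))    # male persona
print(attributed_emotion("The person is a woman.", situation))  # female persona
```
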
Social Bias in Large Language Models For Bangla: An Empirical Study on Gender and Religious Bias

Jayanta Sadhu, Maneesha Rani Saha (BUET), Dr. Rifat Shahriyar (Professor, BUET)

May 2024 - July 2024

Details: This research project, supervised by Dr. Rifat Shahriyar, examined social biases in large language models (LLMs) for the Bangla language. We investigated two distinct types of social bias (gender and religious) in Bangla LLMs, developed a curated dataset to benchmark bias measurement, and implemented two probing techniques for bias detection. This work represents the first comprehensive bias assessment study for Bangla LLMs, with all code and resources made publicly available to support further research in bias detection for Bangla NLP. Please refer to the paper link for more details.

Tags: LLM, Bangla Gender Bias, Bangla Religious Bias, Fairness, OpenAI, Data Analysis
An Empirical Study on the Characteristics of Bias upon Context Length Variation for Bangla

Jayanta Sadhu*, Ayan Antik Khan (BUET)*, Abhik Bhattacharya (BUET), Dr. Rifat Shahriyar (Professor, BUET) (* equal contribution)

Accepted at Findings of ACL 2024
June 2023 - April 2024

Details: This research project, which formed the basis of my undergraduate thesis under the supervision of Dr. Rifat Shahriyar, explored the nuances of gender bias detection in Bangla language models. We constructed a curated dataset for detecting gender bias in both static and contextual setups and compared different bias detection methods specifically tailored to Bangla. The study established benchmark statistics using baseline methods and analyzed bias in various language models supporting Bangla, including BanglaBERT, MuRIL, and XLM-RoBERTa. A key focus was understanding how the context length of templates and sentences affects bias detection outcomes. This work serves as a foundational study for bias detection in Bangla language models. Please refer to the paper link for more details. A generic illustration of contextual (masked-token) probing appears after this entry.

Tags: Bias and Fairness, Bangla, Random Effect Models, Contextual Embeddings, Bias Benchmarking, Statistical Analysis, Data Distribution
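
As a generic illustration of the contextual (masked-token) probing referenced above, the sketch below compares what a masked language model predicts for the same slot when only the gendered subject changes. The model name and templates are placeholders and the scoring is deliberately simplified; this is not the exact method or metric used in the paper.

```python
# Generic contextual bias probe: inspect a masked LM's top predictions for the
# same slot under two gendered contexts. Model and templates are placeholders.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "xlm-roberta-base"   # placeholder multilingual MLM with Bangla support
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def top_fillers(template: str, k: int = 5):
    """Return the k most probable tokens for the [MASK] slot in `template`."""
    text = template.replace("[MASK]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(dim=-1)
    top = probs.topk(k)
    return [(tokenizer.decode([int(i)]).strip(), float(p))
            for i, p in zip(top.indices, top.values)]

# Same sentence frame, only the gendered subject differs
# (Bangla: "The boy / The girl is very [MASK].").
print(top_fillers("ছেলেটি খুব [MASK]।"))
print(top_fillers("মেয়েটি খুব [MASK]।"))
```

Aggregated over many templates and attribute words, differences between such prediction distributions are the kind of signal that template-based bias metrics quantify.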

Education

  • B.Sc. in Computer Science and Engineering
    Bangladesh University of Engineering and Technology
    May 2018 - May 2023
    CGPA: 3.85/4.00
  • H.S.C. in Science
    Dhaka Residential Model College
    July 2015 - June 2017
    GPA: 5.00/5.00

Technical Skills

  • Programming Languages
    C C++ C# Java Python JavaScript SQL Kotlin Bash
  • Machine Learning Libraries
    NumPy Pandas Scikit-learn PyTorch Transformers OpenAI LangChain OpenGL
  • Frameworks
    ASP.NET Django Bootstrap jQuery Node.js Flask
  • Databases
    Oracle PostgreSQL MySQL MSSQL MongoDB
  • Cloud
    AWS Vast.ai
  • Security Tools
    Autopsy Wireshark Nmap
  • Graphic Design
    Figma Canva
  • Miscellaneous
    LaTeX Firebase Git Docker Kubernetes
  • Soft Skills
    Team Management Problem Solving Public Speaking Creative Writing