<feed xmlns="http://www.w3.org/2005/Atom"> <id>https://nikhil-chigali.github.io/</id><title>Nikhil Chigali</title><subtitle>I'm Nikhil, an AI &amp; ML engineer working on Agentic AI, RAG systems, and LLMOps. I build LLM pipelines that run in production: hybrid retrieval, multi-agent coordination, evaluation frameworks, and the monitoring to keep them reliable. Previously an ML &amp; DS Consultant at Microsoft; MS CS from Rice University.</subtitle> <updated>2026-04-19T17:15:27+00:00</updated> <author> <name>Nikhil Chigali</name> <uri>https://nikhil-chigali.github.io/</uri> </author><link rel="self" type="application/atom+xml" href="https://nikhil-chigali.github.io/feed.xml"/><link rel="alternate" type="text/html" hreflang="en" href="https://nikhil-chigali.github.io/"/> <generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator> <rights> © 2026 Nikhil Chigali </rights> <icon>/assets/img/favicons/favicon.ico</icon> <logo>/assets/img/favicons/favicon-96x96.png</logo> <entry><title>Simple Perceptron Training Algorithm: Explained</title><link href="https://nikhil-chigali.github.io/posts/simple-perceptron-training-algorithm/" rel="alternate" type="text/html" title="Simple Perceptron Training Algorithm: Explained" /><published>2018-07-06T18:30:00+00:00</published> <updated>2018-07-06T18:30:00+00:00</updated> <id>https://nikhil-chigali.github.io/posts/simple-perceptron-training-algorithm/</id> <content type="text/html" src="https://nikhil-chigali.github.io/posts/simple-perceptron-training-algorithm/" /> <author> <name>Nikhil Chigali</name> </author> <category term="deep-learning" /> <category term="neural-networks" /> <summary>This post was originally published on Medium on July 7, 2018. View original article. There’s this quote I love from Bill Gates: “I choose a lazy person to do a hard job. Because a lazy person will find an easy way to do it.” That’s honestly how I think about AI. 
We’re trying to solve some really hard problems, so we look around for the easiest path, and more often than not, nature has already...</summary> </entry> <entry><title>Logistic Regression in a Nutshell</title><link href="https://nikhil-chigali.github.io/posts/logistic-regression-in-a-nutshell/" rel="alternate" type="text/html" title="Logistic Regression in a Nutshell" /><published>2018-06-28T18:30:00+00:00</published> <updated>2018-06-28T18:30:00+00:00</updated> <id>https://nikhil-chigali.github.io/posts/logistic-regression-in-a-nutshell/</id> <content type="text/html" src="https://nikhil-chigali.github.io/posts/logistic-regression-in-a-nutshell/" /> <author> <name>Nikhil Chigali</name> </author> <category term="machine-learning" /> <category term="supervised-learning" /> <summary>This post was originally published on Medium on June 29, 2018. View original article. Get ready to classify some data! Logistic regression is a supervised ML algorithm. A bit of calculus, probability, and statistics will go a long way here. Problem Setup Say we have a training dataset with binary labels (0 or 1). We want to train a model that tells us the probability of a sample belonging ...</summary> </entry> <entry><title>All You Need to Know About Maximum Likelihood Estimation</title><link href="https://nikhil-chigali.github.io/posts/maximum-likelihood-estimation/" rel="alternate" type="text/html" title="All You Need to Know About Maximum Likelihood Estimation" /><published>2018-06-20T18:30:00+00:00</published> <updated>2018-06-20T18:30:00+00:00</updated> <id>https://nikhil-chigali.github.io/posts/maximum-likelihood-estimation/</id> <content type="text/html" src="https://nikhil-chigali.github.io/posts/maximum-likelihood-estimation/" /> <author> <name>Nikhil Chigali</name> </author> <category term="mathematics" /> <category term="statistics" /> <summary>This post was originally published on Medium on June 21, 2018. View original article. 
Probability Basics Probability is the chance of an event $A$ occurring out of all possible events in the sample space $S$: $P(A) = \frac{|A|}{|S|}$ Joint Probability Joint probability is when multiple events occur at the same time. For events $A$ and $B$:...</summary> </entry> <entry><title>Gradient Descent: Backbone of Most Popular Machine Learning Algorithms</title><link href="https://nikhil-chigali.github.io/posts/gradient-descent-backbone-of-ml-algorithms/" rel="alternate" type="text/html" title="Gradient Descent: Backbone of Most Popular Machine Learning Algorithms" /><published>2018-06-17T18:30:00+00:00</published> <updated>2018-06-17T18:30:00+00:00</updated> <id>https://nikhil-chigali.github.io/posts/gradient-descent-backbone-of-ml-algorithms/</id> <content type="text/html" src="https://nikhil-chigali.github.io/posts/gradient-descent-backbone-of-ml-algorithms/" /> <author> <name>Nikhil Chigali</name> </author> <category term="mathematics" /> <category term="optimization" /> <summary>This post was originally published on Medium on June 18, 2018. View original article. Introduction to Machine Learning Before I get into gradient descent, let me quickly walk through how I think about machine learning. Almost every ML algorithm boils down to four steps: Define a model Make predictions on the training data Calculate the prediction error Tune the model’s weights to r...</summary> </entry> </feed>
