Online Classification for Big Data

Highlighting my experience coding the Stochastic Gradient Descent algorithm

Stochastic Gradient Descent (SGD) is an online classification algorithm. It proves to be very efficient for classifying huge Big Data problems. Unlike the batch logistic regression algorithm, which is in some sense its ancestor, it takes one row at a time from the input data (also called a tuple) instead of a whole matrix. The data carried by the tuple undergoes computation, and the result gets added to the previously computed record.
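Since our own C# code can't be shared, here is a minimal Python sketch of the one-tuple-at-a-time idea: a single online update for logistic regression trained with SGD. The function name and the values are made up for illustration.

```python
import math

def sgd_update(weights, x, y, learning_rate):
    """One online SGD step for logistic regression.

    weights and x are equal-length lists; y is the 0/1 label of this tuple.
    Returns the updated weights (the running result described above).
    """
    # Predicted probability for this single row (sigmoid of the dot product).
    z = sum(w * xi for w, xi in zip(weights, x))
    p = 1.0 / (1.0 + math.exp(-z))
    # Error gradient for this tuple: (y - p) * x, added to the previous weights.
    return [w + learning_rate * (y - p) * xi for w, xi in zip(weights, x)]

# Example: one patient row with label 1 nudges the weights upward.
w = sgd_update([0.0, 0.0, 0.0], [1.0, 2.0, 4.0], 1, 0.1)
```

Note that nothing here needs the whole dataset in memory; each tuple is consumed and discarded, which is what makes the method attractive for Big Data.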
There is huge potential for this algorithm in Big Data analytics. The most common use case has been in the medical field, where a system accepts input from a patient, such as cholesterol level and other parameters, and with the help of this algorithm gives the probability of that individual having heart disease. link

Through this blog I intend to share my team’s experience coding this feature, as well as to document our major findings along the way.

It was pretty difficult for us to find any C# code for this algorithm. What we found initially proved to be of some help with logistic learning, but it had nothing on SGD, which was our objective.

Nevertheless, we found some help in a few articles and in the Java implementation of SGD in Apache Mahout. After much learning we had enough material to code it in .NET, and we came up with the following findings.
1. We coded online Stochastic Gradient Descent using the lazy stochastic method, which was easier to implement.
2. We calculated the probability per epoch, the log-likelihood per epoch, the error gradient, and the learning rate.
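The quantities in point 2 can be sketched per row as follows. The exponential-decay learning-rate schedule and the names `row_metrics`, `eta0`, and `decay` are illustrative assumptions, not our exact formulas.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def row_metrics(weights, x, y, step, eta0=0.5, decay=0.01):
    """Per-row quantities: probability, log-likelihood, error gradient,
    and a (hypothetical) exponentially decayed learning rate."""
    p = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
    log_likelihood = math.log(p) if y == 1 else math.log(1.0 - p)
    error_gradient = y - p                           # scalar part of the gradient
    learning_rate = eta0 * math.exp(-decay * step)   # assumed decay schedule
    return p, log_likelihood, error_gradient, learning_rate

p, ll, grad, lr = row_metrics([0.1, -0.2], [1.0, 3.0], 1, step=10)
```

Logging these four numbers for every row is what produces the large per-row output file discussed below.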

3. Input File Format:

For verification of the results, we took an input CSV file structured as follows.

Note here that the dependent column is B, and C, D, E, F are the independent columns. Let me portray a scenario for better understanding. Assume the first row (epoch) holds a patient’s entry. You can read this row as:
· person name – a
· having heart disease – yes, i.e. 1
· cholesterol level above normal – 2
· blood pressure – 4, etc.
As you can see, the dependent column holds binomial values, 0 or 1.
If you take note of the pattern, I deliberately made every 4th row 0 so that I would get a probability of 0.75 (3 cases out of 4 have heart disease). I had approximately 630 rows with such entries.
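A small Python sketch of how such a file could be generated. The names and the C–F values are placeholders, and 632 rows (divisible by 4) stands in for the roughly 630 we actually used.

```python
import csv
import io

def make_sample_rows(n=632):
    """Build rows shaped like the input file: column A is a name, column B the
    dependent (heart disease) label, C-F independent values. Every 4th row's
    label is 0, so the base rate is 3 out of 4 = 0.75. Values are made up."""
    rows = []
    for i in range(n):
        label = 0 if (i + 1) % 4 == 0 else 1       # every 4th row is negative
        rows.append([f"p{i}", label, 2, 4, 1, 3])  # placeholder C-F values
    return rows

rows = make_sample_rows()
positive_rate = sum(r[1] for r in rows) / len(rows)

# Write it out as CSV text, the format the classifier reads.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
```

With this construction the empirical probability of heart disease is exactly 0.75, which is the value the classifier’s probability output should converge toward.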

4. Output File Format

The output we achieved was in this format:
Result per row: ProbabilityLikelihood, Log Likelihood, Error Gradient, Learning Rate
Resultant headers for the LR classifier: Total Log Likelihood, Deviance, Mean Error, Max Error
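The summary headers can be computed from the per-row results roughly like this. The deviance below uses the common −2 × total-log-likelihood definition, which may differ from our exact formula, and the toy numbers are invented.

```python
def summarize(per_row):
    """Roll up per-row (log_likelihood, error) pairs into the summary line:
    total log-likelihood, deviance, mean absolute error, max absolute error."""
    total_ll = sum(ll for ll, _ in per_row)
    deviance = -2.0 * total_ll           # common definition; an assumption here
    errors = [abs(e) for _, e in per_row]
    return total_ll, deviance, sum(errors) / len(errors), max(errors)

# Toy per-row results: (log likelihood, error) for three rows.
stats = summarize([(-0.2, 0.1), (-0.7, 0.4), (-0.1, 0.05)])
```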
To us this huge output file made no sense at first. Since it was in CSV format, we decided to plot charts from it.
Surprisingly, we got distinct, continuous patterns for the above outputs.

[Chart: Learning Rate]

[Chart: Log Likelihood]

[Chart: Probability, approximately converging to 0.75]


After going through the charts, the accuracy of the results was verified. The learning rate showed an exponential decay, the log-likelihood tended to zero, the probability tended to 0.75, and the error rates also started to converge. Given a huge dataset of almost 1 TB, the observations might have shown even clearer patterns, up to the saturation limit.

I would like to acknowledge Cesar Souza; discussing the problem with him really helped us dive deep into the problem statement.

Please feel free to post your views and suggestions.


  1. Hi Nabarun! This is great. Will you be sharing your implementation someday, or is it private work? Either way, your work seems very interesting. I have also researched heart disease in the past, although
    with neural networks rather than logistic regression.

    Best regards,

  2. Hi Cesar,
    Unfortunately this is private work. It was a customer-oriented project and had a non-disclosure agreement. However, I am currently studying a Random Forest implementation, and I will be more than happy if I can contribute to
    I have no doubt your implementation with neural networks is equally interesting, like your other works. It is also great to see it migrated to CodePlex. Great going Cesar.

