
Data Science for working professionals

Securing a job in any domain takes serious preparation: you have to be trained for the role and know the field thoroughly, and people usually dedicate years to preparing for their desired roles. Shifting from a domain you have prepared for to a different one is rarely easy, and a strong gust of skepticism is sure to haunt you. The switch is hard for anyone, and it gets harder for working professionals learning data science, because they have to prepare for the new role while maintaining their current one.

Only if you plan the whole process of the domain shift in an organised and rational way can you turn it into a win-win situation.

Have a vision and plan your strategy

You must win at both games, learning and working. To do that, strategize so that your time spent learning data science never collides with your work life, and vice versa. Both activities matter equally, and each demands immense attention in its own right.

Let us start from scratch. Here are some possible concerns of a working professional:

  1. Time management
  2. Balancing the energy between two activities
  3. Scheduling
  4. Risk of affording a wrong move
  5. Risk of inefficient or improper execution

As a working professional, you will have to manage your responsibilities so that you stay in control of everything on your plate. With proper planning and the right approach, the concerns above can be easily tamed.

Firmly state your purpose of learning data science
Why do you want to move into data science when you already have a job? Define the purpose firmly. Understand that shifting to data science changes everything: you will have to develop new skill sets for the role you are targeting, your workflow will be different, and your future job role will have different goals and aims. Act consciously when you risk giving up the comfort and expertise of your current job, and be very sure why you are doing so. That clarity eliminates the skepticism about stepping out of your comfort zone. The effort you put into learning data science will never go in vain, because you will learn currently trending technologies and tools that will help you survive not only in data science but anywhere in the IT industry.

Have a soft target
People think only the role of ‘data scientist’ matters, but in fact there are several other roles in data science that matter significantly in the field. Choose one role you want to grow into and start preparing for it. This is a good approach for starters, because you do not have to be a scholar in every tool that has ever been used in the field; smartly target the topics that are essential for your role. When you work towards a specific, targeted role, you get the chance to understand it completely, along with its importance in the field. It is a smart move because you will not be confused about what exactly to study in the vast field of data science, and the field generally prioritizes those who hold deep expertise in a specific area. So be very sure about the role you want to serve in.

Plan the execution
To execute well you first have to design the implementation, and do it wisely and rationally. Revise your daily-life activities and reschedule them to balance learning and working.

Examine how you spend time on everyday things and revise it around your daily schedule. Make a note of your tasks every day, plan how much time you will invest in each, and try your best to act as decided. In other words, this way of dealing with things is called discipline: to have a structured day you will have to practice discipline in every possible way. Revise your activities, from sleeping habits to break sessions, and reschedule them so that things fall into the right place on their own. Set targets, set your own deadlines, and design the way you want things to work.

Networking and understanding the field
Engage with people who come from the field of data science; learn the insider story of the field and how it works. Field knowledge is essential, and remember that once you get into data science you will work in teams, so practice communication skills and confidence. Interact with people and ask them about the ways they reached the field; you will build good connections and collect great suggestions along the way. Start associating yourself with people who belong to data science, because you will need to get used to that world.

A good course
Everything you do and every effort you put in is aimed at learning data science, but if you make the mistake of choosing the wrong course, all of it can go in vain. Your purpose is to shift your domain into data science, and you cannot do that without the help of a good course. The course you choose should not only give you sound knowledge of data science but also help you keep to your planned schedule. There are many data science courses built specifically for working professionals, and it helps greatly if you choose the right one among them.

Conclusion
With the right approach and proper planning you can triumph at learning data science while maintaining a full-time job. Stick to your plans and preparations, seek help from a good course, practice as much as you can, and start involving yourself with the field. If you manage to execute your plans every day, you will surely reach your destination with ease.

Learnbay could help you
Learnbay's data science course is specially designed for working professionals, and the benefits provided in the course will help you keep your schedule balanced. Learnbay, powered by IBM, will help you throughout the journey of learning and experiencing data science.

Regression techniques in Machine Learning

Machine learning has become one of the trendiest technologies in the world, and it is used every day in our lives: virtual assistants, future predictions, video surveillance, social media services, spam mail detection, online customer support, search engine result prediction, fraud detection, recommendation systems, and more. Within machine learning, regression is one of the most important topics to learn, and there are different types of regression techniques, which we will cover in this article.

Introduction:

Regression algorithms such as linear regression and logistic regression are among the first algorithms people learn when they study machine learning. There are numerous forms of regression, each with its own specific features, applied accordingly. Regression techniques are used to find the relationship between the dependent and independent variables or features. Regression is a part of data analysis used to analyze continuous variables, and its main aims are forecasting, time series analysis, and modeling.

What is Regression?

Regression is a statistical method, used mainly in finance, investing, sales forecasting, and other business disciplines, that attempts to determine the strength of the relationship among variables.

To apply regression techniques, a dataset contains two types of variables:

  1. The dependent variable, usually denoted as Y
  2. The independent variable, denoted as X

And there are two broad forms of regression:

  1. Simple regression: with only a single independent feature/variable
  2. Multiple regression: with two or more independent features/variables

Indeed, across regression studies, the following six regression techniques are the ones mainly used for complex problems.

  • Linear regression
  • Logistic regression
  • Polynomial regression
  • Stepwise Regression
  • Ridge Regression
  • Lasso Regression

Linear regression:

Linear regression is basically used for predictive analysis, and it is a supervised machine learning algorithm. It is a linear approach to modeling the relationship between a scalar response and one or more predictor variables, and it focuses on the conditional probability distribution of the response given the predictors. The formula for linear regression is Y = mX + c.

where Y is the target variable, m is the slope of the line, X is the independent feature, and c is the intercept.
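As a quick illustration, here is a minimal sketch of fitting Y = mX + c with scikit-learn; the data is hypothetical, generated so that the true slope is 3 and the true intercept is 5:

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: Y = 3X + 5 plus some noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))                       # independent feature
y = 3.0 * X.ravel() + 5.0 + rng.normal(0, 1.0, size=50)    # target variable

model = LinearRegression().fit(X, y)
print(model.coef_[0], model.intercept_)   # estimated slope m and intercept c
print(model.predict([[4.0]]))             # predicted Y for X = 4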


Additional points on Linear regression:

  1. There should be a linear relationship between the variables.
  2. It is very sensitive to outliers, which can produce a high-variance or high-bias model.
  3. Multicollinearity can occur when there are multiple correlated independent features.

Logistic regression:

Logistic regression is used for classification problems on linearly separable datasets. In layman's terms, it applies when the dependent or target variable is in binary form: 1 or 0, true or false, yes or no. It is well suited to deciding whether an occurrence is likely to be a success or a failure.

 


Additional point:

  1. It is used for classification problems.
  2. It does not require a linear relationship between the dependent and independent features.
  3. It can be affected by outliers and can suffer from underfitting and overfitting.
  4. It needs a large sample size to make the estimation more accurate.
  5. It needs collinearity and multicollinearity to be avoided.
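As a minimal sketch of these points, here is logistic regression on one of scikit-learn's built-in binary (1 or 0) datasets; the dataset choice is just for illustration:

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Binary target: each sample is labeled 1 or 0
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(clf.score(X_test, y_test))       # classification accuracy on unseen data
print(clf.predict_proba(X_test[:1]))   # probability of each class for one sample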

Polynomial regression:

The polynomial regression technique is used to fit a model to data that is not linearly separable. It gives a curve that best suits the data points, rather than a straight line.
Polynomial regression still uses the least-squares approach; the purpose of the regression analysis is to model the expected value of the dependent variable y for a given value of the independent variable x.
The formula for this is Y = β0 + β1x + β2x² + … + βnxⁿ + e.
Additional points:
Look particularly at the curve towards the ends to see whether its shape and patterns make logical sense; higher-degree polynomials can lead to weird extrapolation results.
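A minimal sketch of degree-2 polynomial regression with scikit-learn, on hypothetical curved data, might look like this:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical non-linear data: a quadratic curve plus noise
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-3, 3, size=(40, 1)), axis=0)
y = 0.5 * X.ravel()**2 - X.ravel() + rng.normal(0, 0.3, size=40)

# Least squares on the expanded features [1, x, x^2]
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print(model.predict([[1.5]]))   # predicted y on the fitted curve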

Step-wise Regression:

Stepwise regression is used to fit statistical regression models for predictive purposes, and it is carried out automatically.
At every step, a variable is added to or removed from the set of explanatory variables. The main approaches are forward selection, backward elimination, and bidirectional elimination.
The formula for the standardized coefficient at each step is b = b(sxi/sy).
Additional points (a sketch follows this list):
  1. This regression works in two directions: it can add a predictor at each step or remove a predictor at each step.
  2. Forward selection starts with the most significant predictor in the ML model and then adds a feature at each step.
  3. Backward elimination starts with all the predictors in the model and then removes the least significant variable at each step.
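Classic stepwise regression is not built into scikit-learn, but a close cousin, forward/backward sequential feature selection, is; here is a sketch using SequentialFeatureSelector on a built-in dataset:

from sklearn.datasets import load_diabetes
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)

# Forward selection: start empty, add the most useful predictor at each step
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=4, direction='forward')
selector.fit(X, y)
print(selector.get_support())   # boolean mask of the selected features
# direction='backward' would start with all predictors and drop the least useful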

Ridge Regression: 

Ridge regression is a method used when the dataset suffers from multicollinearity, meaning the independent variables are strongly related to each other. Although the least-squares estimates remain unbiased under multicollinearity, their variances are large; by adding a degree of bias to the regression, ridge regression can reduce the standard errors.

Additional points:

  1. In this regression, the assumptions are the same as for least-squares regression, except that normality is not assumed.
  2. In this regression, coefficient values shrink but never become exactly zero.
  3. It is a regularization method, and it uses l2 regularization.

Lasso Regression:

Lasso is an abbreviation of Least Absolute Shrinkage and Selection Operator. It is similar to ridge regression in that it also penalizes the size of the regression coefficients, but it penalizes their absolute values. In addition, it can reduce the variability of the coefficients and improve the accuracy of linear regression models.


 

Additional points (a sketch follows this list):
  1. Lasso regression shrinks coefficients to zero, which helps with feature selection when building a proper ML model.
  2. It is also a regularization method, and it uses l1 regularization.
  3. If there are many correlated features, it picks only one of them and shrinks the others to zero.
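To see the contrast between the two penalties in practice, here is a minimal sketch comparing ridge (l2) and lasso (l1) coefficients on the same built-in dataset; the alpha values are illustrative:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso, Ridge

X, y = load_diabetes(return_X_y=True)

# Ridge (l2): coefficients shrink towards zero but never reach exactly zero
ridge = Ridge(alpha=1.0).fit(X, y)
# Lasso (l1): some coefficients become exactly zero (built-in feature selection)
lasso = Lasso(alpha=1.0).fit(X, y)

print(ridge.coef_)
print(lasso.coef_)   # note the exact zeros compared with the ridge coefficients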

 

Learnbay provides industry-accredited data science courses in Bangalore. We understand the convergence of technology in the field of data science, hence we offer significant courses like Machine Learning, TensorFlow, IBM Watson, Google Cloud Platform, Tableau, Hadoop, time series, R, and Python, with authentic real-time industry projects. Students gain an edge by being certified by IBM, and hundreds of students have been placed in promising companies for data science roles. By choosing Learnbay you can reach the most aspirational jobs of the present and the future.
The Learnbay data science course covers Data Science with Python, Artificial Intelligence with Python, and Deep Learning using TensorFlow. These topics are covered and co-developed with IBM.

Model vs Algorithm in ML

Machine learning works with “models” and “algorithms”, and both play an important role: the algorithm describes the process, and the model is built by following those rules.

Algorithms were derived by statisticians and mathematicians long ago, and those algorithms are studied and applied by individuals for their business purposes.

A model in machine learning is nothing but a function that takes certain input, performs the operations prescribed by the algorithm as best it can on that input, and gives a suitable output.

Some of the machine learning algorithms are:

  1. Linear regression
  2. Logistic regression
  3. Decision tree
  4. Random forest
  5. K-nearest neighbor
  6. K-means clustering

What is an algorithm in Machine learning?

An algorithm is a step-by-step approach, powered by statistics, that guides the machine learning model through its learning process. An algorithm is just one of several components that constitute a model.

There are several characteristics of machine learning algorithms:

  1. Machine learning algorithms can be represented using mathematics and pseudocode.
  2. The effectiveness of machine learning algorithms can be measured and represented.
  3. Machine learning algorithms can be implemented in any popular programming language.

What is the Model in Machine learning?

The model is not fully dependent on the algorithm alone; it also depends on factors such as feature selection, tuning parameters, and the cost function.

A model is the result of an algorithm: it is what you get when you implement the algorithm in code and train it on real data. A model captures what your program learned from the data by following the rules of the algorithm, and it is used to predict future results from the patterns the algorithm observed in the training data.

                Model = Data + Algorithm 

A model contains four major steps that are:

  1. Data preprocessing
  2. Feature engineering
  3. Data management
  4. Performance measurement

How do the model and the algorithm work together in machine learning?

For example:

y = mx + c is the equation of a line, where m is the slope and c is the y-intercept; this is nothing but linear regression with only one variable.
Similarly, decision trees and random forests have something like the Gini index, and K-nearest neighbors has the Euclidean distance formula.

So take the linear regression algorithm:

  1. Start with a training set of x1, x2, …, and y.
  2. Initialize the parameters c0, c1, c2 with random values.
  3. Choose the learning rate alpha.
  4. Then repeat updates of the form c0 = c0 − alpha · (h(x) − y), and likewise for c1 and c2.
  5. Repeat this process until the parameters converge.

When you employ this algorithm, you employ exactly these five steps in your model without changing them; your model is initiated by the algorithm, and the algorithm treats every dataset the same.
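Here is a minimal sketch of those five steps for one-variable linear regression, with hypothetical data generated so that the true slope is 2 and the true intercept is 1:

import numpy as np

# Step 1: a training set (x, y); here y = 2x + 1 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=100)

c0, c1 = 0.0, 0.0      # step 2: start with arbitrary parameter values
alpha = 0.01           # step 3: the learning rate
for _ in range(2000):  # steps 4-5: repeat the updates until (roughly) converged
    h = c0 + c1 * x                       # h(x), the current prediction
    c0 -= alpha * np.mean(h - y)          # c0 = c0 - alpha * mean(h(x) - y)
    c1 -= alpha * np.mean((h - y) * x)    # the matching update for c1
print(c0, c1)   # the trained "model": learned intercept and slope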

When you apply the algorithm to build a model, the model finds the values of m and c that we do not know. How?
Suppose you have three variables with values of x and y; your model will find the slopes m1, m2, m3 for the three variables, along with the intercept.
The model then works with those three slopes and the intercept to produce predictions on the dataset and forecast the future.

The “algorithm” may treat all the data the same, but it is the “model” that actually solves the problem. An algorithm is something you use to train the model on the data.

After building a model, a data science enthusiast tests it to measure its accuracy and fine-tunes it to improve the results.

This article may help you understand the difference between an algorithm and a model in machine learning. In summary, an algorithm is a process or technique that we follow to get a result or to find the solution to a problem.
A model is a computation or formula formed as the output of an algorithm that takes some input; so you can say that you build a model using a given algorithm.

 


Win over COVID-19

If you slightly change your perspective on the lockdown situation, you can find hope that this pandemic will end and that a brighter-than-ever future lies ahead. Go for data science; it will be worth it.

Text stemming in NLP

Human language remains an unsolved problem: there are more than 6,500 languages worldwide. Tons of data are generated every day as we speak, text, and tweet, and from voice-to-text on every social application, and to get insights from this text data we need technology such as NLP. There are two kinds of data: structured and unstructured. Structured data is used for machine learning models, while unstructured data is handled with natural language processing. Only about 21% of the available data is structured, so you can estimate how much NLP is required to handle the unstructured rest.

To get insights from unstructured data, we have to take the important information out of it. The key technique for analyzing text data is text mining: the technique of extracting useful information from unstructured data by identifying and exploring a large amount of text. In other words, text mining converts unstructured data into a structured dataset.

Normalization, lemmatization, stemming, and tokenization are the NLP techniques used to draw insights from the data.

Now let us see how text stemming works.

Stemming is the process of reducing inflected words to their “root” forms, mapping a group of words to the same stem. Stemmed words are what remain after the suffixes and prefixes added to a root word are removed, so stemming produces the grammatical variants of root words. Stemming is performed by NLP algorithms called stemming algorithms, or stemmers. The stemming algorithm strips the affixes from the word. For example, eats, eating, and eatery are all made from the root word “eat”; the stemmer removes “s”, “ing”, and “ery” from these words to extract the sense that the sentence is about eating something. The words are nothing but different tense forms of the verb.

This is the general idea: reduce the different forms of a word to their root word.
Words that are derived from one another can be mapped to a base word or symbol, especially if they have the same meaning.
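As a minimal sketch, NLTK's Porter stemmer (one of several stemming algorithms) shows the idea; the exact outputs vary by stemmer, and nltk is assumed to be installed:

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["eats", "eating", "eatery"]:
    print(word, "->", stemmer.stem(word))
# "eats" and "eating" both reduce to "eat"; "eatery" may not,
# which previews the over- and under-stemming errors discussed below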

Since we cannot be sure that stemming will give a 100% correct result, there are two types of error in stemming: over-stemming and under-stemming.

Over-stemming occurs when too much of a word has been cut off.
The results can be non-sensical items, where the meaning of the word has been lost, or the stemmer cannot distinguish between two stems, or it resolves two words to the same stem when they should differ from each other.

For example, take the four words university, universities, universal, and universe. A stemmer that resolves all four to “univers” is over-stemming. Instead, universal and universe should be stemmed together and university and universities stemmed together; all four do not fit a single stem.

Under-stemming is the opposite of over-stemming. It occurs when we have different words that actually are forms of one another. It would be nice for them all to resolve to the same stem, but unfortunately they do not.

This can be seen if we have a stemming algorithm that stems the words data and datum to “dat” and “datu”. You might think: well, just resolve both to “dat”. However, then what do we do with date? And is there a good general rule? That is how under-stemming occurs.


What are supervised learning, unsupervised learning, and reinforcement learning in Machine learning?

Supervised learning algorithms are widely used in industry to predict business outcomes and to forecast results on the basis of historical data. The output of any supervised learning model depends on the target variable. Supervised learning allows numerical, categorical, discrete, and linear datasets to be used to build a machine learning model. The target variable is known when building the model, and the model predicts the outcome for any new data point that arrives in the dataset, on the basis of that target variable.

The supervised learning model is used to teach the machine to predict the result for unseen input. It uses a known dataset to train the machine and to track its performance during training; the trained model then predicts the response for the testing data fed to it. Different machine learning models suit different kinds of datasets, and supervised algorithms use regression and classification techniques for building predictive models.

For example, you have a bucket containing different types of fruits, and you need to separate the fruits according to their features. You know the name of each fruit and its corresponding features: the features of the fruits are the independent variables, and the name of the fruit is the dependent variable, that is, our target variable. We can build a predictive model to determine the fruit name.

There are various types of supervised learning algorithms:

  1. Linear regression
  2. Logistic regression
  3. Decision tree
  4. Random forest
  5. Support vector machine
  6. K-nearest neighbors

Linear regression is used when we have continuous data. It defines the relationship between the variables, where we have independent and dependent variables. For example, what will a student's performance percentage be after studying a certain number of hours? The number of hours is the independent feature, and the student's performance is the dependent feature. Linear regression is further categorized into
simple linear regression, multiple linear regression, and polynomial regression.

Classification algorithms help to classify categorical values. They are used for categorical or discrete values, that is, values that belong to a particular class. Decision trees, random forests, and KNN are all used for categorical datasets. Popular applications of classification include bank credit scoring, medical imaging, and speech recognition. Handwriting recognition also uses classification to recognize letters and numbers, as do spam filtering (checking whether an email is genuine or spam), detecting whether a tumor is benign or cancerous, and recommender systems.

The support vector machine is used for both classification and regression problems. It creates a hyperplane to separate the categories of the data points. For example, the sentiment of a statement, positive or negative, can be determined with the help of an SVM.
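A minimal supervised-learning sketch, using scikit-learn's built-in iris flower dataset in place of the fruit bucket (the measurements are the independent variables, the species name is the known target):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)        # features and known target labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC().fit(X_train, y_train)        # support vector machine classifier
print(clf.score(X_test, y_test))         # accuracy on unseen test data
print(clf.predict(X_test[:3]))           # predicted classes for new data points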

Unsupervised learning algorithms

Unsupervised learning is a technique in which we do not need to supervise the model, as there is no target variable or labeled dataset: the model discovers its own structure in the data to produce an output. It is used for unlabeled datasets. Unsupervised learning algorithms allow you to perform more complex processing tasks than supervised learning, although unsupervised learning can be more unpredictable than other learning methods. It is easier to obtain unlabeled data from a computer than labeled data, which needs manual intervention.

For example, we have a bucket of fruits and we need to separate them, but no target variable is available to say whether a fruit is an apple, an orange, or a banana. Unsupervised learning categorizes these fruits into clusters so it can make a prediction when new data arrives.

Types of unsupervised learning:

  1. Hierarchical clustering
  2. K-means clustering
  3. K-NN (k nearest neighbors)
  4. Principal Component Analysis
  5. Singular Value Decomposition
  6. Independent Component Analysis

Hierarchical clustering is an algorithm that builds a hierarchy of clusters. It begins with every data point assigned to a cluster of its own; at each step, the two closest clusters are merged into one, and the algorithm ends when only one cluster is left.

K-means is an iterative clustering method for grouping a dataset: points are reassigned to the nearest cluster at every iteration, and you select the number of clusters yourself, so you need to define a good k to build a good model. K-nearest neighbors, though often discussed alongside it, is a supervised classifier and the simplest of all machine learning classifiers. It differs from other machine learning techniques in that it does not produce a model: it is a simple algorithm that stores all available cases and classifies new instances based on a similarity measure.
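A minimal k-means sketch, clustering the same iris measurements as before but without using the labels (k = 3 is chosen up front, as described above):

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True)   # note: the labels are deliberately ignored

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])          # cluster assigned to the first ten points
print(kmeans.cluster_centers_)      # the learned cluster centres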

PCA (principal component analysis) is a dimensionality-reduction algorithm. For example, if you have a dataset with 200 features/columns, you may need to reduce the number of features for the model to only the important ones. PCA does this while retaining most of the information in the dataset.
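A minimal PCA sketch, reducing a 30-feature built-in dataset to two components while reporting how much variance each component retains:

from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA

X, _ = load_breast_cancer(return_X_y=True)   # 569 samples, 30 features

pca = PCA(n_components=2).fit(X)
X_reduced = pca.transform(X)
print(X_reduced.shape)                  # (569, 2): only two features remain
print(pca.explained_variance_ratio_)    # variance retained by each component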

Reinforcement learning is another type of machine learning. It finds a suitable action for a particular situation so as to maximize a reward; the reward can be positive or negative based on the behavior of the agent. Reinforcement learning is employed by various software systems and machines to find the best possible behavior in a situation.

Main points in Reinforcement learning –

  • Input: the input is an initial state from which the model starts.
  • Output: there are many possible outputs, as there are many possible solutions to a particular problem.
  • Training: the training is based on the input; the model returns a state, and the user decides whether to reward or punish the model based on its output.
  • The model continues to learn.
  • The best solution is decided based on the maximum reward (a minimal sketch follows this list).
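Here is a minimal Q-learning sketch of these points, on a hypothetical five-state chain where the agent starts at state 0 and earns a reward for reaching state 4; all the numbers are illustrative:

import numpy as np

n_states, n_actions = 5, 2             # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))    # Q-table: learned value of (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(500):
    state = 0                                     # input: the initial state
    while state != 4:                             # the episode ends at the goal
        if np.random.rand() < epsilon:            # explore occasionally...
            action = np.random.randint(n_actions)
        else:                                     # ...otherwise exploit the table
            action = int(np.argmax(Q[state]))
        next_state = min(state + 1, 4) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == 4 else 0.0  # reward only at the goal
        # Q-update: nudge the value towards reward + discounted future value
        Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state])
                                     - Q[state, action])
        state = next_state

print(Q)   # after training, moving right should dominate in every state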


Decision Tree

Decision tree:

The decision tree is a classification algorithm in ML (machine learning). A decision tree is a decision support tool that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that contains only conditional control statements.

To understand the algorithm of the decision tree we need to know about the classification.

What is Classification?

Classification is the process of dividing datasets into different categories or groups by adding labels. It assigns a data point to a particular labeled group on the basis of some condition.

As we see in daily life, emails fall into three categories (Spam, Promotions, Personal), and they are classified so the right information reaches the right place. A decision tree can be used to classify the mail type and file each mail under the proper label.

Types of classification 

  • DECISION TREE
  • RANDOM FOREST
  • NAIVE BAYES
  • KNN

Decision tree:

  1. Graphical representation of all the possible solutions to a decision.
  2. A decision is based on some conditions.
  3. The decision made can be easily explained.

The following steps are used to reach a decision with a decision tree:

1. Entropy:

Entropy is basically used to build the tree: we compute the entropy of a class or attribute. A decision tree is built top-down from a root node and involves partitioning the data into subsets that contain instances with similar values (homogeneous). The ID3 algorithm uses entropy to calculate the homogeneity of a sample.

2. Information Gain:

The information gain is based on the decrease in entropy after a dataset is split on an attribute. Constructing a decision tree is all about finding the attribute that returns the highest information gain (i.e., the most homogeneous branches).

  • Gain(S, A) = Entropy(S) – ∑ [ p(S|A) . Entropy(S|A) ]
  • We intend to split on the attribute for which the information gain is highest.
  • The next step is to calculate the information gain for every attribute.

Here is a short example with a decision tree:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Load the classic play-tennis dataset
play_data = pd.read_csv('data/tennis.csv.txt')
print(play_data)

Output:

outlook temp humidity windy play
0 sunny hot high False no
1 sunny hot high True no
2 overcast hot high False yes
3 rainy mild high False yes
4 rainy cool normal False yes
5 rainy cool normal True no
6 overcast cool normal True yes
7 sunny mild high False no
8 sunny cool normal False yes
9 rainy mild normal False yes
10 sunny mild normal True yes
11 overcast mild high True yes
12 overcast hot normal False yes
13 rainy mild high True no 

Entropy of play:

  • Entropy(play) = – p(Yes) . log2p(Yes) – p(No) . log2p(No)

play_data.play.value_counts()   # 9 yes, 5 no out of 14 rows
Entropy_play = -(9/14)*np.log2(9/14) - (5/14)*np.log2(5/14)
print(Entropy_play)

Output:
0.94028595867063114

Information Gain on splitting by Outlook

  • Gain(Play, Outlook) = Entropy(Play) – ∑ [ p(Play|Outlook) . Entropy(Play|Outlook) ]
  • Gain(Play, Outlook) = Entropy(Play) – [ p(Play|Outlook=Sunny) . Entropy(Play|Outlook=Sunny) ] – [ p(Play|Outlook=Overcast) . Entropy(Play|Outlook=Overcast) ] – [ p(Play|Outlook=Rain) . Entropy(Play|Outlook=Rain) ]

play_data[play_data.outlook == 'sunny']   # 5 rows: 2 yes, 3 no

# Entropy(Play|Outlook=Sunny)
Entropy_Play_Outlook_Sunny = -(3/5)*np.log2(3/5) - (2/5)*np.log2(2/5)
print(Entropy_Play_Outlook_Sunny)

play_data[play_data.outlook == 'overcast']   # Entropy(Play|Outlook=Overcast)
# Since this subset is homogeneous (all yes), its entropy is 0

play_data[play_data.outlook == 'rainy']   # 5 rows: 3 yes, 2 no
Entropy_Play_Outlook_Rain = -(2/5)*np.log2(2/5) - (3/5)*np.log2(3/5)
print(Entropy_Play_Outlook_Rain)

# Gain(Play, Outlook) = Entropy(Play) – [ p(Sunny) . Entropy(Play|Sunny) ]
#   – [ p(Overcast) . Entropy(Play|Overcast) ] – [ p(Rain) . Entropy(Play|Rain) ]
Entropy_play - (5/14)*Entropy_Play_Outlook_Sunny - (4/14)*0 - (5/14)*Entropy_Play_Outlook_Rain   # ≈ 0.247

Other gains

  • Gain(Play, Temperature) – 0.029
  • Gain(Play, Humidity) – 0.151
  • Gain(Play, Wind) – 0.048

Conclusion: Outlook is the winner and thus becomes the root of the tree.

Time to find the next splitting criterion, starting with the sunny branch:

play_data[play_data.outlook == 'sunny']   # the sunny branch: 2 yes, 3 no

# Entropy(Play|Outlook=Sunny), as computed above
Entropy_Play_Outlook_Sunny = -(3/5)*np.log2(3/5) - (2/5)*np.log2(2/5)
print(Entropy_Play_Outlook_Sunny)

Information Gain for humidity

# Entropy for humidity = high is 0, and entropy for humidity = normal is 0
Entropy_Play_Outlook_Sunny - (3/5)*0 - (2/5)*0   # gain = 0.971, the full entropy

Information Gain for windy

  • False -> 3 -> [1+ 2-]
  • True -> 2 -> [1+ 1-]

Entropy_Wind_False = -(1/3)*np.log2(1/3) - (2/3)*np.log2(2/3)
print(Entropy_Wind_False)
Entropy_Play_Outlook_Sunny - (3/5)*Entropy_Wind_False - (2/5)*1   # gain ≈ 0.020

Information Gain for temperature

  • hot -> 2 -> [2- 0+]
  • mild -> 2 -> [1+ 1-]
  • cool -> 1 -> [1+ 0-]

Entropy_Play_Outlook_Sunny - (2/5)*0 - (1/5)*0 - (2/5)*1   # gain ≈ 0.571

Conclusion: Humidity is the best choice on the sunny branch:

play_data[(play_data.outlook == 'sunny') & (play_data.humidity == 'high')] 

Output:

outlook temp humidity windy play
0 sunny hot high False no
1 sunny hot high True no
7 sunny mild high False no 

play_data[(play_data.outlook == 'sunny') & (play_data.humidity == 'normal')]

Output:
outlook temp humidity windy play
8 sunny cool normal False yes
10 sunny mild normal True yes

Splitting the rainy branch:

play_data[play_data.outlook == 'rainy']   # Entropy(Play|Outlook=Rainy)
Entropy_Play_Outlook_Rainy = -(3/5)*np.log2(3/5) - (2/5)*np.log2(2/5)

Output:

outlook temp humidity windy play
3 rainy mild high False yes
4 rainy cool normal False yes
5 rainy cool normal True no
9 rainy mild normal False yes
13 rainy mild high True no

Information Gain for temp

  • mild -> 3 [2+ 1-]
  • cool -> 2 [1+ 1-]

Entropy_Play_Outlook_Rainy - (3/5)*0.918 - (2/5)*1

Output:
0.020150594454668602

Information Gain for Windy:

Entropy_Play_Outlook_Rainy - (2/5)*0 - (3/5)*0

Output:
0.97095059445466858 

Information Gain for Humidity

  • High -> 2 -> [1+ 1-]
  • Normal -> 3 -> [2+ 1-]

Entropy_Play_Outlook_Rainy_Normal = -(1/3)*np.log2(1/3) - (2/3)*np.log2(2/3)
print(Entropy_Play_Outlook_Rainy_Normal)
print(Entropy_Play_Outlook_Rainy - (2/5)*1 - (3/5)*Entropy_Play_Outlook_Rainy_Normal)

Output:
0.91829583405448956
0.019973094021974891 

Final tree:

Decision trees are popular among non-statisticians because they produce a model that is very easy to interpret: each leaf node is represented by an if/then rule, and cases that satisfy the rule are placed in that node. They are non-parametric and therefore do not require normality assumptions about the data. Parametric models specify the form of the relationship between predictors and response, for example a linear relationship in regression; in many cases, however, the nature of the relationship is unknown, and that is where non-parametric models are useful.

Decision trees can handle data of different types, including continuous, categorical, ordinal, and binary, and transformations of the data are not required. They are useful for detecting important variables and interactions and for identifying outliers. They also handle missing data by identifying surrogate splits in the modeling process (surrogate splits are splits highly associated with the primary split), whereas in other models records with missing values are omitted by default.


Understanding different job positions of Data Science

Data Science offers many more job opportunities than just “Data Scientist”, and they are just as exciting and interesting as Data Science itself.

You could become a pro in Data Science through self-study

Learning Data Science is a little tricky, but here you may find something important!

Differentiating Data Scientist and Data Analyst

There is a notable difference between a Data Scientist and a Data Analyst, and it is interesting to know about them both.
