Designing and Implementing a Data Science Solution on Azure
Hello and welcome. In the previous section, we covered various modules that we can use for data transformation. Today we are going to build our first model using logistic regression. So what is logistic regression? It is used to predict the probability of an outcome. The outcome can be binary, such as yes or no (will this customer buy my product?), or one of multiple classes, such as the quality of a wine: will it be low grade, high grade, or average? Or will this email be a primary, promotional, or social email? The point to note is that it is a supervised learning method, and we must provide a data set that already contains the outcomes to train the model. All right, so what does that mean? Before we start building the model, let's first try to understand how the equation for logistic regression has been derived. As you know, a simple regression may have an equation such as y = b0 + b1 * x. In a previous example of number of claims versus age of the insured, we got an equation of the form number of claims = 18 + b1 * age, which gives us a line that passes through the points. And if we get a new data point with an age value, we can predict the number of claims using this regression line. Depending on the equation, the line can take any shape or form. But what if the data is not as we expected, and the question we are trying to answer has only two possible outcomes, such as: what is the probability that this customer will buy my product? If we draw a straight line through such data points, it will not give us the correct result. All right, so let's take a step back and ask ourselves: what is the question that we are trying to answer here? Well, the question is, what is the probability? As we all know, a probability is always positive and always less than or equal to one. But our equation here is y = b0 + b1 * x, and y can take any value, including a negative value. But a probability should always be positive.
So what do we do about it? Well, let's try to solve it and see where we end up. First, we apply the exponential function to both sides of the equation. This takes care of the first requirement, that the value should be positive, because the exponential of any number is never negative. All right. The second condition for a probability is that it has to be between zero and one. How do we do that? Well, it's very simple. To make it less than one, we simply divide by a number that is greater than the value itself. So we add one to the exponential of y and divide by that, which gives us p = e^y / (1 + e^y). That is the probability of success. And if this is the probability of success, then the probability of failure is 1 - p, because the overall probability is always one. If we solve this further, the probability of failure is 1 / (e^y + 1). Okay? And if you solve it further still, you will get p / (1 - p) = e^y. Next, we take the log of both sides, which cancels the exponential, and y is nothing more than log(p / (1 - p)), which is nothing more than b0 + b1 * x. And now, when we plot this on the graph, it looks like an S-shaped curve, which represents our data much more accurately than a simple straight line. Now we can predict that the data points above a particular threshold, let's say 0.5, will be positive and the others will be negative. The threshold value can be decided based on the business application and on the cost of false positives or false negatives. What do I mean by that? Well, if you are trying to find out whether a particular transaction is fraudulent or not, you may want to set this threshold a little low. That may lead to some false positives.
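The derivation above can be checked numerically. Here is a minimal Python sketch of the sigmoid (the e^y / (1 + e^y) transformation) and its inverse, the log-odds; the coefficients b0 and b1 are hypothetical values chosen purely for illustration, not numbers from the lecture's data.

```python
import math

def sigmoid(y):
    """Map the linear score y = b0 + b1*x to a probability in (0, 1)."""
    return math.exp(y) / (1.0 + math.exp(y))

def log_odds(p):
    """Inverse of the sigmoid: recovers y = log(p / (1 - p))."""
    return math.log(p / (1.0 - p))

# Illustrative coefficients (hypothetical, not fitted to any real data set).
b0, b1 = -4.0, 0.8

for x in [2, 5, 8]:
    y = b0 + b1 * x
    p = sigmoid(y)
    label = "positive" if p >= 0.5 else "negative"
    print(f"x={x}: y={y:+.2f}, P(success)={p:.3f} -> {label}")

# Taking the log-odds undoes the sigmoid, exactly as in the derivation.
assert abs(log_odds(sigmoid(1.7)) - 1.7) < 1e-9
```

Note how any y, even a large negative one, is squashed into a valid probability, and the 0.5 threshold decides the predicted class.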
However, it is okay to get some false alarms rather than allow a fraudulent transaction to pass through, where the cost would be much higher. I hope that clears up the concepts behind logistic regression. In the next lecture, we will build our first model using logistic regression in Azure ML. So log into the Azure ML Studio, as I want you to do this along with me. Thank you so much for joining me in this one, and I'll see you in the next lecture. Until then, enjoy your time.
Hello and welcome to the Azure ML course. I'm sure you're excited to build your first model in Azure ML. Before that, let's first try to understand the problem that we are trying to solve here. A finance company wants to automate their loan eligibility process in real time, based on the customer details provided while filling out the online application form. In order to automate this process, they have framed the problem as identifying the customer segments that are eligible for a loan, so that the company can specifically target these customers and the loan approval process can be sped up for them. The past data for the online applications and their outcomes has been provided, and using this historical information, we are going to build a model that will predict whether a loan application will get approved or not. This problem has been posted online for one of the analytics competitions. All right, let's go to the Azure ML Studio and have a look at our data set. Well, for building this model, download the loan prediction data set from the course resources and upload it to Azure ML Studio. You may want to pause the video while you do that. OK, so I have already uploaded it, so let's search for it. Let's drag and drop this dataset and first visualise it to take a look at the data that we are dealing with. Well, as discussed previously, this is a data set that contains various loan applications and their status of whether the loan was approved or not. The dataset has 13 columns of features and 614 rows. Let's look at them one by one. Loan ID is a unique number for every application. Then there is the gender, marital status, education qualification, and number of dependents of the applicant, followed by income, loan amount, credit history, and so on. Let's look at each of these one by one. Gender has two unique values, male or female. However, it has some missing values as well, and the type of the feature is a string.
The second column, married, has two unique values of yes or no and three missing values. You can go on visualising the rest of the columns in the same way. We have some missing values in the string features of gender, married, dependents, and self-employed. We also have some missing values in the numeric features of loan amount, loan amount term, and credit history. However, notice that credit history here is not a score but a binary value of either zero or one. This is because the company must have decided to treat applicants with a score higher than a particular value as one and those below it as zero. That actually makes it a categorical variable, and hence we will treat it as one in our experiment. We should clean up this missing data so that our model can work on neat and clean information. So, first and foremost, let us close this and go get our Clean Missing Data module. Let's search for it. And there is our beloved Clean Missing Data module. Let me drag and drop it here and make the right connections. As we saw in the previous sections on data transformation, we will use this module to replace the missing values. So let's first clean up the string features and launch the column selector. Okay, let me select gender, married, dependents, and self-employed. And I'm also going to select credit history; I will explain the reason in just a few seconds. Before that, let's click OK and come back to our experiment. For the parameters, let's keep the minimum missing value ratio at zero, which means that even if there is one missing value, it will be cleaned. We also want to clean up all the missing values, so we will keep the maximum ratio at one. If you are finding this difficult to understand, I suggest you go through the Clean Missing Data module covered during the lecture on data manipulation and data transformation.
All right, let's say we want to replace these values with the most frequent value in the respective column. Hence we select "Replace with mode". This was also the reason we selected credit history, so that we retain its two unique values and do not fall into the trap of replacing them with a mean. All right, keeping the other parameters as they are, let's now run this module. It has run successfully, and it's time to visualise the output. As you can see, the columns we selected do not have any missing values, but the numeric features still have missing values because we have not yet done any cleanup on them. One important point to note here is that Azure ML applies the same cleaning transformation to all the selected columns. Hence, if we want to perform different cleaning operations on different columns, we need to run the Clean Missing Data module separately for each of those columns. Okay? So let's see how we can do that for the numeric features. Let me close this first. We will now clean the numeric features using a similar Clean Missing Data module. I'm simply going to copy and paste this particular module and provide the output data set from the previous transformation as input to the numeric cleanup operation. Because, as previously stated, the original data set is not altered, and since we have cleaned up the string features, the output of that cleaning is nothing but a clean version of the original data set. Hence, we provide the output of the first Clean Missing Data module as the input of the second. All right, let's now launch the column selector and select the numeric features that have missing values. Those were the loan amount and loan amount term; click OK. Because these are numeric features, we will replace the missing values with the mean, or average, of all the values in that feature. For now, let's run this module, and hopefully our data will be cleaned.
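To make the two cleaning strategies concrete, here is a small Python sketch of the idea behind the Clean Missing Data module: mode for categorical columns, mean for numeric ones. The sample columns are hypothetical, and this is only an illustration of the concept, not Azure ML's actual implementation.

```python
from statistics import mean, mode

def clean_missing(values, strategy):
    """Replace None entries in a column, mimicking Clean Missing Data.

    strategy: "mode" for categorical (string) columns,
              "mean" for numeric columns.
    """
    present = [v for v in values if v is not None]
    fill = mode(present) if strategy == "mode" else mean(present)
    return [fill if v is None else v for v in values]

# Hypothetical sample columns; None marks a missing value.
gender = ["Male", "Female", None, "Male", "Male", None]
loan_amount = [128.0, 66.0, None, 120.0, 141.0]

print(clean_missing(gender, "mode"))        # missing -> most frequent value
print(clean_missing(loan_amount, "mean"))   # missing -> column average
```

Replacing a binary column like credit history with the mode keeps it at zero or one, whereas a mean would produce a meaningless fraction, which is exactly why credit history went into the mode group.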
It has run successfully, so let's quickly visualise it. As you can see, the selected columns now do not have any missing values. All right, let me close this. The next step for us will be to make this data available to the untrained model. However, we are not going to provide all the columns, because some of the features here do not have any impact on the outcome or the result. Okay, so let me introduce the Select Columns in Dataset module, which allows us to select only those features that are relevant for this problem. Let's search for it and drag and drop it here. Let's connect the cleaned data from the second Clean Missing Data module to the Select Columns module and launch the column selector. Now the big question is, which features should be selected? Well, the loan ID is just a random unique number and hence does not have any impact on whether the loan will be approved or not. Hence it is not part of the predictor or independent variables, as we saw earlier. In the case of gender, it's possible, but it won't be the sole criterion. Marriage may also have an impact, as some banks give higher points to people who are married and have dependents. The same goes for whether the person is self-employed or not, as businesses tend to have higher risks compared to a salaried employee. And in terms of income, the loan amount requested, credit history, and so on, their relevance is obvious. Remember, the credit history here is encoded as one and zero. So for this particular experiment, we select all columns except Loan ID and click OK; let's run it. It has run successfully. You can pause the video and visualise it to confirm that it has all the features except the loan ID. I'm going to proceed with the next step. This data set is all we have for now, and as you know, this is a supervised learning problem. So we need to split the data set into training and testing.
The training data set will help our model get the required training and come up with a transformation, or algorithm, with which we can predict new outcomes, whereas the test data set will help us validate our model. So let's search for the Split Data module and drag and drop it here. OK, so basically we will split the existing data and use part of it for training our model and the rest for testing and validating the results for accuracy. Let's make the correct connection from the output of the previous transformation to the Split Data module's input. Okay, it requires some parameters, and we'll split the data in a 70/30 ratio, so the fraction of rows in the first output will be 0.7. Let's set the random seed to 123, and let's also do a stratified split on the column loan status so that we have an even distribution of its values between the train and test data sets. So let me launch the column selector, select the loan status, and click OK. We are now ready to run the module, and for this exercise we are running it one module at a time. We are doing that for a better understanding of the flow, as well as to catch any errors that we may encounter. However, you can simply build all the steps and run them at once. All right, you can visualise the data set to confirm whether it has the required number of rows or not. So let's go to the first node and visualise it. As you can see, it has got 70% of the 614 rows, which is about 430. Let me close this one, and I'm sure the second node also has its 30%. If you want to visualise it, you can simply right-click and visualise it. All right, let's now add our machine learning algorithm, logistic regression, to our model. Let's search for logistic, and as you can see, there are two types of logistic regression modules. But as we are going to predict an outcome that's either yes or no, which means we are only predicting two classes, we will use Two-Class Logistic Regression. Let's drag and drop it here.
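The stratified split described above can be sketched in plain Python: each label group is split 70/30 separately, so the train and test sets keep the same proportion of Y and N. This is only an illustration of the idea with a hypothetical mini data set, not the Split Data module's actual implementation.

```python
import random
from collections import defaultdict

def stratified_split(rows, label_col, fraction=0.7, seed=123):
    """Split rows into train/test, preserving the label distribution.

    A sketch of a stratified split on the loan-status column;
    the fixed seed makes the split reproducible across reruns.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for row in rows:
        by_label[row[label_col]].append(row)
    train, test = [], []
    for group in by_label.values():    # split each label group separately
        rng.shuffle(group)
        cut = round(len(group) * fraction)
        train.extend(group[:cut])
        test.extend(group[cut:])
    return train, test

# Hypothetical mini data set: 10 applications, 7 approved (Y), 3 rejected (N).
rows = [{"loan_status": "Y"}] * 7 + [{"loan_status": "N"}] * 3
train, test = stratified_split(rows, "loan_status")
print(len(train), len(test))
```

Because the split is done per label, roughly 70% of the Y rows and 70% of the N rows land in the training set, rather than the test set ending up with, say, no rejected applications at all.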
It does not take an input; rather, it needs to be provided to a training module. It also requires some parameters. We will look into each of these parameters in greater detail in the next lecture; let's build with the default parameters for now. All right, the current model here is untrained, and we need to train it on our training data set. So let's search for the training module, which is called Train Model and is used for training a model on the training data set. Let's drag and drop it here, and as you can see, it takes two inputs and produces one output. So let's connect the untrained Two-Class Logistic Regression and the training data set to the Train Model module. As you know, the training data set is the first of the two output nodes of the Split Data module. However, it is giving us an error saying that a value is required. It is actually expecting the feature to be predicted, which in our case is loan status. So let's provide that to our Train Model module. And how do we do that? We launch the column selector, select the loan status, and click OK. Now that the model is ready to be trained, we want to score the algorithm on our test data set and see how it performs. The module required for that is called Score Model, which basically validates our trained algorithm. So let's search for it and drag and drop it here. As you can see, it requires two inputs. The first one is the trained model, which is nothing but the output from the Train Model module, and the second is the test data set, which comes from the second node of the Split Data module. So we provide the connections for our trained model as well as the test data set. Now let's right-click on the Score Model module and run the selected module. It will run all the previous steps that have not been executed so far. Wow, it has run successfully. Let's visualise the output and go all the way to the right. As you can see, there are two additional columns called Scored Labels and Scored Probabilities. And what do they mean?
Well, a scored label is the predicted value for that particular row of features, and a scored probability is the probability with which this particular value has been predicted. For example, the actual loan status for this row was Y, and our model has also predicted it as Y as part of the scored label, with more than 80% scored probability. Similarly, for this row, it has correctly predicted the loan status as N, with a very low scored probability. However, there are some records where the actual value was N but our model predicted it as Y. These are the errors our model has committed, and they surely bring down the accuracy of our model. Let's try to evaluate and visualise these results in a bit more structured manner. Let me close this and go back to our experiment. For evaluation, we will use a module called Evaluate Model. Okay, so let me search for it and drag and drop it onto the canvas. As you can see, Azure ML makes use of these little rectangular flowchart shapes, called modules, for performing every step, so that you don't have to write any code. In Python or R, you would probably have included a few libraries, written a few lines of code, and changed the code every time you wanted to visualise the output or perform the operations on different types of data sets. In our case, we simply drag and drop the modules and change the parameters, which again read more like plain English. The Evaluate Model module takes two input nodes, which are for comparing the evaluations of different models. The second one is optional, so let's connect our Score Model output to the first node and run it. All right, it has run successfully; let's visualise the output. Well, you will see some charts as well as a few values. One thing we can say is that this model has very high accuracy, and the AUC, which is the area under the curve, is also very high, which means the model is a very good one. I know you must be wondering what these values and labels mean.
What is the AUC-ROC curve, and how do I interpret this result? We will look into that in an upcoming lecture by literally drawing these charts on our own. Before we get to the evaluation results, let's first try to understand the various parameters that we need to provide to Two-Class Logistic Regression, which we'll do in the next lecture. So get yourself a cup of coffee, and I will see you in the next class. Thank you so much for joining me in this one.
In the last lecture, we built our model using two-class logistic regression and ran it successfully with very good accuracy. We saw some parameters there that an untrained model expects. In fact, every algorithm has a set of input parameters. They are also known as hyperparameters, and hyperparameters are like knobs that data scientists turn to get the optimum result. All right, let's go through those parameters for logistic regression one by one. The first one is the Create trainer mode option. By setting this option, you can specify how you want the model to be trained. Single parameter is for when you already know the best combination of the various parameters. However, that usually does not happen. You may want to figure out the best possible combination, and that is when the parameter range trainer mode helps us. As you can see, if you choose this option, you will be able to enter multiple values and even a range. You should use the Tune Model Hyperparameters module when you are using a parameter range. What it basically does is create various combinations of these values, run the model multiple times with each set, and provide us with the best results. We have a dedicated section on tuning model hyperparameters where we will cover this in greater detail. The next parameter is optimization tolerance. That's a big word. As per Microsoft, it's a threshold value used to stop the model iterations. So what does that mean? Well, as we have seen, this is the formula for logistic regression, and we say that all the values above 50% probability are positive and those below it are negative. But how do we arrive at the best possible values of b zero and b one so that we have the minimum number of errors? Because if we arbitrarily set some values of b zero and b one, we may get a plot like this. So we iterate and come up with multiple options such that the number of errors is minimised.
We may assign a cost function, which essentially computes some sort of cost, and our aim is to minimise it. All right, so the algorithm basically tries to minimise the error. In the second iteration, we may get a curve like this. We go on minimising the errors, and our curve will start converging to the best possible result. But that creates another problem: when do we stop iterating? Because it will never reach an absolute minimum value; there will always be some very small difference from the previous iteration. So we provide a threshold, called the optimization tolerance, that tells the model to stop when the iterations do not improve beyond that value. I hope that explains what optimization tolerance is. The next one is the memory size for L-BFGS. Let's see what it is in the context of an optimization algorithm. The cost function of an optimization algorithm may look something like this, and this point is the one with the minimum cost, or errors. Our model may be starting over here and trying to reach this point. It does so with the help of gradients and vectors. This has been explained in detail in a dedicated section on gradient descent. For now, let's simply say that the algorithm needs to remember its previous positions and related data in order to reach this point quickly and successfully. The memory size parameter of L-BFGS, where the L stands for limited memory, simply specifies the amount of memory used to store that past information. The smaller the memory, the less history will be stored, so it might finish faster but be less accurate. All right, let me explain the random number seed again. Let's say we have these ten numbers and a random function to randomly select three numbers from this set. Every time we run this random function, it will produce a different set. When we train the model, we do not want different selections of data each time, as our model would then produce different results every time we run it.
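The stopping rule behind the optimization tolerance can be illustrated with a toy gradient descent: iterate until the cost improves by less than the tolerance, then stop. This is a deliberately simple sketch on a one-dimensional cost function of my own choosing, not the actual L-BFGS solver that the module uses.

```python
def minimise(cost, grad, b=0.0, lr=0.1, tolerance=1e-6, max_iter=10_000):
    """Gradient descent that stops when the cost improves by less than
    `tolerance` between iterations -- the idea behind the optimization
    tolerance parameter. (Toy illustration only.)
    """
    prev = cost(b)
    for i in range(max_iter):
        b -= lr * grad(b)              # step downhill along the gradient
        curr = cost(b)
        if abs(prev - curr) < tolerance:   # converged "enough": stop iterating
            return b, i + 1
        prev = curr
    return b, max_iter

# Toy cost function with its minimum at b = 3.
cost = lambda b: (b - 3.0) ** 2
grad = lambda b: 2.0 * (b - 3.0)

b_best, iters = minimise(cost, grad)
print(f"converged to b={b_best:.4f} after {iters} iterations")
```

A smaller tolerance means more iterations and a slightly better fit; a larger tolerance stops earlier, trading accuracy for speed, which is exactly the trade-off the lecture describes.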
So we simply provide this seed, which basically makes the random algorithm repeat the logic it applied, so that we get the same result if we rerun the experiment multiple times without changing other parameters. Next, the allow unknown categorical levels option creates an additional "unknown" level. Let's understand that with an example. Let's say we are trying to predict the income class from a data set and we have these values in our training set. But when we test the model, we may find some additional classes that our model has not been trained on. So what do we do? Well, this parameter creates an unknown categorical level, and any new classes or categories that the model finds are mapped to it. I hope that explains the allow unknown categorical levels parameter. Great. In case you're wondering why I have not talked about L1 and L2 regularization, well, it needs more time, and hence I saved it for last. Now, before we understand L1 and L2, it is important to know what is meant by regularization in the context of overfitting. Let's try to understand that with an illustration. Let's say we have a data set with points plotted as shown here. A straight-line equation here clearly demonstrates underfitting: because most of the points are away from the line, it won't do any good for predicting a new data point. Similarly, if we try to create an equation that makes the line pass through almost every data point, we will also be in trouble, as it won't be good at predicting a new data point. This is called the problem of overfitting. What we need is a trend line that predicts with optimum accuracy, such as this one. So what is causing this overfitting? Well, it is these coefficients, whose larger values can have a significant impact on the outcome. So the b coefficients, or weights, are the reason for the model to overfit.
What if we could drastically reduce them, or even eliminate them by making them zero? That's what the L1 and L2 regularisation weights aim to achieve. They penalise the magnitude of the coefficients of the features, or weights, along with minimising the error between predicted and actual observations. All right, the L2 regularisation performed by ridge regression shrinks all the coefficients by the same proportion but does not eliminate any of them, whereas L1 regularisation, performed by lasso regression, will shrink some of the coefficients to zero. So what I mean is that in this example, if we use higher values of L1, it may even make b two zero, preventing overfitting. In that case, the x, or the feature associated with this coefficient, might not even be selected. So what we are saying is that L1 also performs some sort of variable selection, and we will have more relevant features or variables in our equation. Great. As demonstrated, both L1 and L2 regularisation prevent overfitting by shrinking, or imposing a penalty on, the coefficients. With L2, you tend to end up with many small weights or coefficients, while with L1, you tend to end up with some larger weights but more zeros. So if you believe the model has too many features and some of them are actually creating the problem of overfitting, try increasing the value of the L1 weight. For further details, and if you are interested in knowing the theory behind it, I recommend that you read a very good article that explains what L1 and L2 regularisation are; I have shared the link here. Or else you can simply search for ridge and lasso regression in Google, and the first link should be of interest. That concludes the lecture on two-class logistic regression parameters. I suggest you run the same model using various parameters and assess their impact.
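The difference between the two penalties can be seen in a few lines of Python. The penalty added to the model's error is the sum of absolute weights for L1 and the sum of squared weights for L2; the two weight vectors below are hypothetical, chosen only to contrast a sparse model against one with many small coefficients.

```python
def penalty(weights, l1=0.0, l2=0.0):
    """Regularisation term added to the training error:
    L1 penalises sum(|w|), L2 penalises sum(w^2).
    (A sketch of the idea behind the L1/L2 weight parameters.)
    """
    return l1 * sum(abs(w) for w in weights) + l2 * sum(w * w for w in weights)

# Two hypothetical models with the same total weight "mass":
# a sparse one (one large coefficient) and a spread-out one.
sparse = [3.0, 0.0, 0.0, 0.0]
spread = [0.8, 0.7, 0.8, 0.7]

for name, w in [("sparse", sparse), ("spread", spread)]:
    print(name,
          "L1 penalty:", round(penalty(w, l1=1.0), 3),
          "L2 penalty:", round(penalty(w, l2=1.0), 3))
```

Notice that the L1 penalty is the same for both vectors, so L1 has no objection to driving small coefficients all the way to zero, while the L2 penalty strongly punishes the single large coefficient and therefore prefers many small weights: exactly the behaviour described above.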
We have a lecture on that after we cover how to interpret the evaluation results of the logistic regression model. Evaluating the results is a very interesting topic, and we will use plenty of examples for ease of understanding the metrics and the AUC, along with various other indicators. I'm eager to see you in the next class. Until then, have a great time.
Hello and welcome. So far we have covered a great deal, of course: we have covered data transformation as well as the theory behind logistic regression and the parameters associated with it. We even created a model for loan application outcome prediction. In this lecture, we are going to cover how to interpret the results of the Evaluate Model module. So let's go to our model in Azure ML Studio. Here is the model that we created. We first imported the loan prediction data set, cleaned some missing values, selected the columns that we wanted in our model, split the data for training and testing, applied two-class logistic regression and trained it, scored it, and finally ran an Evaluate Model. Let's now visualise the evaluated model. Our output appears something like this. So what does that mean, and how do we interpret these results? Let's go back and look at the output of the Score Model module and visualise it. Let's go all the way to the right, and as you can see, in our test data set there are actual positive and negative values. We can take Y as the positive value and N as the negative value. Then, after we ran the Score Model, we have predicted results, which are also positive and negative, and in some cases the actual and predicted results are different. All right, let's go and see what all these things mean in the context of evaluating the result. Let's try to plot the positives and negatives in an actual-versus-predicted matrix. So we have actual positives and negatives as well as predicted positives and negatives. All those positive cases where the actual and predicted results match, we call true positives: they are true, and they are positive. Where both the predicted and the actual results were negative, we call them true negatives: they were truly negative. Then there are results where we predicted a positive outcome but in reality it was a negative outcome; that is, we predicted a Y but received an N.
These are the false positives. Similarly, we may have some results that were actually positive but that we predicted as negative, and these are false negatives. So let's try to understand this confusion matrix and the related metrics using the example in this graph. Let's say all the circles are positive results and all the triangles are negative results, and our model predicted these results as positive and these ones as negative. In that case, our true positives are all eight circles within the blue line, and our false positive is this one triangle that was actually negative but that we predicted as positive. Similarly, we can also count the false negatives and true negatives, and we sum everything up across predicted and actual results. All right, I'm sure it is now clear how the confusion matrix is constructed. Let's now learn how the metrics of accuracy, precision, recall, and F1 score are computed from the confusion matrix. Well, accuracy is nothing but the proportion of correct results out of the total. The correct results are the true ones, so in simple terms it is true positives plus true negatives divided by the total number of results, or observations. In this case, we have eight true positives and four true negatives, and the total number of observations is 15. So the accuracy here is eight plus four divided by 15, that is, 0.8, or 80%. Similarly, precision is the proportion of correct positive results out of all the predicted positive results. That is nothing but true positives divided by all the predicted positives. How many were correctly identified as positive? That is the true positives; we divide that by the total number of predicted positives. Here we predicted eight true positives, and our total number of predicted positives is nine. So it is eight divided by nine, and hence the precision is 0.889, or 88.9%. The next metric is recall.
Recall is the proportion of actual positive cases that the model has predicted correctly. Okay? So it is nothing but the true positives divided by the total actual positives. In this case, we have a total of ten results that are actually positive, but we could only predict eight of them correctly, with two false negatives. Hence the recall is eight divided by ten, that is, 0.8, or 80%. Okay, so we've looked at some math and a few ratios. The last one on this list is the F1 score, which may require some explanation. Well, the F1 score is the weighted average, or harmonic mean, of precision and recall. When measuring how well you are doing, it's often useful to have a single number to describe your performance, and we could define that number to be a mean of precision and recall. That is exactly what the F1 score is. The formula is as given here, and its value in this case is 0.84. The reason we use the harmonic mean is that we are taking the average of ratios, or percentages, and in that case the harmonic mean is more appropriate than the arithmetic mean. Let me explain that with an example; in case you already know it, you can skip ahead or stay with me for a revision of the concept. Let's say our model has identified only these six positive results, all of them correctly. Then our confusion matrix would look like this. Applying the formulas, precision is 100%, because we identified only true positives and not a single false positive, while the recall is only 60%. Now, if we compare this with the previous example based on a simple average of precision and recall, it may lead to a false interpretation, whereas the F1 score, based on the formula we saw, gives us a better value for judging which model is performing better on the combination of precision and recall. All right, that brings us to the last topic in this lecture: AUC-ROC, or the area under the ROC curve.
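The calculations above fit in a few lines of Python; the numbers are the ones from the circles-and-triangles example, so you can verify the 80%, 88.9%, and 0.84 figures yourself.

```python
def metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from a confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)            # correct positives / predicted positives
    recall = tp / (tp + fn)               # correct positives / actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Circles-and-triangles example from the lecture:
# 8 true positives, 1 false positive, 2 false negatives, 4 true negatives.
acc, prec, rec, f1 = metrics(tp=8, fp=1, fn=2, tn=4)
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
# accuracy=0.800 precision=0.889 recall=0.800 f1=0.842
```

Running the six-positives example (tp=6, fp=0, fn=4, tn unchanged from whatever the negatives were) the same way shows precision at 1.0 and recall at 0.6, and the harmonic mean pulls the F1 down toward the weaker of the two, which is the point of using it.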
Well, it's nothing but a plot of the true positive rate on the y axis against the false positive rate on the x axis. The area under this curve provides a single number that lets you compare models of different types. I know it's not very self-explanatory, so let's understand it with an example. But before we do that, let me take you back into history and explain why it is called ROC. You can pause the video if you want and read why it is called ROC. Okay, let's try to understand how the AUC has been plotted so that you can appreciate its value in comparing two different models. For this purpose, I have another model created using bank data; the bank wanted to predict whether a particular prospect would buy the product or not. When we look at the result, this is what we see. I have this sheet that you can download and try with your own model to create your own AUC-ROC curve. There we are. What I have done is basically vary the threshold value from 0.5 up to 0.9, and capture the resulting true positives, false positives, false negatives, and true negatives. The true positive rate is true positives divided by actual positives, and the false positive rate is false positives divided by actual negatives. That gives me some twelve data points of false positive rate and true positive rate for the given threshold values, and when I plot them, this is what I get. You can build your own models and try it for yourself, using various values of the threshold, to create your own AUC-ROC curve. You can think of the area under this curve as the probability that the model will rank a randomly chosen positive observation higher than a randomly chosen negative one. This is a huge benefit of using an ROC curve to evaluate a model, because the ROC curve visualises all possible thresholds. And in case you're wondering how to set the threshold, well, that's more of a business decision.
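The threshold-sweeping procedure described above can be sketched in plain Python. The scores and labels here are hypothetical, not the lecture's bank data; the point is only to show how each threshold yields one (FPR, TPR) point and how the area under the resulting curve is accumulated:

```python
# Hypothetical predicted probabilities and true classes (1 = positive)
scores = [0.95, 0.90, 0.85, 0.80, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    1,    0,    1,    0,    0,    0]

pos = sum(labels)           # actual positives
neg = len(labels) - pos     # actual negatives

roc = []
for threshold in [i / 10 for i in range(10, -1, -1)]:   # 1.0 down to 0.0
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    roc.append((fp / neg, tp / pos))    # (false positive rate, true positive rate)

# Area under the curve via the trapezoidal rule over consecutive points
auc = sum((x2 - x1) * (y1 + y2) / 2
          for (x1, y1), (x2, y2) in zip(roc, roc[1:]))
print(auc)
```

Sweeping the threshold from 1.0 down to 0.0 moves the curve from (0, 0) to (1, 1); a model that ranks positives above negatives bulges toward the top-left corner and earns an AUC close to 1.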
Let's say your model is used to predict whether a given credit card transaction is fraudulent or not. The business decision might be to set the threshold very low. That may result in a lot of false positives, but it might be considered acceptable because it would maximise the true positive rate and thus minimise the number of actual frauds that go unflagged. Alright, we covered a lot of theory here, and I have tried to make it as easy as possible for you to understand. That concludes our lecture on understanding the results of a model, particularly classification models. In the next lecture, we will analyse the impact of various parameters on the outcome and the variation that is possible when we use them correctly. So I'll see you in the next class. And until then, enjoy your time.
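The fraud-detection trade-off can be illustrated with a tiny sketch on made-up scores (the numbers below are hypothetical, purely to show the effect of lowering the threshold):

```python
# Hypothetical fraud scores and true labels (1 = actual fraud)
scores = [0.9, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   1,   0]

def recall_at(threshold):
    """Fraction of actual frauds flagged at a given decision threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    return tp / sum(labels)

print(recall_at(0.5))   # 0.5 -- half the frauds slip through
print(recall_at(0.2))   # 1.0 -- every fraud is flagged, at the cost of more false positives
```

Dropping the threshold from 0.5 to 0.2 catches every fraud in this toy data, which is exactly the low-threshold business decision described above.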
Hello and welcome. In today's lecture related to classification, we are going to analyse the impact of various parameters on the experiment outcome. I have listed some six scenarios here, but out of these six we are going to cover only the second one, which is analysing the impact of stratification. What I suggest is that you actually run your modules with different parameters and also try a different split percentage. And now that you know what the impact of L1 and L2 regularisation is, I suggest you run your experiments with different values of L1 and L2 and see for yourself their impact on metrics such as accuracy, precision, recall, and AUC. So let's go through the impact of stratification on our model. Let's go to Azure ML Studio and open our loan prediction model. Here we are, and this is our loan prediction model. I'm simply going to rearrange a few things, but before that, we need to look at the results of our model. So this was the result, which basically means 83% accuracy and an AUC of 0.82, with the precision as shown. Let me close this. All right. Because we want to analyse the impact of stratification, which we applied at the Split Data module, what we are going to do is copy all the steps from Split Data all the way up to Evaluate Model. So let me copy the Split Data module, paste it, and provide the right connection from here to here; then copy the Two-Class Logistic Regression and paste it here. We are also going to copy the Train Model and Score Model modules; I'm going to copy them at once, and you can do that for other modules as well. So let this one go here, and let this one go here. All our connections are more or less ready now, but because we want to analyse the impact of stratification, we'll mark the stratified split as false here. All right, and now we are almost ready to run. I'm going to keep the random seed of 1232 for the Split Data module, and we have marked the stratified split as false.
Now, because we want to evaluate these two models separately, I'm going to connect the second Score Model to the same Evaluate Model module. We then feed the test data to the Score Model, and we are ready to run. So let's run the Evaluate Model so that it runs all the previous steps, and then evaluate the results. All right, our new experiment has run successfully, so let's visualise the output. The blue curve is the scored data set, and the red one is the data set we have given it to compare against. Let's go down and check the results. We already know the outcomes of the first model, so let's go to the scored data set to compare. As you can see, the accuracy has gone down without stratification, just as the AUC has gone down without stratification. I hope the impact of stratification, that is, of creating a balanced data set when we split the original data set into train and test, is now clear. As you can see over here, both the number of true positives and the number of true negatives have gone down significantly, and the number of false positives has increased, so that has had an impact on our accuracy, on our precision and recall, as well as on the F1 score. For the same threshold value, the AUC, or the area under the curve, for the stratified model is 0.82, whereas the AUC for the non-stratified one is 0.791. That's actually a little over a 3% gain, but that could mean a lot in a real-life scenario. I hope I have been able to explain to you the impact of stratification on the outcome of your experiment. Thank you so much for joining me in this class, and I will see you in the next one. Until then, enjoy your time.
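The lecture performs the stratified split with Azure ML Studio's Split Data module; as an illustration of what that module does under the hood, here is a minimal pure-Python sketch that splits each class separately so the train and test halves keep the same class proportions. The data, the 30% test fraction, and the use of 1232 as the seed are illustrative assumptions:

```python
import random

def stratified_split(rows, labels, test_fraction=0.3, seed=1232):
    """Split rows into train/test while preserving class proportions."""
    rng = random.Random(seed)
    by_class = {}
    for row, y in zip(rows, labels):
        by_class.setdefault(y, []).append(row)
    train, test = [], []
    for members in by_class.values():
        rng.shuffle(members)
        cut = round(len(members) * test_fraction)  # per-class test count
        test.extend(members[:cut])
        train.extend(members[cut:])
    return train, test

# 80 negatives (rows 0-79) and 20 positives (rows 80-99): a stratified
# 30% test split keeps the 4:1 ratio in both halves (24 vs 6 in test).
rows = list(range(100))
labels = [0] * 80 + [1] * 20
train, test = stratified_split(rows, labels)
```

Without the per-class grouping, an unlucky random split of imbalanced data can leave the test set with too few positives, which is exactly the degradation the non-stratified run shows above.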