XLSTAT is the leading data analysis and statistical solution for Microsoft Excel. The XLSTAT add-in offers a wide variety of functions to enhance the analytical capabilities of Excel, making it the ideal tool for your everyday data analysis and statistics requirements.
XLSTAT runs on all Excel versions from version 5.0 to version 2003.
Why use XLSTAT?
- If you want to analyze your data without having to shift it from one application to another
- If you want to apply your skills in statistics to data in Excel format
- If you want to boost Excel's functionality while keeping the program easy to use
- If you want simple, functional software to help train students in statistics and data analysis
- If you want to save time and money
How can I use XLSTAT to run an ANCOVA? An Excel sheet with both the data and results can be downloaded by clicking here. The data come from Lewis T. and Taylor L.R. (1967), Introduction to Experimental Ecology, New York: Academic Press, Inc. They describe 237 children by their Gender, Age in months, Height in inches (1 inch = 2.54 cm), and Weight in pounds (1 pound = 0.45 kg).

Using the Analysis of Covariance (ANCOVA), we want to find out how the weight of the children varies with their gender (a qualitative variable that takes the value f or m), their height and their age, and to verify whether a linear model makes sense. ANCOVA belongs to a larger family of models called GLM (General Linear Models), as do linear regression and ANOVA. The specificity of ANCOVA is that it mixes qualitative and quantitative explanatory variables. Two other tutorials on linear regression also use this dataset, with the Height, and then the Height and the Age, as explanatory variables.

After opening XLSTAT, select the XLSTAT/Modeling data/ANCOVA command, or click on the corresponding button of the "Modeling Data" toolbar (see below). Once you have clicked on the button, the ANCOVA dialog box appears. Select the data on the Excel sheet. The "Dependent variable" (the variable to model) is here the Weight. The quantitative explanatory variables are the Height and the Age; the qualitative variable is the Gender. As we selected the column titles for the variables, we leave the "Column labels" option activated. The "Type I, III SS" option is activated so that we can analyze the relative weights of the variables in the model (SS stands for sum of squares). We also leave the "Residuals" option activated, to find out whether the data comply with the normality assumptions and to identify potential outliers. The computations begin once you have clicked on "OK", and the results are then displayed.
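For readers who want to reproduce this kind of model outside Excel, the same ANCOVA (one qualitative and two quantitative explanatory variables) can be sketched with Python's statsmodels. The Lewis and Taylor data file is not reproduced here, so the dataset below is simulated with the same structure and is purely illustrative.

```python
# Illustrative ANCOVA sketch: Weight explained by Gender (qualitative),
# Height and Age (quantitative). Data are simulated, not the tutorial's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 237
gender = rng.choice(["f", "m"], size=n)
age = rng.uniform(140, 180, size=n)              # months
height = 0.3 * age + rng.normal(0, 3, size=n)    # inches
weight = -140 + 1.1 * height + 0.3 * age + rng.normal(0, 10, size=n)
data = pd.DataFrame({"Gender": gender, "Age": age,
                     "Height": height, "Weight": weight})

# C(Gender) declares Gender as a categorical factor: mixing qualitative
# and quantitative explanatory variables is exactly what ANCOVA does.
model = smf.ols("Weight ~ C(Gender) + Height + Age", data=data).fit()
print(model.rsquared)   # goodness of fit (R²)
print(model.params)     # intercept, Gender effect, Height and Age slopes
```

The fitted object then exposes the same quantities the XLSTAT report tables contain: R², coefficients, and residuals.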
The first table displays the goodness of fit coefficients of the model. The R² (coefficient of determination) indicates the percentage of the variability of the dependent variable that is explained by the explanatory variables. The closer to 1 the R², the better the fit. In this particular case, 63% of the variability of the Weight is explained by the Height, the Age and the Gender. The remainder of the variability is due to effects (other explanatory variables) that were not, or could not be, measured during this experiment. We can guess that some genetic and nutritional effects are involved, but it may also be that simply transforming the available variables would give better results.

It is important to examine the results of the analysis of variance table (see below). They enable us to determine whether the explanatory variables bring significant information to the model, the null hypothesis H0 being that they do not. In other words, it is a way of asking whether it is valid to describe the whole population by its mean alone, or whether the information brought by the explanatory variables is of value. Fisher's F test is used. Given that the probability corresponding to the F value is lower than 0.0001, we would be taking a lower than 0.01% risk in rejecting the null hypothesis (no effect of the explanatory variables) when it is in fact true. We can therefore conclude with confidence that the three variables bring a significant amount of information.

We also want to find out whether the three variables provide the same amount of information. To do this, we examine the Type I SS and Type III SS tables (see below). The Type I SS table is constructed by adding the variables to the model one by one and evaluating the impact of each on the model sum of squares (Model SS). Consequently, with Type I SS, the order in which the variables are selected influences the results.
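The contrast between order-dependent Type I SS and order-independent Type III SS can be reproduced with statsmodels' `anova_lm`, again on simulated data shaped like the tutorial's (this is an illustrative sketch, not the XLSTAT output):

```python
# Type I vs Type III sums of squares for the same ANCOVA model.
# Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 237
age = rng.uniform(140, 180, size=n)
height = 0.3 * age + rng.normal(0, 3, size=n)
gender = rng.choice(["f", "m"], size=n)
weight = -140 + 1.1 * height + 0.3 * age + rng.normal(0, 10, size=n)
data = pd.DataFrame({"Gender": gender, "Age": age,
                     "Height": height, "Weight": weight})

model = smf.ols("Weight ~ C(Gender) + Height + Age", data=data).fit()

# Type I (sequential): each variable is evaluated given those already added,
# so reordering the formula changes the table. Type III removes one variable
# at a time from the full model, so the order does not matter.
type1 = anova_lm(model, typ=1)
type3 = anova_lm(model, typ=3)
print(type1)
print(type3)
```

Comparing the `sum_sq` column of the two tables for each variable mirrors the comparison the tutorial makes between the two XLSTAT tables.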
The lower the F probability corresponding to a given variable, the stronger the impact of the variable on the model as it stands before that variable is added. We can see here that the Gender brings only a little information to the model once the Height and the Age have been added.

The Type III SS table is computed by removing one variable from the model at a time, to evaluate its impact on the quality of the model. This means that the order in which the variables are selected has no effect on the Type III SS values. Type III SS is generally the best method to use for interpreting results when an interaction is part of the model. The lower the F probability corresponding to a given variable, the stronger the impact of the variable on the model. We can see that the Gender brings the least information to the model.

The following table gives details on the model. It is helpful when predictions are needed, or when you need to compare the coefficients of the model for a given population with those obtained for another population. We can see that the p-value for the Gender parameter is 0.83, and that the corresponding confidence interval includes 0. This confirms the weak impact of the Gender on the model. If we look at the parameter corresponding to Gender-m, it seems that, for a given age and height, being a boy means a small decrease in weight.

The next table shows the residuals. It enables us to take a closer look at each of the standardized residuals. Under the assumptions of the linear regression model, these residuals should be normally distributed, meaning that 95% of them should lie in the interval [-1.96, 1.96]. All values outside this interval are potential outliers, or might suggest that the normality assumption is wrong. We used XLSTAT's DataFlagger to highlight the residuals that fall outside the [-1.96, 1.96] interval.
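The standardized-residual check described above can be sketched in a few lines: under the normality assumption, about 95% of standardized residuals should fall in [-1.96, 1.96], so we count how many fall outside. Data are again simulated for illustration.

```python
# Flag standardized residuals outside [-1.96, 1.96], as the tutorial
# does with DataFlagger. Data are simulated, not the tutorial's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 237
age = rng.uniform(140, 180, size=n)
height = 0.3 * age + rng.normal(0, 3, size=n)
gender = rng.choice(["f", "m"], size=n)
weight = -140 + 1.1 * height + 0.3 * age + rng.normal(0, 10, size=n)
data = pd.DataFrame({"Gender": gender, "Age": age,
                     "Height": height, "Weight": weight})

model = smf.ols("Weight ~ C(Gender) + Height + Age", data=data).fit()

# Internally studentized residuals, comparable to standardized residuals.
std_resid = model.get_influence().resid_studentized_internal
outliers = np.abs(std_resid) > 1.96
print(f"{outliers.sum()} suspicious residuals out of {n} "
      f"({100 * outliers.mean():.1f}%)")
```

A proportion much above 5% would, as in the tutorial, cast doubt on the normality assumption.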
We can identify 16 suspicious residuals out of 237, that is to say about 7% instead of the expected 5%, an analysis that could lead us to reject the normality hypothesis. A more in-depth analysis of the residuals has been performed in a tutorial on distribution fitting. The first chart (see below) lets us visualize the standardized residuals versus the Weight. It indicates that the residuals grow with the Weight. The histogram of the residuals enables us to quickly spot the residuals outside the range [-2, 2]. In conclusion, the Height, the Age and the Gender allow us to explain 63% of the variability of the Weight. A significant amount of information is not explained by the ANCOVA model we have used; further analyses would be necessary.
How do I run Preference Mapping with XLSTAT-MX? The following example shows how to create a preference map using the PREFMAP method. An Excel sheet with both the data and the results can be downloaded by clicking here. The data used here consist of:
The consumer acceptability data: 99 consumers rated 10 different commercial samples of potato crisps. These data have been obtained from the article by Schlich and McEwan (1992). The ratings are given on a discrete scale from 1 to 30 (30 corresponds to the highest acceptability). These data are stored in a 99 x 10 table.
The average ratings computed from the ratings given by 8 experts to the 10 crisps samples for 4 texture attributes and 7 flavor attributes. These data, simulated for teaching purposes by the author of this tutorial on the basis of the article by Schlich and McEwan (1992), make up a 10 x 11 table.

Step 1: Creating the sensory map. We first create the sensory map by applying a PCA to the 10 x 11 table. This gives us a two-dimensional visualization of the crisps. As a tutorial is dedicated to PCA, we do not elaborate on that subject here. The PCA dialog box has been filled in as shown below. The display options have been set as follows: The map obtained, whose quality is good (69.3% of the variability is displayed), shows that the products are well differentiated by the experts. The criteria appear to carry little redundancy, given their dispersion on the correlations circle.

Step 2: Grouping the consumers. We now focus on the ratings given by the 99 consumers. As the number of consumers is large, we decided to gather them into homogeneous groups in order to make the PREFMAP results easier to interpret. We chose Agglomerative Hierarchical Clustering (AHC). As a tutorial is dedicated to AHC, we do not elaborate on that subject here. The AHC dialog box has been filled in as shown below. The "Center / Reduce" option has been left activated in order to diminish the differences between the consumers' rating scales. Looking at the dendrogram, it makes sense to work with 8 groups (we truncate the dendrogram at level 32). We then re-run the AHC, specifying that we want 8 clusters. The dialog box has been filled in as displayed below. We then store the centroids of the clusters for the last step of the analysis. The table is copied and pasted (Edit / Paste special with the Transposed option) into a new sheet named "Clusters' pref.".
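Steps 1 and 2 can be sketched outside XLSTAT with scikit-learn and SciPy: a PCA on the sensory table, then a hierarchical clustering (here Ward linkage, as an assumed stand-in for XLSTAT's AHC settings) of the centered/reduced consumer ratings, cut into 8 groups. Both tables are simulated, since the tutorial's data file is not reproduced here.

```python
# Sketch of the sensory map (PCA) and consumer grouping (AHC) steps.
# Both data tables are simulated for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
sensory = rng.normal(size=(10, 11))                  # 10 crisps x 11 attributes
ratings = rng.integers(1, 31, size=(99, 10)).astype(float)  # 99 consumers x 10 crisps

# Step 1: two-dimensional sensory map of the 10 products.
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(sensory))

# Step 2: center/reduce each consumer's ratings (damping scale differences
# between judges), then cluster and cut the tree into 8 groups.
z = StandardScaler().fit_transform(ratings.T).T      # standardize each row
clusters = fcluster(linkage(z, method="ward"), t=8, criterion="maxclust")

# Cluster centroids: one average rating profile per group (8 x 10 table),
# playing the role of the "Clusters' pref." sheet.
centroids = np.array([z[clusters == k].mean(axis=0) for k in range(1, 9)])
print(scores.shape, centroids.shape)
```

The `scores` array plays the role of the product coordinates on the sensory map, and `centroids` the role of the stored cluster preference profiles used in step 3.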
Step 3: Creating the preference map using the PREFMAP method. In this section we apply the PREFMAP method, using the coordinates of the crisps in the two-dimensional factor space and the ratings given by the consumers, summarized by the standardized ratings of the 8 clusters.

To open the Preference Mapping dialog box, start XLSTAT, then select the XLSTAT/XLSTAT-MX/Preference Mapping command, or click on the corresponding button of the XLSTAT-MX toolbar (see below). When you click on the button, the Preference Mapping dialog box appears. Select the data on the Excel sheet. The "Attributes (Y)" correspond to the ratings of the 8 clusters. The "Configuration (X)" corresponds to the factor scores of the crisps obtained from the PCA. We decided to use the vector model.

The computations begin once you have clicked on the "OK" button. They pause to let you choose the options for the preference map. We selected the R² (coefficient of determination) to determine the length of the vectors on the preference map.

The results we obtain (see below) show that the vector model fits well for clusters 1, 3 and 6. For the other clusters, interpreting the results would be riskier. The preference map allows us to interpret the results quickly. Looking at both the map and the correlations circle, we see that the consumers of cluster 1 prefer crisps that are greasy and melting, and not crispy, contrary to cluster 6. Consumers of cluster 3 prefer crisps that are salty and sweet, and neither sticky nor melting. The preference orders for the various groups of consumers are displayed. We notice that crisp 4, characterized by an earthy, only slightly sweet and salty taste, is preferred by clusters 7 and 8, while it is the least accepted by clusters 3, 4 and 5. The marketing and R&D teams will be able to take this information into account to steer the creation of new crisps in the right directions.
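The vector model underlying this step can be sketched directly: each cluster's preference scores for the 10 products are regressed on the products' two map coordinates; the regression coefficients give the direction of the cluster's vector, and the R² of each fit (the quantity used above to scale vector length) indicates how trustworthy that vector is. Coordinates and preferences are simulated for illustration.

```python
# Sketch of the PREFMAP vector model: one linear regression per cluster,
# preferences explained by the 2-D map coordinates. Data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
coords = rng.normal(size=(10, 2))    # crisps on the 2-D sensory map
prefs = rng.normal(size=(10, 8))     # standardized ratings of the 8 clusters

r2 = []
for k in range(prefs.shape[1]):
    fit = LinearRegression().fit(coords, prefs[:, k])
    r2.append(fit.score(coords, prefs[:, k]))
    # fit.coef_ gives the direction of cluster k's vector on the map;
    # the R² stored above can be used to scale the vector's length.
    print(f"cluster {k + 1}: R² = {r2[-1]:.2f}, direction = {fit.coef_}")
```

Clusters with high R² (like 1, 3 and 6 in the tutorial) are the ones whose preferences the vector model represents reliably.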