Introduction to lessR

library(lessR)
#> 
#> lessR 4.4.0                         feedback: gerbing@pdx.edu 
#> --------------------------------------------------------------
#> > d <- Read("")   Read text, Excel, SPSS, SAS, or R data file
#>   d is default data frame, data= in analysis routines optional
#> 
#> Many examples of reading, writing, and manipulating data, 
#> graphics, testing means and proportions, regression, factor analysis,
#> customization, forecasting, and aggregation from pivot tables
#>   Enter: browseVignettes("lessR")
#> 
#> View lessR updates, now including time series forecasting
#>   Enter: news(package="lessR")
#> 
#> Interactive data analysis
#>   Enter: interact()
#> 
#> Attaching package: 'lessR'
#> The following object is masked from 'package:base':
#> 
#>     sort_by

The vignette examples of using lessR became so extensive that they exceeded the maximum R package installation size. A limited number of examples appears below. Find many more vignette examples at:

lessR examples

Read Data

more examples of reading and writing data files

Many of the following examples analyze data in the Employee data set, included with lessR. To read an internal lessR data set, just pass the name of the data set to the lessR function Read(). Read the Employee data into the data frame d. For data sets other than those provided by lessR, enter the path name or URL between the quotes, or leave the quotes empty to browse for the data file on your computer system. See the Read and Write vignette for more details.
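
For example, to read a data file from your own computer system rather than an internal lessR data set, pass a path (the path below is hypothetical) or leave the quotes empty to browse:

d <- Read("C:/myFolder/myData.csv")   # hypothetical path to a data file
d <- Read("")                         # browse for the data file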

d <- Read("Employee")
#> 
#> >>> Suggestions
#> Recommended binary format for data files: feather
#>   Create with Write(d, "your_file", format="feather")
#> More details about your data, Enter:  details()  for d, or  details(name)
#> 
#> Data Types
#> ------------------------------------------------------------
#> character: Non-numeric data values
#> integer: Numeric data values, integers only
#> double: Numeric data values with decimal digits
#> ------------------------------------------------------------
#> 
#>     Variable                  Missing  Unique 
#>         Name     Type  Values  Values  Values   First and last values
#> ------------------------------------------------------------------------------------------
#>  1     Years   integer     36       1      16   7  NA  7 ... 1  2  10
#>  2    Gender character     37       0       2   M  M  W ... W  W  M
#>  3      Dept character     36       1       5   ADMN  SALE  FINC ... MKTG  SALE  FINC
#>  4    Salary    double     37       0      37   53788.26  94494.58 ... 56508.32  57562.36
#>  5    JobSat character     35       2       3   med  low  high ... high  low  high
#>  6      Plan   integer     37       0       3   1  1  2 ... 2  2  1
#>  7       Pre   integer     37       0      27   82  62  90 ... 83  59  80
#>  8      Post   integer     37       0      22   92  74  86 ... 90  71  87
#> ------------------------------------------------------------------------------------------

d is the default name of the data frame for the lessR data analysis functions. Explicitly access the data frame with the data parameter in the analysis functions.
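
For example, for a data frame read into an object with another name, here the hypothetical name myData, specify the data parameter explicitly:

myData <- Read("Employee")
BarChart(Dept, data=myData)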

As an option, also read the table of variable labels. Create the table formatted as two columns. The first column is the variable name and the second column is the corresponding variable label. Not all variables need be entered into the table. The table can be a csv file or an Excel file.
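
For example, the first rows of a csv file of variable labels for the Employee data could appear as follows, one variable per row with the name in the first column and the label in the second (a sketch; see the Read and Write vignette for the exact layout):

Years,Time of Company Employment
Gender,Man or Woman
Salary,Annual Salary (USD)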

Read the file of variable labels into the l data frame, currently the only permitted name. The labels will be displayed on both the text and visualization output. Each displayed label is the variable name juxtaposed with the corresponding label, as shown in the following output.

l <- rd("Employee_lbl")
#> 
#> >>> Suggestions
#> Recommended binary format for data files: feather
#>   Create with Write(d, "your_file", format="feather")
#> More details about your data, Enter:  details()  for d, or  details(name)
#> 
#> Data Types
#> ------------------------------------------------------------
#> character: Non-numeric data values
#> ------------------------------------------------------------
#> 
#>     Variable                  Missing  Unique 
#>         Name     Type  Values  Values  Values   First and last values
#> ------------------------------------------------------------------------------------------
#>  1     label character      8       0       8   Time of Company Employment ... Test score on legal issues after instruction
#> ------------------------------------------------------------------------------------------
l
#>                                                label
#> Years                     Time of Company Employment
#> Gender                                  Man or Woman
#> Dept                             Department Employed
#> Salary                           Annual Salary (USD)
#> JobSat            Satisfaction with Work Environment
#> Plan             1=GoodHealth, 2=GetWell, 3=BestCare
#> Pre    Test score on legal issues before instruction
#> Post    Test score on legal issues after instruction

Bar Chart

more examples of bar charts and pie charts

Consider the categorical variable Dept in the Employee data table. Use BarChart() to tabulate and visualize the number of employees in each department, here relying upon the default data frame (table) named d. Otherwise, add the data= option for a data frame with another name.

BarChart(Dept)
Bar chart of tabulated counts of employees in each department.

#> >>> Suggestions
#> BarChart(Dept, horiz=TRUE)  # horizontal bar chart
#> BarChart(Dept, fill="reds")  # red bars of varying lightness
#> PieChart(Dept)  # doughnut (ring) chart
#> Plot(Dept)  # bubble plot
#> Plot(Dept, stat="count")  # lollipop plot 
#> 
#> --- Dept --- 
#> 
#> Missing Values: 1 
#> 
#>                 ACCT   ADMN   FINC   MKTG   SALE    Total 
#> Frequencies:       5      6      4      6     15       36 
#> Proportions:   0.139  0.167  0.111  0.167  0.417    1.000 
#> 
#> Chi-squared test of null hypothesis of equal probabilities 
#>   Chisq = 10.944, df = 4, p-value = 0.027

Specify a single fill color with the fill parameter, the edge color of the bars with color. Set the transparency level with transparency. Against a lighter background, display the value for each bar with a darker color using the labels_color parameter. To specify a color, use color names, specify a color with either its rgb() or hcl() color space coordinates, or use the lessR custom color palette function getColors().

BarChart(Dept, fill="darkred", color="black", transparency=.8,
         labels_color="black")

#> >>> Suggestions
#> BarChart(Dept, horiz=TRUE)  # horizontal bar chart
#> BarChart(Dept, fill="reds")  # red bars of varying lightness
#> PieChart(Dept)  # doughnut (ring) chart
#> Plot(Dept)  # bubble plot
#> Plot(Dept, stat="count")  # lollipop plot 
#> 
#> --- Dept --- 
#> 
#> Missing Values: 1 
#> 
#>                 ACCT   ADMN   FINC   MKTG   SALE    Total 
#> Frequencies:       5      6      4      6     15       36 
#> Proportions:   0.139  0.167  0.111  0.167  0.417    1.000 
#> 
#> Chi-squared test of null hypothesis of equal probabilities 
#>   Chisq = 10.944, df = 4, p-value = 0.027

Use the theme parameter to change the entire color theme: “colors”, “lightbronze”, “dodgerblue”, “slatered”, “darkred”, “gray”, “gold”, “darkgreen”, “blue”, “red”, “rose”, “green”, “purple”, “sienna”, “brown”, “orange”, “white”, and “light”. In this example, changing the full theme accomplishes the same as changing the fill color. Turn off the displayed value on each bar by setting the labels parameter to "off". Specify a horizontal bar chart with the base R parameter horiz.

BarChart(Dept, theme="gray", labels="off", horiz=TRUE)

#> >>> Suggestions
#> BarChart(Dept, horiz=TRUE)  # horizontal bar chart
#> BarChart(Dept, fill="reds")  # red bars of varying lightness
#> PieChart(Dept)  # doughnut (ring) chart
#> Plot(Dept)  # bubble plot
#> Plot(Dept, stat="count")  # lollipop plot 
#> 
#> --- Dept --- 
#> 
#> Missing Values: 1 
#> 
#>                 ACCT   ADMN   FINC   MKTG   SALE    Total 
#> Frequencies:       5      6      4      6     15       36 
#> Proportions:   0.139  0.167  0.111  0.167  0.417    1.000 
#> 
#> Chi-squared test of null hypothesis of equal probabilities 
#>   Chisq = 10.944, df = 4, p-value = 0.027

Histogram

more examples of histograms

Consider the continuous variable Salary in the Employee data table. Use Histogram() to tabulate and display the distribution of Salary, here relying upon the default data frame (table) named d, so the data= parameter is not needed.

Histogram(Salary)
Histogram of tabulated counts for the bins of Salary.

#> >>> Suggestions 
#> bin_width: set the width of each bin 
#> bin_start: set the start of the first bin 
#> bin_end: set the end of the last bin 
#> Histogram(Salary, density=TRUE)  # smoothed curve + histogram 
#> Plot(Salary)  # Violin/Box/Scatterplot (VBS) plot 
#> 
#> --- Salary --- 
#>  
#>      n   miss         mean           sd          min          mdn          max 
#>      37      0    73795.557    21799.533    46124.970    69547.600   134419.230 
#>  
#> 
#>   
#> --- Outliers ---     from the box plot: 1 
#>  
#> Small      Large 
#> -----      ----- 
#>             134419.2 
#> 
#> 
#> Bin Width: 10000 
#> Number of Bins: 10 
#>  
#>              Bin  Midpnt  Count    Prop  Cumul.c  Cumul.p 
#> --------------------------------------------------------- 
#>   40000 >  50000   45000      4    0.11        4     0.11 
#>   50000 >  60000   55000      8    0.22       12     0.32 
#>   60000 >  70000   65000      8    0.22       20     0.54 
#>   70000 >  80000   75000      5    0.14       25     0.68 
#>   80000 >  90000   85000      3    0.08       28     0.76 
#>   90000 > 100000   95000      5    0.14       33     0.89 
#>  100000 > 110000  105000      1    0.03       34     0.92 
#>  110000 > 120000  115000      1    0.03       35     0.95 
#>  120000 > 130000  125000      1    0.03       36     0.97 
#>  130000 > 140000  135000      1    0.03       37     1.00 
#> 

By default, the Histogram() function colors the bars according to the current, active color theme. The function also provides the frequency distribution from which the histogram is constructed, summary statistics, and an outlier analysis based on Tukey’s outlier detection rules for box plots.

Use the parameters bin_start, bin_width, and bin_end to customize the histogram.

The color is easy to change, either by changing the color theme with style() or by changing the fill color directly with the fill parameter. Refer to standard R colors, displayed with the lessR function showColors(), or implicitly invoke the lessR color palette generating function getColors(). Each 30 degrees of the color wheel is named, such as "greens" and "rusts", and implements a sequential color palette.

Histogram(Salary, bin_start=35000, bin_width=14000, fill="reds")
Customized histogram.

#> >>> Suggestions 
#> bin_end: set the end of the last bin 
#> Histogram(Salary, density=TRUE)  # smoothed curve + histogram 
#> Plot(Salary)  # Violin/Box/Scatterplot (VBS) plot 
#> 
#> --- Salary --- 
#>  
#>      n   miss         mean           sd          min          mdn          max 
#>      37      0    73795.557    21799.533    46124.970    69547.600   134419.230 
#>  
#> 
#>   
#> --- Outliers ---     from the box plot: 1 
#>  
#> Small      Large 
#> -----      ----- 
#>             134419.2 
#> 
#> 
#> Bin Width: 14000 
#> Number of Bins: 8 
#>  
#>              Bin  Midpnt  Count    Prop  Cumul.c  Cumul.p 
#> --------------------------------------------------------- 
#>   35000 >  49000   42000      1    0.03        1     0.03 
#>   49000 >  63000   56000     14    0.38       15     0.41 
#>   63000 >  77000   70000      9    0.24       24     0.65 
#>   77000 >  91000   84000      4    0.11       28     0.76 
#>   91000 > 105000   98000      5    0.14       33     0.89 
#>  105000 > 119000  112000      2    0.05       35     0.95 
#>  119000 > 133000  126000      1    0.03       36     0.97 
#>  133000 > 147000  140000      1    0.03       37     1.00 
#> 

Scatterplot

more examples of scatter plots and related

Specify an x and a y variable with the Plot() function to obtain a scatterplot. The two variables can be any combination of continuous or categorical. One variable can also be specified by itself. A scatterplot of two categorical variables yields a bubble plot. Below is a scatterplot of two continuous variables.

Plot(Years, Salary)

#> 
#> >>> Suggestions  or  enter: style(suggest=FALSE)
#> Plot(Years, Salary, enhance=TRUE)  # many options
#> Plot(Years, Salary, fill="skyblue")  # interior fill color of points
#> Plot(Years, Salary, fit="lm", fit_se=c(.90,.99))  # fit line, stnd errors
#> Plot(Years, Salary, MD_cut=6)  # Mahalanobis distance from center > 6 is an outlier 
#> 
#> 
#> >>> Pearson's product-moment correlation 
#>  
#> Years: Time of Company Employment 
#> Salary: Annual Salary (USD) 
#>  
#> Number of paired values with neither missing, n = 36 
#> Sample Correlation of Years and Salary: r = 0.852 
#>   
#> Hypothesis Test of 0 Correlation:  t = 9.501,  df = 34,  p-value = 0.000 
#> 95% Confidence Interval for Correlation:  0.727 to 0.923 
#> 

Enhance the default scatterplot with parameter enhance. The visualization includes the mean of each variable indicated by the respective line through the scatterplot, the 95% confidence ellipse, labeled outliers, least-squares regression line with 95% confidence interval, and the corresponding regression line with the outliers removed.

Plot(Years, Salary, enhance=TRUE)
#> [Ellipse with Murdoch and Chow's function ellipse from their ellipse package]

#> 
#> 
#> >>> Suggestions  or  enter: style(suggest=FALSE)
#> Plot(Years, Salary, color="red")  # exterior edge color of points
#> Plot(Years, Salary, fit="lm", fit_se=c(.90,.99))  # fit line, stnd errors
#> Plot(Years, Salary, out_cut=.10)  # label top 10% from center as outliers 
#> 
#> >>> Outlier analysis with Mahalanobis Distance 
#>  
#>   MD  ID 
#> ----- ----- 
#> 8.14  18 
#> 7.84  34 
#>  
#> 5.63  31 
#> 5.58  19 
#> 3.75   4 
#> ...  ... 
#> 
#> 
#> >>> Pearson's product-moment correlation 
#>  
#> Years: Time of Company Employment 
#> Salary: Annual Salary (USD) 
#>  
#> Number of paired values with neither missing, n = 36 
#> Sample Correlation of Years and Salary: r = 0.852 
#>   
#> Hypothesis Test of 0 Correlation:  t = 9.501,  df = 34,  p-value = 0.000 
#> 95% Confidence Interval for Correlation:  0.727 to 0.923 
#> 

The default plot for a single continuous variable includes not only the scatterplot, but also the superimposed violin plot and box plot, with outliers identified. Call this plot the VBS plot.

Plot(Salary)
#> [Violin/Box/Scatterplot graphics from Deepayan Sarkar's lattice package]
#> 
#> >>> Suggestions
#> Plot(Salary, out_cut=2, fences=TRUE, vbs_mean=TRUE)  # Label two outliers ...
#> Plot(Salary, box_adj=TRUE)  # Adjust boxplot whiskers for asymmetry

#> --- Salary --- 
#> Present: 37 
#> Missing: 0 
#> Total  : 37 
#>  
#> Mean         : 73795.557 
#> Stnd Dev     : 21799.533 
#> IQR          : 31012.560 
#> Skew         : 0.190   [medcouple, -1 to 1] 
#>  
#> Minimum      : 46124.970 
#> Lower Whisker: 46124.970 
#> 1st Quartile : 56772.950 
#> Median       : 69547.600 
#> 3rd Quartile : 87785.510 
#> Upper Whisker: 122563.380 
#> Maximum      : 134419.230 
#> 
#>   
#> --- Outliers ---     from the box plot: 1 
#>  
#> Small      Large 
#> -----      ----- 
#>             134419.23 
#> 
#> Number of duplicated values: 0 
#> 
#> Parameter values (can be manually set) 
#> ------------------------------------------------------- 
#> size: 0.61      size of plotted points 
#> out_size: 0.82  size of plotted outlier points 
#> jitter_y: 0.45 random vertical movement of points 
#> jitter_x: 0.00  random horizontal movement of points 
#> bw: 9529.04       set bandwidth higher for smoother edges

Following is a scatterplot in the form of a bubble plot for two categorical variables.

Plot(JobSat, Gender)

#> 
#> >>> Suggestions or enter: style(suggest=FALSE)
#> Plot(JobSat, Gender, size_cut=FALSE) 
#> Plot(JobSat, Gender, trans=.8, bg="off", grid="off") 
#> SummaryStats(JobSat, Gender)  # or ss 
#> 
#> Joint and Marginal Frequencies 
#> ------------------------------ 
#>  
#>      JobSat 
#> Gender   high low med Sum 
#>   M         3  11   4  18 
#>   W         8   2   7  17 
#>   Sum      11  13  11  35 
#> 
#> Cramer's V: 0.515 
#>  
#> Chi-square Test of Independence:
#>      Chisq = 9.301, df = 2, p-value = 0.010 
#> 
#> Some Parameter values (can be manually set) 
#> ------------------------------------------------------- 
#> radius: 0.22    size of largest bubble 
#> power: 0.50     relative bubble sizes

Means and Proportions

Means

more examples of t-tests and ANOVA

For the independent-groups t-test, specify the response variable to the left of the tilde, ~, and the categorical variable with two groups, the grouping variable, to the right of the tilde.

ttest(Salary ~ Gender)
#> 
#> Compare Salary across Gender with levels M and W 
#> Grouping Variable:  Gender, Man or Woman
#> Response Variable:  Salary, Annual Salary (USD)
#> 
#> 
#> ------ Describe ------
#> 
#> Salary for Gender M:  n.miss = 0,  n = 18,  mean = 81147.458,  sd = 23128.436
#> Salary for Gender W:  n.miss = 0,  n = 19,  mean = 66830.598,  sd = 18438.456
#> 
#> Mean Difference of Salary:  14316.860
#> 
#> Weighted Average Standard Deviation:   20848.636 
#> 
#> 
#> ------ Assumptions ------
#> 
#> Note: These hypothesis tests can perform poorly, and the 
#>       t-test is typically robust to violations of assumptions. 
#>       Use as heuristic guides instead of interpreting literally. 
#> 
#> Null hypothesis, for each group, is a normal distribution of Salary.
#> Group M  Shapiro-Wilk normality test:  W = 0.962,  p-value = 0.647
#> Group W  Shapiro-Wilk normality test:  W = 0.828,  p-value = 0.003
#> 
#> Null hypothesis is equal variances of Salary, homogeneous.
#> Variance Ratio test:  F = 534924536.348/339976675.129 = 1.573,  df = 17;18,  p-value = 0.349
#> Levene's test, Brown-Forsythe:  t = 1.302,  df = 35,  p-value = 0.201
#> 
#> 
#> ------ Infer ------
#> 
#> --- Assume equal population variances of Salary for each Gender 
#> 
#> t-cutoff for 95% range of variation: tcut =  2.030 
#> Standard Error of Mean Difference: SE =  6857.494 
#> 
#> Hypothesis Test of 0 Mean Diff:  t-value = 2.088,  df = 35,  p-value = 0.044
#> 
#> Margin of Error for 95% Confidence Level:  13921.454
#> 95% Confidence Interval for Mean Difference:  395.406 to 28238.314
#> 
#> 
#> --- Do not assume equal population variances of Salary for each Gender 
#> 
#> t-cutoff: tcut =  2.036 
#> Standard Error of Mean Difference: SE =  6900.112 
#> 
#> Hypothesis Test of 0 Mean Diff:  t = 2.075,  df = 32.505, p-value = 0.046
#> 
#> Margin of Error for 95% Confidence Level:  14046.505
#> 95% Confidence Interval for Mean Difference:  270.355 to 28363.365
#> 
#> 
#> ------ Effect Size ------
#> 
#> --- Assume equal population variances of Salary for each Gender 
#> 
#> Standardized Mean Difference of Salary, Cohen's d:  0.687
#> 
#> 
#> ------ Practical Importance ------
#> 
#> Minimum Mean Difference of practical importance: mmd
#> Minimum Standardized Mean Difference of practical importance: msmd
#> Neither value specified, so no analysis
#> 
#> 
#> ------ Graphics Smoothing Parameter ------
#> 
#> Density bandwidth for Gender M: 14777.329
#> Density bandwidth for Gender W: 11630.959

Next, to analyze the operational efficiency of a weaving loom, do the two-way independent-groups ANOVA, analyzing the variable breaks across the levels of tension and wool. Specify the second independent variable preceded by a * sign.

ANOVA(breaks ~ tension * wool, data=warpbreaks)

#> 
#>   BACKGROUND 
#> 
#> Response Variable: breaks 
#>  
#> Factor Variable 1: tension 
#>   Levels: L M H 
#>  
#> Factor Variable 2: wool 
#>   Levels: A B 
#>  
#> Number of cases (rows) of data:  54 
#> Number of cases retained for analysis:  54 
#>  
#> Two-way Between Groups ANOVA 
#> 
#> 
#>   DESCRIPTIVE STATISTICS  
#> 
#>  
#> Equal cell sizes, so balanced design 
#>  
#>                       
#>       tension 
#>  wool    L    M    H 
#>     A    9    9    9 
#>     B    9    9    9 
#> 
#>                          
#>       tension 
#>  wool     L     M     H 
#>     A 44.56 24.00 24.56 
#>     B 28.22 28.78 18.78 
#> 
#> tension 
#>                       
#>         L     M     H 
#>   1 36.39 26.39 21.67 
#>  
#> wool 
#>                 
#>         A     B 
#>   1 31.04 25.26 
#> 
#> NA 
#>  
#> 
#>                         
#>       tension 
#>  wool     L    M     H 
#>     A 18.10 8.66 10.27 
#>     B  9.86 9.43  4.89 
#> 
#> 
#>   ANOVA 
#> 
#>  
#>              df    Sum Sq   Mean Sq   F-value   p-value 
#>      tension  2   2034.26   1017.13      8.50    0.0007 
#>         wool  1    450.67    450.67      3.77    0.0582 
#> tension:wool  2   1002.78    501.39      4.19    0.0210 
#>    Residuals 48   5745.11    119.69 
#> 
#> Partial Omega Squared for tension: 0.217 
#> Partial Omega Squared for wool: 0.049 
#> Partial Omega Squared for tension & wool: 0.106 
#>  
#> Cohen's f for tension: 0.527 
#> Cohen's f for wool: 0.226 
#> Cohen's f for tension_&_wool: 0.344 
#> 
#> 
#>  TUKEY MULTIPLE COMPARISONS OF MEANS 
#> 
#> Family-wise Confidence Level: 0.95 
#> 
#> Factor: tension 
#> ------------------------------- 
#>         diff    lwr   upr p adj 
#>   M-L -10.00 -18.82 -1.18  0.02 
#>   H-L -14.72 -23.54 -5.90  0.00 
#>   H-M  -4.72 -13.54  4.10  0.40 
#> 
#> Factor: wool 
#> ----------------------------- 
#>        diff    lwr  upr p adj 
#>   B-A -5.78 -11.76 0.21  0.06 
#> 
#> Cell Means 
#> ------------------------------------ 
#>             diff    lwr    upr p adj 
#>   M:A-L:A -20.56 -35.86  -5.25  0.00 
#>   H:A-L:A -20.00 -35.31  -4.69  0.00 
#>   L:B-L:A -16.33 -31.64  -1.03  0.03 
#>   M:B-L:A -15.78 -31.08  -0.47  0.04 
#>   H:B-L:A -25.78 -41.08 -10.47  0.00 
#>   H:A-M:A   0.56 -14.75  15.86  1.00 
#>   L:B-M:A   4.22 -11.08  19.53  0.96 
#>   M:B-M:A   4.78 -10.53  20.08  0.94 
#>   H:B-M:A  -5.22 -20.53  10.08  0.91 
#>   L:B-H:A   3.67 -11.64  18.97  0.98 
#>   M:B-H:A   4.22 -11.08  19.53  0.96 
#>   H:B-H:A  -5.78 -21.08   9.53  0.87 
#>   M:B-L:B   0.56 -14.75  15.86  1.00 
#>   H:B-L:B  -9.44 -24.75   5.86  0.46 
#>   H:B-M:B -10.00 -25.31   5.31  0.39 
#> 
#> 
#>   RESIDUALS 
#> 
#> Fitted Values, Residuals, Standardized Residuals 
#>    [sorted by Standardized Residuals, ignoring + or - sign] 
#>    [res_rows = 20, out of 54 cases (rows) of data, or res_rows="all"] 
#> ------------------------------------------------ 
#>      tension wool breaks fitted residual z-resid 
#>    5       L    A  70.00  44.56    25.44    2.47 
#>    9       L    A  67.00  44.56    22.44    2.18 
#>    4       L    A  25.00  44.56   -19.56   -1.90 
#>    8       L    A  26.00  44.56   -18.56   -1.80 
#>    1       L    A  26.00  44.56   -18.56   -1.80 
#>   24       H    A  43.00  24.56    18.44    1.79 
#>   36       L    B  44.00  28.22    15.78    1.53 
#>    2       L    A  30.00  44.56   -14.56   -1.41 
#>   23       H    A  10.00  24.56   -14.56   -1.41 
#>   29       L    B  14.00  28.22   -14.22   -1.38 
#>   37       M    B  42.00  28.78    13.22    1.28 
#>   34       L    B  41.00  28.22    12.78    1.24 
#>   40       M    B  16.00  28.78   -12.78   -1.24 
#>   14       M    A  12.00  24.00   -12.00   -1.16 
#>   18       M    A  36.00  24.00    12.00    1.16 
#>   19       H    A  36.00  24.56    11.44    1.11 
#>   16       M    A  35.00  24.00    11.00    1.07 
#>   41       M    B  39.00  28.78    10.22    0.99 
#>   44       M    B  39.00  28.78    10.22    0.99 
#>   39       M    B  19.00  28.78    -9.78   -0.95

For a one-way ANOVA, just include one independent variable. A randomized block design is also available.
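
For example, a minimal sketch of the corresponding one-way ANOVA, with tension as the only independent variable:

ANOVA(breaks ~ tension, data=warpbreaks)   # one independent variable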

Proportions

more examples of analyzing proportions

The analysis of proportions is of two primary types: the goodness-of-fit test for a single categorical variable, and the test of independence for two categorical variables.

Here, just analyze the χ² test of independence, which applies to two categorical variables. The first categorical variable listed in this example is the value of the parameter variable, the first parameter in the function definition, so it does not need the parameter name. The second categorical variable listed must include the parameter name by.

The question for the analysis is whether the observed frequencies of Jacket thickness and Bike ownership differ sufficiently from the frequencies expected under the null hypothesis to conclude that the variables are related.

d <- Read("Jackets")
#> 
#> >>> Suggestions
#> Recommended binary format for data files: feather
#>   Create with Write(d, "your_file", format="feather")
#> More details about your data, Enter:  details()  for d, or  details(name)
#> 
#> Data Types
#> ------------------------------------------------------------
#> character: Non-numeric data values
#> ------------------------------------------------------------
#> 
#>     Variable                  Missing  Unique 
#>         Name     Type  Values  Values  Values   First and last values
#> ------------------------------------------------------------------------------------------
#>  1      Bike character   1025       0       2   BMW  Honda  Honda ... Honda  Honda  BMW
#>  2    Jacket character   1025       0       3   Lite  Lite  Lite ... Lite  Med  Lite
#> ------------------------------------------------------------------------------------------
Prop_test(Jacket, by=Bike)
#> variable: Jacket 
#> by: Bike 
#> 
#> <<< Pearson's Chi-squared test 
#> 
#> --- Description
#> 
#>        Jacket
#> Bike    Lite  Med Thick  Sum
#>   BMW     89  135   194  418
#>   Honda  283  207   117  607
#>   Sum    372  342   311 1025
#> 
#>  Cramer's V: 0.319 
#> 
#>  Row Col Observed Expected Residual Stnd Res
#>    1   1       89  151.703  -62.703   -8.288
#>    1   2      135  139.469   -4.469   -0.602
#>    1   3      194  126.827   67.173    9.287
#>    2   1      283  220.297   62.703    8.288
#>    2   2      207  202.531    4.469    0.602
#>    2   3      117  184.173  -67.173   -9.287
#> 
#> --- Inference
#> 
#> Chi-square statistic: 104.083 
#> Degrees of freedom: 2 
#> Hypothesis test of equal population proportions: p-value = 0.000

Regression Analysis

more examples of regression and logistic regression

The full output is extensive: Summary of the analysis, estimated model, fit indices, ANOVA, correlation matrix, collinearity analysis, best subset regression, residuals and influence statistics, and prediction intervals. The motivation is to provide virtually all of the information needed for a proper regression analysis.

d <- Read("Employee", quiet=TRUE)
reg(Salary ~ Years + Pre)

#> >>> Suggestion
#> # Create an R markdown file for interpretative output with  Rmd = "file_name"
#> reg(Salary ~ Years + Pre, Rmd="eg")  
#> 
#> 
#>   BACKGROUND 
#> 
#> Data Frame:  d 
#>  
#> Response Variable: Salary 
#> Predictor Variable 1: Years 
#> Predictor Variable 2: Pre 
#>  
#> Number of cases (rows) of data:  37 
#> Number of cases retained for analysis:  36 
#> 
#> 
#>   BASIC ANALYSIS 
#> 
#>              Estimate    Std Err  t-value  p-value   Lower 95%   Upper 95% 
#> (Intercept) 44140.971  13666.115    3.230    0.003   16337.052   71944.891 
#>       Years  3251.408    347.529    9.356    0.000    2544.355    3958.462 
#>         Pre   -18.265    167.652   -0.109    0.914    -359.355     322.825 
#> 
#> Standard deviation of Salary: 21,822.372 
#>  
#> Standard deviation of residuals:  11,753.478 for df=33 
#> 95% range of residuals:  47,825.260 = 2 * (2.035 * 11,753.478) 
#>  
#> R-squared: 0.726    Adjusted R-squared: 0.710    PRESS R-squared: 0.659 
#> 
#> Null hypothesis of all 0 population slope coefficients:
#>   F-statistic: 43.827     df: 2 and 33     p-value:  0.000 
#> 
#> -- Analysis of Variance 
#>  
#>             df           Sum Sq          Mean Sq   F-value   p-value 
#>     Years    1  12107157290.292  12107157290.292    87.641     0.000 
#>       Pre    1      1639658.444      1639658.444     0.012     0.914 
#>  
#> Model        2  12108796948.736   6054398474.368    43.827     0.000 
#> Residuals   33   4558759843.773    138144237.690 
#> Salary      35  16667556792.508    476215908.357 
#> 
#> 
#>   K-FOLD CROSS-VALIDATION 
#> 
#> 
#>   RELATIONS AMONG THE VARIABLES 
#> 
#>          Salary Years  Pre 
#>   Salary   1.00  0.85 0.03 
#>    Years   0.85  1.00 0.05 
#>      Pre   0.03  0.05 1.00 
#> 
#>         Tolerance       VIF 
#>   Years     0.998     1.002 
#>     Pre     0.998     1.002 
#> 
#>  Years Pre    R2adj    X's 
#>      1   0    0.718      1 
#>      1   1    0.710      2 
#>      0   1   -0.028      1 
#>  
#> [based on Thomas Lumley's leaps function from the leaps package] 
#> 
#> 
#>   RESIDUALS AND INFLUENCE 
#> 
#> -- Data, Fitted, Residual, Studentized Residual, Dffits, Cook's Distance 
#>    [sorted by Cook's Distance] 
#>    [n_res_rows = 20, out of 36 rows of data, or do n_res_rows="all"] 
#> ----------------------------------------------------------------------------------------- 
#>                        Years     Pre     Salary     fitted      resid rstdnt dffits cooks 
#>       Correll, Trevon     21      97 134419.230 110648.843  23770.387  2.424  1.217 0.430 
#>         James, Leslie     18      70 122563.380 101387.773  21175.607  1.998  0.714 0.156 
#>         Capelle, Adam     24      83 108138.430 120658.778 -12520.348 -1.211 -0.634 0.132 
#>           Hoang, Binh     15      96 111074.860  91158.659  19916.201  1.860  0.649 0.131 
#>    Korhalkar, Jessica      2      74  72502.500  49292.181  23210.319  2.171  0.638 0.122 
#>        Billing, Susan      4      91  72675.260  55484.493  17190.767  1.561  0.472 0.071 
#>          Singh, Niral      2      59  61055.440  49566.155  11489.285  1.064  0.452 0.068 
#>        Skrotzki, Sara     18      63  91352.330 101515.627 -10163.297 -0.937 -0.397 0.053 
#>      Saechao, Suzanne      8      98  55545.250  68362.271 -12817.021 -1.157 -0.390 0.050 
#>         Kralik, Laura     10      74  92681.190  75303.447  17377.743  1.535  0.287 0.026 
#>   Anastasiou, Crystal      2      59  56508.320  49566.155   6942.165  0.636  0.270 0.025 
#>     Langston, Matthew      5      94  49188.960  58681.106  -9492.146 -0.844 -0.268 0.024 
#>        Afshari, Anbar      6     100  69441.930  61822.925   7619.005  0.689  0.264 0.024 
#>   Cassinelli, Anastis     10      80  57562.360  75193.857 -17631.497 -1.554 -0.265 0.022 
#>      Osterman, Pascal      5      69  49704.790  59137.730  -9432.940 -0.826 -0.216 0.016 
#>   Bellingar, Samantha     10      67  66337.830  75431.301  -9093.471 -0.793 -0.198 0.013 
#>          LaRoe, Maria     10      80  61961.290  75193.857 -13232.567 -1.148 -0.195 0.013 
#>      Ritchie, Darnell      7      82  53788.260  65403.102 -11614.842 -1.006 -0.190 0.012 
#>        Sheppard, Cory     14      66  95027.550  88455.199   6572.351  0.579  0.176 0.011 
#>        Downs, Deborah      7      90  57139.900  65256.982  -8117.082 -0.706 -0.174 0.010 
#> 
#> 
#>   PREDICTION ERROR 
#> 
#> -- Data, Predicted, Standard Error of Prediction, 95% Prediction Intervals 
#>    [sorted by lower bound of prediction interval] 
#>    [to see all intervals add n_pred_rows="all"] 
#>  ---------------------------------------------- 
#> 
#>                        Years    Pre     Salary       pred    s_pred    pi.lwr     pi.upr     width 
#>          Hamide, Bita      1     83  51036.850  45876.388 12290.483 20871.211  70881.564 50010.352 
#>          Singh, Niral      2     59  61055.440  49566.155 12619.291 23892.014  75240.296 51348.281 
#>   Anastasiou, Crystal      2     59  56508.320  49566.155 12619.291 23892.014  75240.296 51348.281 
#> ... 
#>          Link, Thomas     10     83  66312.890  75139.062 11933.518 50860.137  99417.987 48557.849 
#>          LaRoe, Maria     10     80  61961.290  75193.857 11918.048 50946.405  99441.308 48494.903 
#>   Cassinelli, Anastis     10     80  57562.360  75193.857 11918.048 50946.405  99441.308 48494.903 
#> ... 
#>       Correll, Trevon     21     97 134419.230 110648.843 12881.876 84440.470 136857.217 52416.747 
#>         Capelle, Adam     24     83 108138.430 120658.778 12955.608 94300.394 147017.161 52716.767 
#> 
#> ---------------------------------- 
#> Plot 1: Distribution of Residuals 
#> Plot 2: Residuals vs Fitted Values 
#> ----------------------------------

As with several other lessR functions, save the output to an object with the name of your choosing, such as r, and then reference desired pieces of the output. View the names of those pieces from the manual, here obtained with ?reg, or use the R names() function, as in the following example.

r <- reg(Salary ~ Years + Pre)

names(r)
#>  [1] "out_suggest"     "call"            "formula"         "vars"           
#>  [5] "out_title_bck"   "out_background"  "out_title_basic" "out_estimates"  
#>  [9] "out_fit"         "out_anova"       "out_title_mod"   "out_mod"        
#> [13] "out_mdls"        "out_title_kfold" "out_kfold"       "out_title_rel"  
#> [17] "out_cor"         "out_collinear"   "out_subsets"     "out_title_res"  
#> [21] "out_residuals"   "out_title_pred"  "out_predict"     "out_ref"        
#> [25] "out_Rmd"         "out_Word"        "out_pdf"         "out_odt"        
#> [29] "out_rtf"         "out_plots"       "n.vars"          "n.obs"          
#> [33] "n.keep"          "coefficients"    "sterrs"          "tvalues"        
#> [37] "pvalues"         "cilb"            "ciub"            "anova_model"    
#> [41] "anova_residual"  "anova_total"     "se"              "resid_range"    
#> [45] "Rsq"             "Rsqadj"          "PRESS"           "RsqPRESS"       
#> [49] "m_se"            "m_MSE"           "m_Rsq"           "cor"            
#> [53] "tolerances"      "vif"             "resid.max"       "pred_min_max"   
#> [57] "residuals"       "fitted"          "cooks.distance"  "model"          
#> [61] "terms"

View any piece of output with the name of the saved object, a dollar sign, and the specific name of that piece. Here, examine the fit indices.

r$out_fit
#> Standard deviation of Salary: 21,822.372
#> 
#> Standard deviation of residuals:  11,753.478 for df=33
#> 95% range of residuals:  47,825.260 = 2 * (2.035 * 11,753.478)
#> 
#> R-squared: 0.726    Adjusted R-squared: 0.710    PRESS R-squared: 0.659
#> 
#> Null hypothesis of all 0 population slope coefficients:
#>   F-statistic: 43.827     df: 2 and 33     p-value:  0.000

These expressions could also be included in a markdown document that systematically reviews each desired piece of the output.
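
For example, a minimal sketch of an R Markdown code chunk that displays the fit indices from the saved object r:

```{r}
r$out_fit
```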

Time Series and Forecasting

more examples of run charts, time series charts, and forecasting

The time series plot, plotting the values of a variable across time, is a special case of a scatterplot, potentially with points of size 0 and adjacent points connected by a line segment. Indicate a time series by specifying the x-variable, the first variable listed, as a variable of type Date. Unlike base R functions, Plot() automatically converts data values to type Date when the dates are specified in a digital format, such as 18/8/2024 and related formats, plus forms such as 2024 Q3 or 2024 Aug. Otherwise, explicitly use the R function as.Date() to convert to this format before calling Plot(), or pass the date format directly with the ts_format parameter.
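
For example, if the date field had been read as character strings in a day/month/year format, a minimal sketch of the explicit conversion before calling Plot(), with a hypothetical format string:

d$Month <- as.Date(d$Month, format="%d/%m/%Y")   # convert character strings to type Date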

Plot() implements time series forecasting based on trend and seasonality with either exponential smoothing or regression analysis, including the accompanying visualization. Time series parameters include ts_unit, ts_agg, ts_ahead, and ts_format, illustrated below.

In this StockPrice data file, the date conversion has already been done.

d <- Read("StockPrice")
#> 
#> >>> Suggestions
#> Recommended binary format for data files: feather
#>   Create with Write(d, "your_file", format="feather")
#> More details about your data, Enter:  details()  for d, or  details(name)
#> 
#> Data Types
#> ------------------------------------------------------------
#> character: Non-numeric data values
#> Date: Date with year, month and day
#> double: Numeric data values with decimal digits
#> ------------------------------------------------------------
#> 
#>     Variable                  Missing  Unique 
#>         Name     Type  Values  Values  Values   First and last values
#> ------------------------------------------------------------------------------------------
#>  1     Month      Date   1419       0     473   1985-01-01 ... 2024-05-01
#>  2   Company character   1419       0       3   Apple  Apple ... Intel  Intel
#>  3     Price    double   1419       0    1400   0.100055  0.085392 ... 30.346739  30.555891
#>  4    Volume    double   1419       0    1419   6366416000 ... 229147100
#> ------------------------------------------------------------------------------------------
head(d)
#>        Month Company    Price     Volume
#> 1 1985-01-01   Apple 0.100055 6366416000
#> 2 1985-02-01   Apple 0.085392 4733388800
#> 3 1985-03-01   Apple 0.076335 4615587200
#> 4 1985-04-01   Apple 0.073316 2868028800
#> 5 1985-05-01   Apple 0.059947 4639129600
#> 6 1985-06-01   Apple 0.062103 5811388800

We have the date as Month, as well as the variables Company and stock Price.

d <- Read("StockPrice")
#> 
#> >>> Suggestions
#> Recommended binary format for data files: feather
#>   Create with Write(d, "your_file", format="feather")
#> More details about your data, Enter:  details()  for d, or  details(name)
#> 
#> Data Types
#> ------------------------------------------------------------
#> character: Non-numeric data values
#> Date: Date with year, month and day
#> double: Numeric data values with decimal digits
#> ------------------------------------------------------------
#> 
#>     Variable                  Missing  Unique 
#>         Name     Type  Values  Values  Values   First and last values
#> ------------------------------------------------------------------------------------------
#>  1     Month      Date   1419       0     473   1985-01-01 ... 2024-05-01
#>  2   Company character   1419       0       3   Apple  Apple ... Intel  Intel
#>  3     Price    double   1419       0    1400   0.100055  0.085392 ... 30.346739  30.555891
#>  4    Volume    double   1419       0    1419   6366416000 ... 229147100
#> ------------------------------------------------------------------------------------------
Plot(Month, Price, filter=(Company=="Apple"), area_fill="on")
#> 
#> filter:  (Company == "Apple") 
#> -----
#> Rows of data before filtering:  1419 
#> Rows of data after filtering:   473

#> 
#> >>> Suggestions or enter: style(suggest=FALSE)
#> Plot(Month, Price, ts_ahead=4)  # exponential smoothing forecast 4 time units
#> Plot(Month, Price, ts_unit="years")  # aggregate time by yearly sum
#> Plot(Month, Price, ts_unit="years", ts_agg="mean")  # aggregate by yearly mean
#> Plot(Month, Price, filter=(Company == "Apple"), area_fill="on", size=0)  # just line segments, no points
#> Plot(Month, Price, filter=(Company == "Apple"), area_fill="on", lwd=0)  # just points, no line segments 
#>  
#> 

With the by parameter, plot all three companies on the same panel.

Plot(Month, Price, by=Company)

#> 
#> >>> Suggestions or enter: style(suggest=FALSE)
#> Plot(Month, Price, ts_ahead=4)  # exponential smoothing forecast 4 time units
#> Plot(Month, Price, ts_unit="years")  # aggregate time by yearly sum
#> Plot(Month, Price, ts_unit="years", ts_agg="mean")  # aggregate by yearly mean
#> Plot(Month, Price, by=Company, size=0)  # just line segments, no points
#> Plot(Month, Price, by=Company, lwd=0)  # just points, no line segments
#> Plot(Month, Price, by=Company, area_fill="on")  # default color fill 
#>  
#> 

Here, aggregate Price over time, from months to quarters, plotting the mean of each quarter.

Plot(Month, Price, ts_unit="quarters", ts_agg="mean")
#> >>> Warning
#> The  Date  variable is not sorted in Increasing Order.
#> 
#> For a data frame named d, enter: 
#>     d <- sort_by(d, Month)
#> Maybe you have a  by  variable with repeating Date values?
#> Enter  ?sort_by  for more information and examples.
#> [with functions from Ryan, Ulrich, Bennett, and Joy's xts package]

#> 
#> >>> Suggestions or enter: style(suggest=FALSE)
#> Plot(Month, Price, ts_ahead=4)  # exponential smoothing forecast 4 time units
#> Plot(Month, Price, ts_unit="quarters", ts_agg="mean", size=0)  # just line segments, no points
#> Plot(Month, Price, ts_unit="quarters", ts_agg="mean", lwd=0)  # just points, no line segments
#> Plot(Month, Price, ts_unit="quarters", ts_agg="mean", area_fill="on")  # default color fill 
#>  
#> 

Plot() implements forecasting with an accompanying visualization, by either exponential smoothing or linear regression with seasonality. Parameters include ts_ahead for the number of ts_units to forecast into the future, and ts_format to provide a specific format for the date variable if not detected correctly by default. The parameter ts_method defaults to es for exponential smoothing, or set it to lm for linear regression. Control aspects of the exponential smoothing estimation and prediction algorithms with the parameters ts_level (alpha), ts_trend (beta), ts_seasons (gamma), ts_type for additive or multiplicative seasonality, and ts_PIlevel for the level of the prediction intervals.
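
For example, a sketch of a forecast estimated by linear regression with seasonality instead of the default exponential smoothing:

Plot(Month, Price, ts_ahead=24, ts_method="lm")   # regression-based forecast, 24 months ahead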

To forecast Apple’s stock price, focus here on the last several years of the data, from Row 400 through Row 473, the last row of data for Apple. In this example, forecast ahead 24 months.

d <- d[400:473,]
Plot(Month, Price, ts_unit="months", ts_agg="mean", ts_ahead=24)
#> [with functions from Ryan, Ulrich, Bennett, and Joy's xts package]

#> 
#> >>> Suggestions or enter: style(suggest=FALSE)
#> Plot(Month, Price, ts_ahead=4, ts_seasons=FALSE)  # turn off exponential smoothing seasonal effect
#> Plot(Month, Price, ts_unit="months", ts_agg="mean", ts_ahead=24, size=0)  # just line segments, no points
#> Plot(Month, Price, ts_unit="months", ts_agg="mean", ts_ahead=24, lwd=0)  # just points, no line segments
#> Plot(Month, Price, ts_unit="months", ts_agg="mean", ts_ahead=24, area_fill="on")  # default color fill 
#>  
#>  
#> 
#> Mean squared error of fit to data: 131.41612 
#> 
#> Coefficients for Linear Trend and Seasonality 
#>  b0: 180.3470  b1: 1.0775   
#>  s1: 3.3283  s2: 8.7171  s3: 3.5309  s4: -7.0631  s5: 2.6683  s6: 8.6831   
#>  s7: 2.9660  s8: -0.1221  s9: -5.2179  s10: -1.8843  s11: 0.4437  s12: 2.0530   
#>  
#> 
#> Smoothing Parameters 
#>  alpha: 0.800  beta: 0.001  gamma: 1.000 
#>  
#>       Month predicted    upper    lower     width
#> 1  Jun 2024  184.7528 162.8443 206.6612  43.81691
#> 2  Jul 2024  191.2191 163.1582 219.2801  56.12189
#> 3  Aug 2024  187.1104 154.0147 220.2061  66.19145
#> 4  Sep 2024  177.5939 140.1277 215.0600  74.93229
#> 5  Oct 2024  188.4028 147.0195 229.7860  82.76647
#> 6  Nov 2024  195.4951 150.5294 240.4607  89.93126
#> 7  Dec 2024  190.8555 142.5676 239.1433  96.57575
#> 8  Jan 2025  188.8448 137.4444 240.2453 102.80090
#> 9  Feb 2025  184.8265 130.4871 239.1659 108.67880
#> 10 Mar 2025  189.2376 132.1061 246.3691 114.26304
#> 11 Apr 2025  192.6431 132.8457 252.4405 119.59480
#> 12 May 2025  195.3299 132.9767 257.6831 124.70644
#> 13 Jun 2025  197.6827 131.5386 263.8267 132.28817
#> 14 Jul 2025  204.1490 135.6786 272.6194 136.94074
#> 15 Aug 2025  200.0403 129.3167 270.7638 141.44712
#> 16 Sep 2025  190.5237 117.6133 263.4342 145.82087
#> 17 Oct 2025  201.3327 126.2959 276.3694 150.07358
#> 18 Nov 2025  208.4249 131.3173 285.5326 154.21529
#> 19 Dec 2025  203.7853 124.6580 282.9127 158.25470
#> 20 Jan 2026  201.7747 120.6750 282.8745 162.19947
#> 21 Feb 2026  197.7564 114.7282 280.7845 166.05634
#> 22 Mar 2026  202.1675 117.2518 287.0831 169.83131
#> 23 Apr 2026  205.5730 118.8081 292.3378 173.52972
#> 24 May 2026  208.2598 119.6816 296.8380 177.15638

Factor Analysis

more examples of exploratory and confirmatory factor analysis

Access the lessR data set Mach4, the responses of 351 people to the 20 items of the Mach IV scale. Read the optional variable labels. Including the item contents as variable labels means that the output of the confirmatory factor analysis contains the item content grouped by factor.

d <- Read("Mach4", quiet=TRUE)
l <- Read("Mach4_lbl", var_labels=TRUE)
#> 
#> >>> Suggestions
#> Recommended binary format for data files: feather
#>   Create with Write(d, "your_file", format="feather")
#> More details about your data, Enter:  details()  for d, or  details(name)
#> 
#> Data Types
#> ------------------------------------------------------------
#> character: Non-numeric data values
#> ------------------------------------------------------------
#> 
#>     Variable                  Missing  Unique 
#>         Name     Type  Values  Values  Values   First and last values
#> ------------------------------------------------------------------------------------------
#>  1     label character     20       0      20   Never tell anyone the real reason you did something unless it is useful to do so ... Most people forget more easily the death of a parent than the loss of their property
#> ------------------------------------------------------------------------------------------

Calculate the correlations and store them here in mycor, a data structure that contains the computed correlation matrix named R. Extract R from mycor.

mycor <- cr(m01:m20)

R <- mycor$R

The correlation matrix for analysis is named R. The item (observed variable) correlation matrix is the numerical input into the factor analysis.

Here, do a four-factor solution with the default "promax" rotation. The default correlation matrix is mycor. The abbreviation for corEFA() is efa().

efa(R, n_factors=4)
#>   EXPLORATORY FACTOR ANALYSIS 
#> 
#> Loadings (except -0.2 to 0.2) 
#> ------------------------------------- 
#>       Factor1 Factor2 Factor3 Factor4 
#>   m06   0.828                  -0.290 
#>   m07   0.712                         
#>   m10   0.539                         
#>   m03   0.422   0.318                 
#>   m09   0.323                         
#>   m05           0.649                 
#>   m18           0.555   0.253         
#>   m13           0.543   0.226         
#>   m01           0.490                 
#>   m12           0.434  -0.230         
#>   m08           0.236  -0.202         
#>   m14           0.402   0.991  -0.401 
#>   m04                   0.426         
#>   m20           0.237  -0.282         
#>   m17                   0.267         
#>   m19                                 
#>   m11          -0.299   0.309  -0.609 
#>   m16                   0.274  -0.455 
#>   m02                          -0.319 
#>   m15  -0.207   0.203          -0.214 
#> 
#> Sum of Squares 
#> ------------------------------------------------ 
#>                  Factor1 Factor2 Factor3 Factor4 
#>      SS loadings   1.933   2.038   1.825   1.099 
#>   Proportion Var   0.097   0.102   0.091   0.055 
#>   Cumulative Var   0.097   0.199   0.290   0.345 
#> 
#>   CONFIRMATORY FACTOR ANALYSIS CODE 
#> 
#> MeasModel <-  
#> "  F1 =~ m01 + m02 + m03 + m04 + m05 
#>    F2 =~ m06 + m07 + m08 + m09 + m10 + m11 
#>    F3 =~ m12 + m13 + m14 + m15 
#>    F4 =~ m17 + m18 + m19 + m20 
#> " 
#>  
#> fit <- lessR::cfa(MeasModel)
#>  
#> library(lavaan) 
#> fit <- lavaan::cfa(MeasModel, data=d) 
#> summary(fit, fit.measures=TRUE, standardized=TRUE) 
#> 
#> Deletion threshold: min_loading = 0.2 
#> Deleted items: m16

The confirmatory factor analysis is of multiple-indicator measurement scales, that is, each item (observed variable) is assigned to only one factor. The solution method is centroid factor analysis.

Specify the measurement model for the analysis in lavaan notation. Define four factors: Deceit, Trust, Cynicism, and Flattery.

MeasModel <- 
" 
   Deceit =~ m07 + m06 + m10 + m09 
   Trust =~ m12 + m05 + m13 + m01 
   Cynicism =~ m11 + m16 + m04 
   Flattery =~ m15 + m02 
"

Pivot Tables

more examples of pivot tables

Aggregate with pivot(). Any function that processes a single vector of data, such as a column of data values for a variable in a data frame, and outputs a single computed value, the statistic, can be passed to pivot(). Functions can be user-defined or built-in.

Here, compute the mean and standard deviation of Price for each company in the StockPrice data set, included with lessR.

d <- Read("StockPrice", quiet=TRUE)
pivot(d, c(mean, sd), Price, by=Company)
#>   Company Price_n Price_na Price_mean Price_sd
#> 1   Apple     473        0     23.157   46.248
#> 2     IBM     473        0     60.010   43.547
#> 3   Intel     473        0     16.725   14.689

Interpret this call to pivot() as: for each value of Company, compute the mean and standard deviation of Price.

Select any two of the three possibilities for multiple parameter values: Multiple compute functions, multiple variables over which to compute, and multiple categorical variables by which to define groups for aggregation.
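
For example, a minimal sketch that computes a single statistic for multiple variables, assuming the same vector notation shown above for multiple compute functions:

pivot(d, mean, c(Price, Volume), by=Company)   # mean of Price and of Volume for each Company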

Color Scales

more examples of color scales

Generate color scales with getColors(). The default output of getColors() is a color spectrum of 12 hcl colors presented in the order in which they are assigned to discrete levels of a categorical variable. For clarity in the following function call, the default value of the pal or palette parameter is explicitly set to its name, "hues".

getColors("hues")

#> 
#>       h    hex      r    g    b
#> -------------------------------
#>  1   240 #4398D0   67  152  208 
#>  2    60 #B28B2A  178  139   42 
#>  3   120 #5FA140   95  161   64 
#>  4     0 #D57388  213  115  136 
#>  5   275 #9A84D6  154  132  214 
#>  6   180 #00A898    0  168  152 
#>  7    30 #C97E5B  201  126   91 
#>  8    90 #909711  144  151   17 
#>  9   210 #00A3BA    0  163  186 
#> 10   330 #D26FAF  210  111  175 
#> 11   150 #00A76F    0  167  111 
#> 12   300 #BD76CB  189  118  203

lessR provides pre-defined sequential color scales across the range of hues around the color wheel in 30 degree increments: "reds", "rusts", "browns", "olives", "greens", "emeralds", "turquoises", "aquas", "blues", "purples", "violets", "magentas", and "grays".

getColors("blues")

#> 
#>       h    hex      r    g    b
#> -------------------------------
#>  1   240 #CCECFFFF  204  236  255 
#>  2   240 #B4D8FCFF  180  216  252 
#>  3   240 #9DC5EBFF  157  197  235 
#>  4   240 #84B2DBFF  132  178  219 
#>  5   240 #6B9FCCFF  107  159  204 
#>  6   240 #4F8DBCFF   79  141  188 
#>  7   240 #2D7CAEFF   45  124  174 
#>  8   240 #006BA0FF    0  107  160 
#>  9   240 #005B93FF    0   91  147 
#> 10   240 #004C8AFF    0   76  138 
#> 11   240 #004087FF    0   64  135 
#> 12   240 #0040A9FF    0   64  169

To create a divergent color palette, specify a beginning and an ending color palette, which provide values for the parameters pal and end_pal, where pal abbreviates palette. Here, generate colors from rust to blue.

getColors("rusts", "blues")

#> 
#>   color    r    g    b
#> ----------------------
#> #70370FFF  112   55   15 
#> #7D4A32FF  125   74   50 
#> #8B5F4DFF  139   95   77 
#> #997568FF  153  117  104 
#> #A88E86FF  168  142  134 
#> #B9ADAAFF  185  173  170 
#> #AAB0B8FF  170  176  184 
#> #8595A7FF  133  149  167 
#> #658099FF  101  128  153 
#> #466D8DFF   70  109  141 
#> #1D5C83FF   29   92  131 
#> #004D7AFF    0   77  122

Utilities

examples of utility functions

lessR provides several utility functions for recoding, reshaping, and rescaling data.