Free Webinars

Matched Pair Tests for Data With Nondetects
April 21, 2020
You no longer need to register for this webinar. A recorded video will be posted on our Training Center
https://practicalstats.teachable.com
and can be streamed from there at no charge, any time beginning on the above date.

Webinars on our Online Training Center
Stream our 'technical' webinars for a taste of what is contained in our online courses!
The webinars are listed there as "free courses". Each takes about 60 minutes.
Q&A files (as Excel documents) and slides (as PDFs) can also be downloaded from the Training Center.

Webinars connected to our Nondetects And Data Analysis course:
1. Intro to Nondetects and Data Analysis
An introduction to data analysis for variables with nondetects.
Materials from the webinar (including slides and info) can also be downloaded as a zip file.

2. Fitting Distributions to Data with Nondetects
How to decide which distribution best fits your data. Making the most of small datasets with nondetects.

3. Testing Groups of Data With Multiple DLs
Analogs to Analysis of Variance and the Kruskal-Wallis test for data with nondetects at multiple detection limits. Also, how to perform multiple comparison tests with nondetects.

4. The Mystery of Nondetects: How Censored Data Methods Work
Substituting a constant times the reporting/detection limit (for example, 1/2 DL) introduces bias into estimates of the mean, standard deviation and upper confidence limits. The better alternative is to use methods designed for censored data. How these methods work is not widely understood in the environmental science community. The most frequent question I am asked about them is "But what number do I put in for the nondetects when I use them?" The answer is: you don't. The reasons why, and how these methods work, will be presented in this webinar.
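To illustrate the idea, here is a minimal sketch of maximum likelihood estimation for left-censored data, assuming a normal distribution. The data values are invented, and the crude grid search stands in for the proper optimizers that real censored-data software uses; the point is that each nondetect enters the likelihood as the probability of falling below its detection limit, so no substituted number is ever "put in" for it.

```python
import math

# Hypothetical dataset: five detected concentrations plus three
# nondetects known only to lie below their detection limits.
detects = [3.2, 4.1, 5.6, 7.8, 9.0]
nd_limits = [1.0, 1.0, 2.0]   # <1, <1, <2

def norm_logpdf(x, mu, s):
    return -0.5 * math.log(2 * math.pi * s * s) - (x - mu) ** 2 / (2 * s * s)

def norm_logcdf(x, mu, s):
    # log of the normal CDF, via erfc for numerical stability in the tail
    z = (x - mu) / s
    return math.log(0.5 * math.erfc(-z / math.sqrt(2)))

def loglik(mu, s):
    # Detected values contribute their density; each nondetect contributes
    # the probability of being anywhere below its detection limit.
    return (sum(norm_logpdf(x, mu, s) for x in detects)
            + sum(norm_logcdf(dl, mu, s) for dl in nd_limits))

# Crude grid search for the maximum likelihood estimates of mu and sigma.
best = max(((loglik(m / 10, s / 10), m / 10, s / 10)
            for m in range(10, 100) for s in range(5, 80)),
           key=lambda t: t[0])
_, mu_hat, sigma_hat = best
print(mu_hat, sigma_hat)
```

Note that the fitted mean comes out well below the average of the detected values alone, because the three nondetects pull the distribution downward without any fabricated numbers.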

5. Correlation and Regression for Data with Nondetects
You can do it all, without substituting fabricated values.

6. Trend Analysis for Data with Nondetects
Are concentrations changing over time? Can you tell, even when multiple detection limits were used?
Parametric and nonparametric methods for data with nondetects, including the Seasonal Kendall test for trend.
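The nonparametric tests in this family are built on Kendall's S statistic: over all pairs of observations ordered in time, count the later value as +1 if it is larger and -1 if it is smaller. A minimal sketch (illustrative values only; the Seasonal Kendall test computes S within each season and sums the results, and handling multiple detection limits takes additional care beyond this toy version):

```python
def mann_kendall_s(values):
    # S = (number of later values that are larger) minus (number that are
    # smaller), over all pairs. Tied pairs contribute zero, which is how
    # nondetects below a single common detection limit enter the count.
    s = 0
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            s += (values[j] > values[i]) - (values[j] < values[i])
    return s

print(mann_kendall_s([1.0, 2.0, 2.0, 3.0, 5.0]))  # → 9 (strong upward trend)
```

A large positive S suggests an upward trend, a large negative S a downward one; the test's p-value comes from comparing S to its distribution under "no trend."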

7. Incorporating Greater Than and Less Than Values in Data Analysis
One way of representing censored data in a database is the "interval endpoints" format. Two columns are used: the first holds the low end of possible values for the variable (often 0 for censored chemical data), and the second holds the high end (often the detection limit). One benefit of storing data this way is that 'greater thans' can be stored in the same two columns. Most censored-data methods can treat both 'less thans' and 'greater thans' as interval-censored data, computing everything from means to hypothesis tests and regression. This webinar will give you examples of how to do these types of analyses.
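The storage scheme can be sketched in a few lines. The values and the helper function below are invented for illustration; the idea is simply that every observation, censored or not, is a (low, high) interval containing the true value:

```python
import math

# Interval-endpoints format: each observation is (low, high), and the
# true value lies somewhere in that interval.
samples = [
    (0.0, 0.5),        # a 'less than': <0.5
    (1.2, 1.2),        # an exactly detected value of 1.2
    (0.0, 1.0),        # a 'less than': <1.0
    (3.0, math.inf),   # a 'greater than': >3.0
]

def censor_type(low, high):
    # Classify a row of the two-column format (hypothetical helper).
    if low == high:
        return "detected"
    if math.isinf(high):
        return "right-censored"   # a 'greater than'
    if low == 0.0:
        return "left-censored"    # a 'less than'
    return "interval-censored"

types = [censor_type(lo, hi) for lo, hi in samples]
print(types)
```

An exactly detected value is just an interval of zero width, which is why the same two columns, and the same likelihood machinery, accommodate all of these cases at once.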

Webinars related to our Applied Environmental Statistics courses:
8. Intro to R
Break down the barrier of how to get started using R!
R is one of the most widely used statistics software packages in the world. Its versatility as a programming language and its interconnectivity with email, web page generation and other computer processes make it a bit daunting for people just starting to use it for data analysis. It need not be that way. This webinar introduces you to R software and its use for data analysis. You'll learn how to type commands, install and load packages, and use the pull-down menus of R Commander (Rcmdr) to compute confidence intervals and a test for whether the mean exceeds a numerical standard.

9. Never Worry About A Normal Distribution Again!
Permutation Tests and Bootstrapping
Traditional parametric tests for differences in means (Analysis of Variance, t-tests and more), as well as t-intervals, require the data within groups to follow a normal distribution. If they don't, p-values may be inflated so that differences in means go undetected, and confidence intervals are often too wide. Permutation tests and bootstrap intervals avoid the normality assumption, returning accurate p-values and interval widths while remaining distribution-free. These methods are widely used across applied statistics, including environmental science, but have not been used enough in water quality, air quality and soils applications. This webinar describes how these methods work, where you can find them, and what their benefits are over older traditional methods.
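The core idea of a permutation test can be sketched in a few lines: if the groups really don't differ, the group labels are arbitrary, so recompute the statistic under every relabeling and see how extreme the observed one is. The data below are invented, and with larger samples you would draw random permutations rather than enumerate them all:

```python
from itertools import combinations

group_a = [10, 11, 12, 13, 14]   # hypothetical example data
group_b = [20, 21, 22, 23, 24]
pooled = group_a + group_b
n = len(group_a)
observed = abs(sum(group_a) / n - sum(group_b) / n)

# Enumerate every way to split the pooled data into two groups of five,
# counting how often the relabeled difference is at least as extreme.
extreme = 0
total = 0
for a_idx in combinations(range(len(pooled)), n):
    a = [pooled[i] for i in a_idx]
    b = [pooled[i] for i in range(len(pooled)) if i not in a_idx]
    total += 1
    if abs(sum(a) / n - sum(b) / n) >= observed - 1e-12:
        extreme += 1

p_value = extreme / total   # no normality assumption anywhere
print(p_value)
```

Here only 2 of the 252 possible relabelings are as extreme as the observed split, giving an exact p-value of about 0.008 with no distributional assumption at all.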

10. Which of These Things is Not Like the Others?
How Multiple Comparison Tests Work
Multiple comparison tests determine which groups differ from which others. Why are they needed following an ANOVA or Kruskal-Wallis test? How do they work? There are familiar versions such as Tukey's test, and a newer approach called the False Discovery Rate. Learn why the False Discovery Rate is a method you should probably be using.
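As a sketch of the mechanics, the Benjamini-Hochberg False Discovery Rate procedure sorts the p-values from a set of comparisons and compares each to an increasing threshold, instead of one fixed Bonferroni-style cutoff. The p-values below are invented for illustration:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at false discovery rate alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            cutoff = rank
    # ... and reject the hypotheses with the cutoff smallest p-values.
    return {order[k] for k in range(cutoff)}

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.600, 1.000]
print(sorted(benjamini_hochberg(pvals)))   # → [0, 1]
```

A Bonferroni correction at the same overall level (0.05/8 = 0.00625) would reject only the first comparison; the step-up thresholds give the False Discovery Rate approach more power while still controlling the expected proportion of false findings.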

Less technical videos on environmental statistics are available on our Videos page.
Free to stream and watch.

1. Forty Years of Water Quality Statistics: What's Changed, What Hasn't?
An overview of how methods for interpreting water quality data changed from 1980 to 2019. Some folks are still using methods from the era of black rotary-dial phones. You've upgraded your phone. How about updating your statistical methods?

2. How Many Observations Do I Need?
One of the most common questions I am asked is “How many observations do I need to compute a confidence interval or find a difference in a hypothesis test?” To answer it, you first need to know quite a bit of other information. This webinar goes over what is needed for two-group parametric and nonparametric hypothesis tests (the t-test and the Wilcoxon rank-sum test). More information is provided in the new Second Edition of Statistical Methods in Water Resources [published by the US Geological Survey].
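For the parametric two-group case, one common normal-approximation formula shows exactly what you must supply up front: the smallest difference worth detecting, an estimate of the standard deviation, the significance level, and the desired power. A sketch with invented numbers (this is the standard textbook approximation, not the full treatment the webinar covers):

```python
import math
from statistics import NormalDist

def two_sample_n(delta, sigma, alpha=0.05, power=0.90):
    """Approximate observations needed per group for a two-sided test.

    delta: smallest difference in means worth detecting
    sigma: assumed standard deviation within each group
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # from the significance level
    z_beta = z.inv_cdf(power)            # from the desired power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# e.g. detect a difference of 1.0 when the standard deviation is 1.5:
print(two_sample_n(delta=1.0, sigma=1.5))   # → 48 per group
```

Because the sample size scales with (sigma/delta) squared, halving the difference you want to detect roughly quadruples the observations needed, which is why the question has no answer until those inputs are pinned down.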

3. Seven Perilous Errors in Environmental Statistics
Seven common errors to avoid!
Seven common errors in statistical analysis by environmental scientists all stem from an outdated understanding of statistics. I'll define the seven 'perilous errors' and show how each can be avoided. They revolve around old ideas about hypothesis tests, p-values, using logarithms of data, judging what makes a good regression equation, evaluating outliers, and dealing with nondetects. Understanding why each error is perilous can save the scientist from publishing incorrect statements, using inefficient analysis methods, and wasting scarce financial resources. These errors have persisted through the years; break the cycle and step into the 21st century.

Plus VIDS:
short videos (15 minutes or less) on practical topics