Statistics Dictionary
Select a term from the list below to see its definition, plus links to related web pages.

Absolute Value
Accuracy
Addition Rule
Alpha
Alternative Hypothesis
ANOVA
Back-to-Back Stemplots
Balanced Design
Bar Chart
Bartlett's Test
Bayes Rule
Bayes Theorem
Bias
Biased Estimate
Bimodal Distribution
Binomial Distribution
Binomial Experiment
Binomial Probability
Binomial Random Variable
Bivariate Data
Blinding
Blocking
Blocking Variable
Bonferroni Correction
Boxplot
Cartesian Plane
Categorical Variable
Census
Central Limit Theorem
Chi-Square Distribution
Chi-Square Goodness of Fit Test
Chi-Square Statistic
Chi-Square Test for Homogeneity
Chi-Square Test for Independence
Cluster
Cluster Sampling
Coefficient of Determination
Coefficient of Multiple Determination
Column Vector
Combination
Comparisons
Complement
Completely Randomized Design
Conditional Distribution
Conditional Frequency
Conditional Probability
Confidence Interval
Confidence Level
Confounding
Contingency Table
Continuous Probability Distribution
Continuous Variable
Control Group
Convenience Sample
Correlation
Covariance
Critical Parameter Value
Critical Value
Cumulative Frequency
Cumulative Frequency Plot
Cumulative Probability
Decision Rule
Degrees of Freedom
Dependent Variable
Determinant
Deviation Score
Diagonal Matrix
Discrete Probability Distribution
Discrete Variable
Discriminant Analysis
Disjoint
Disproportionate Stratification
Dotplot
Double Bar Chart
Double Blinding
Dummy Variable
E Notation
Echelon Matrix
Effect Size
Element
Elementary Matrix Operations
Elementary Operators
Empty Set
Epsilon
Error Rate Familywise
Error Rate per Comparison
Estimation
Estimator
Event
Event Multiple
Expected Value
Experiment
Experimental Design
Extraneous Variable
F Distribution
F Statistic
Factor
Factorial
Factorial Experiment
Finite Population Correction
Fixed Effects Model
Fixed Factor
Frequency Count
Frequency Table
Full Rank
Gaps in Graphs
Geometric Distribution
Geometric Probability
Hartley's Fmax Test
Heterogeneous
Histogram
Homogeneous
Hypergeometric Distribution
Hypergeometric Experiment
Hypergeometric Probability
Hypergeometric Random Variable
Hypothesis Test
Identity Matrix
Independent
Independent Groups Design
Independent Variable
Influential Point
Inner Product
Interaction Plot
Interactions
Interquartile Range
Intersection
Interval Estimate
Interval Scale
Inverse
IQR
Joint Frequency
Joint Probability Distribution
Law of Large Numbers
Level
Line
Linear Combination of Vectors
Linear Dependence of Vectors
Linear Transformation
Logarithm
Lurking Variable
Margin of Error
Marginal Distribution
Marginal Frequency
Marginal Mean
Matched Pairs Design
Matched-Pairs t-Test
Matrix
Matrix Dimension
Matrix Inverse
Matrix Order
Matrix Rank
Matrix Transpose
Mauchly's Sphericity Test
Mean
Mean Square
Measurement Scales
Median
Mixed Model
Mode
Multicollinearity
Multinomial Distribution
Multinomial Experiment
Multiple Regression
Multiplication Rule
Multistage Sampling
Mutually Exclusive
Natural Logarithm
Negative Binomial Distribution
Negative Binomial Experiment
Negative Binomial Probability
Negative Binomial Random Variable
Neyman Allocation
Nominal Scale
Nonlinear Transformation
Non-Probability Sampling
Nonresponse Bias
Normal Distribution
Normal Random Variable
Null Hypothesis
Null Set
Observational Study
One-Sample t-Test
One-Sample z-Test
One-stage Sampling
One-Tailed Test
One-Way ANOVA
One-Way Table
Optimum Allocation
Ordinal Scale
Orthogonal Comparisons
Outer Product
Outlier
Paired Data
Parallel Boxplots
Parameter
Pearson Product-Moment Correlation
Percentage
Percentile
Permutation
Placebo
Planned Comparisons
Point Estimate
Poisson Distribution
Poisson Experiment
Poisson Probability
Poisson Random Variable
Population
Post Hoc Comparisons
Power
Precision
Probability
Probability Density Function
Probability Distribution
Probability Sampling
Proportion
Proportionate Stratification
P-Value
Qualitative Variable
Quantitative Variable
Quartile
Random Effects Model
Random Factor
Random Number Table
Random Numbers
Random Sampling
Random Variable
Randomization
Randomized Block Design
Randomized Block Experiment
Range
Ratio Scale
Reduced Row Echelon Form
Region of Acceptance
Region of Rejection
Regression
Relative Frequency
Relative Frequency Table
Repeated Measures Design
Replication
Representative
Residual
Residual Plot
Response Bias
Row Echelon Form
Row Vector
Sample
Sample Design
Sample Point
Sample Space
Sample Survey
Sampling
Sampling Distribution
Sampling Error
Sampling Fraction
Sampling Method
Sampling With Replacement
Sampling Without Replacement
Scalar Matrix
Scalar Multiple
Scatterplot
Scheffe Test
Selection Bias
Set
Significance Level
Simple Random Sampling
Simple Regression
Singular Matrix
Skewness
Slope
Sphericity
Standard Deviation
Standard Error
Standard Normal Distribution
Standard Score
Statistic
Statistical Experiment
Statistical Hypothesis
Statistics
Stemplot
Strata
Stratified Sampling
Subset
Subtraction Rule
Sum Vector
Sums of Squares
Symmetric Matrix
Symmetry
Systematic Sampling
T Distribution
T Score
T Statistic
Test Statistic
Transpose
Treatment
t-Test
Two-Sample t-Test
Two-stage Sampling
Two-Tailed Test
Two-Way Table
Type I Error
Type II Error
Unbiased Estimate
Undercoverage
Uniform Distribution
Unimodal Distribution
Union
Univariate Data
Variable
Variance
Variance Inflation Factor
Vector Inner Product
Vector Outer Product
Vectors
Venn Diagram
Voluntary Response Bias
Voluntary Sample
Y Intercept
z-score

One-Sample z-Test
A one-sample z-test is used to test whether a population mean is
significantly different from some hypothesized value.

Here is how to use the test.

Define hypotheses.
The table below shows three sets of null and alternative hypotheses.
Each makes a statement about how the true population mean μ is related to
some hypothesized value M. (In the table, the symbol ≠ means "not equal to".)

Set   Null hypothesis   Alternative hypothesis   Number of tails
1     μ = M             μ ≠ M                    2
2     μ > M             μ < M                    1
3     μ < M             μ > M                    1

Specify significance level. Often, researchers choose significance levels
equal to 0.01, 0.05, or 0.10; but any value between 0 and 1 can be used.
Compute test statistic. The test statistic is a z-score (z) defined by
the following equation.
z = (x - M) / (σ / sqrt(n))

where x is the observed sample mean, M is the hypothesized population mean
(from the null hypothesis), and σ is the standard deviation of the population.
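As a sketch, the z-score can be computed directly from these quantities; the numbers below are hypothetical, chosen only to illustrate the formula.

```python
import math

def z_statistic(x_bar, M, sigma, n):
    """z = (x_bar - M) / (sigma / sqrt(n))."""
    return (x_bar - M) / (sigma / math.sqrt(n))

# Hypothetical data: sample mean 105, hypothesized mean 100,
# population standard deviation 15, sample size 36.
z = z_statistic(105, 100, 15, 36)
print(z)  # 2.0
```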
Compute P-value. The P-value is the probability of observing a sample
statistic at least as extreme as the test statistic, assuming the null
hypothesis is true. Since the test statistic is a z-score, use the
Normal Distribution Calculator to find the probability associated with
the z-score.
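If the calculator is not at hand, the same tail probability can be computed from the standard normal CDF, which Python's standard library exposes through the error function. This is a sketch; the `tails` parameter (an assumption of this example, not part of the original text) selects a one- or two-tailed test.

```python
import math

def normal_cdf(z):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_value(z, tails=2):
    """Tail probability for a z-score, one- or two-tailed."""
    return tails * (1.0 - normal_cdf(abs(z)))

print(round(p_value(1.96), 3))  # 0.05
```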
Evaluate null hypothesis. The evaluation involves comparing the P-value to
the significance level, and rejecting the null hypothesis when the P-value
is less than the significance level.
The one-sample z-test can be used when the population is normally
distributed and the population standard deviation is known. With a large
sample, the Central Limit Theorem makes the test approximately valid even
when the population is not normal.
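Putting the steps together, the whole procedure can be sketched as a single function; the data and the 0.05 significance level below are hypothetical.

```python
import math

def one_sample_z_test(x_bar, M, sigma, n, alpha=0.05, tails=2):
    """Return (z, P-value, reject?) for a one-sample z-test."""
    # Test statistic: z = (x_bar - M) / (sigma / sqrt(n)).
    z = (x_bar - M) / (sigma / math.sqrt(n))
    # Tail area under the standard normal distribution.
    p = tails * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p, p < alpha

# Hypothetical data: sample mean 105, hypothesized mean 100,
# population standard deviation 15, sample size 36, two-tailed test.
z, p, reject = one_sample_z_test(105, 100, 15, 36)
print(z, round(p, 4), reject)  # 2.0 0.0455 True
```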