Is there a single test one can perform that would be significant if two coefficients in a linear model both differ from 0, but would not be significant if only one of them differs from 0? Yes, one can look at the model summary, but that does not give a single p-value for the null hypothesis that at most one of the coefficients differs from 0. I am aware of methods for the null that both coefficients are 0 (e.g., car::linearHypothesis()), but such a test can be significant when only one coefficient differs from 0, which is not what I'm interested in.
An example is below; I am looking for a method that is significant for y1 but not for y2.
set.seed(1234)
dat = MASS::mvrnorm(n = 500, mu = rep(0, 4),
                    Sigma = matrix(nrow = 4, byrow = TRUE,
                                   c(1, .2, .2, .3,
                                     .2, 1, .2, 0,
                                     .2, .2, 1, 0,
                                     .3, 0, 0, 1))) |> as.data.frame()
colnames(dat) = c("x1", "x2", "y1", "y2")
summary(lm(y1 ~ x1 + x2, data = dat))
Call:
lm(formula = y1 ~ x1 + x2, data = dat)

Residuals:
     Min       1Q   Median       3Q      Max
-2.70450 -0.63760 -0.01951  0.68030  2.58319

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.04704    0.04171   1.128   0.2599
x1           0.12099    0.04271   2.833   0.0048 **
x2           0.25624    0.04254   6.024 3.31e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9324 on 497 degrees of freedom
Multiple R-squared:  0.1052,	Adjusted R-squared:  0.1016
F-statistic: 29.23 on 2 and 497 DF,  p-value: 9.978e-13
summary(lm(y2 ~ x1 + x2, data = dat))

Call:
lm(formula = y2 ~ x1 + x2, data = dat)

Residuals:
    Min      1Q  Median      3Q     Max
-3.4124 -0.5711 -0.0436  0.5844  3.2467

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.036480   0.041036  -0.889    0.374
x1           0.242923   0.042027   5.780 1.32e-08 ***
x2           0.002013   0.041852   0.048    0.962
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9174 on 497 degrees of freedom
Multiple R-squared:  0.06828,	Adjusted R-squared:  0.06454
F-statistic: 18.21 on 2 and 497 DF,  p-value: 2.328e-08
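For what it's worth, one standard approach to this kind of "both non-zero" alternative is an intersection-union test: reject the null "at least one of the two coefficients is 0" only if both individual t-tests reject, which amounts to using the maximum of the two per-coefficient p-values. This is a sketch of that idea (the iut_p helper is my own illustration, not an existing package function), not necessarily the most powerful such test:

```r
# Intersection-union test sketch: the combined p-value is the maximum
# of the per-coefficient p-values, so it is small only when BOTH
# coefficients are individually significant. This controls the type-I
# error rate at level alpha under the union null.
iut_p <- function(fit, coefs) {
  p <- summary(fit)$coefficients[coefs, "Pr(>|t|)"]
  max(p)
}

set.seed(1234)
dat <- MASS::mvrnorm(n = 500, mu = rep(0, 4),
                     Sigma = matrix(nrow = 4, byrow = TRUE,
                                    c(1, .2, .2, .3,
                                      .2, 1, .2, 0,
                                      .2, .2, 1, 0,
                                      .3, 0, 0, 1))) |> as.data.frame()
colnames(dat) <- c("x1", "x2", "y1", "y2")

# Using the per-coefficient p-values shown in the summaries above:
iut_p(lm(y1 ~ x1 + x2, data = dat), c("x1", "x2"))  # max(0.0048, 3.31e-09): significant
iut_p(lm(y2 ~ x1 + x2, data = dat), c("x1", "x2"))  # max(1.32e-08, 0.962): not significant
```

With this construction the y1 model rejects at the .05 level while the y2 model does not, which matches the behavior asked for.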