First, it's important to understand that no test can confirm the i.i.d. assumption. All a test can do is reject its null hypothesis (H0) or fail to reject it; failing to reject does not mean that the H0 (e.g., "the data are i.i.d.") is true.
The i.i.d. assumption can only be tested against certain specific alternatives, i.e., any test requires a specification of the way in which i.i.d. might be violated. Also, one wouldn't test "i.i.d." as a whole but rather test independence and identical distributions separately.
Here are some tests:
The runs test tests independence against a dependence alternative under which high values (and likewise low values) are more likely to occur in a row than in a random sequence. The opposite alternative (after a high value a low value is more likely, and vice versa) can be tested in the same way.
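For concreteness, here is a minimal sketch in Python (assuming the observations are stored in time order in an array `x`; the data below are just a placeholder), using the runs test implementation that currently lives in statsmodels' sandbox module:

```python
import numpy as np
from statsmodels.sandbox.stats.runs import runstest_1samp

rng = np.random.default_rng(0)
x = rng.standard_normal(100)  # placeholder data; replace with your series

# Values are dichotomised at the sample mean; the test compares the observed
# number of runs (maximal blocks of values on the same side of the cutoff)
# with its distribution under independence.
z_stat, p_value = runstest_1samp(x, cutoff="mean", correction=True)
print(f"runs test: z = {z_stat:.3f}, p = {p_value:.3f}")
# A small p-value indicates either too few runs (high values cluster together)
# or too many runs (systematic alternation), i.e., evidence against
# independence in the direction of this two-sided alternative.
```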
Another possibility would be to fit time series models and to test whether all the parameters that produce dependence/correlation between observations are zero.
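As a sketch of this approach (again with placeholder data, assuming the observations are in time order), one can fit an AR(1) model, whose single autoregressive coefficient is the only parameter producing dependence, and look at whether it is distinguishable from zero:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
x = rng.standard_normal(200)  # placeholder data

# AR(1) with a constant; the ar.L1 row of the summary gives the estimated
# autoregressive coefficient and its p-value.
res = ARIMA(x, order=(1, 0, 0), trend="c").fit()
print(res.summary())
# An ar.L1 close to zero (and not significant) is consistent with, but does
# not prove, independence; it only addresses lag-1 autocorrelation, not
# dependence in general.
```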
Note that both of these approaches test dependence patterns that manifest themselves through the time order of the observations. If you instead hypothesise that certain given groups of observations are positively dependent, you could fit a random-effects or mixed-effects model and test whether the random effect (which models the dependence) is actually constant.
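A sketch of this, assuming the grouping is given in a column `group` of a data frame with response `y` (placeholder names and placeholder data): fit a random-intercept model and compare it to the plain i.i.d. model by a likelihood ratio, remembering that the variance parameter sits on the boundary of its space under the null, so the usual chi-squared p-value is halved.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(2)
groups = np.repeat(np.arange(20), 5)
df = pd.DataFrame({
    "y": rng.standard_normal(100) + 0.8 * rng.standard_normal(20)[groups],
    "group": groups,
})  # placeholder data with a genuine group effect built in

ols = smf.ols("y ~ 1", data=df).fit()
mixed = smf.mixedlm("y ~ 1", data=df, groups=df["group"]).fit(reml=False)

# LR statistic for H0: random-intercept variance = 0, i.e., observations
# within a group are no more alike than observations across groups.
lr = 2 * (mixed.llf - ols.llf)
p = 0.5 * stats.chi2.sf(lr, df=1)  # boundary correction: halve the chi2(1) p
print(f"LR = {lr:.3f}, approximate p = {p:.4f}")
```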
Note also that general dependence patterns can be so complex that they cannot be ruled out by any amount of data, and even some quite simple dependence structures cannot be identified from the data. You always need a specific idea of how the dependence is supposed to work.
The situation with "the other i", i.e., the "identical distribution" part, is similar. You may specifically fit a regression model in which your samples are modelled as functions of some covariates (generating different distributions for the different $X_i$ with different covariate values), and then test whether the regression (slope) coefficient is zero (i.e., whether only a constant plus an i.i.d. error term remains). There are also tests for heteroscedasticity, i.e., variance that grows or shrinks over time or depends on some covariate, which is obviously also a specific violation of the "identity" of distributions (googling finds more heteroscedasticity tests than those mentioned in the link). As in the independence case, general structures of non-identity can be so flexible that they cannot be detected from the data (to give a rather bizarre example, you may have a model that says that whatever is observed at a certain time point had to be observed there with probability 1, and this can never be rejected).
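Here is a sketch of both checks, assuming a response `y` and a single covariate `t` (placeholder names; `t` could simply be the time order): an OLS fit with a test of whether the slope is zero, and a Breusch-Pagan test for variance depending on the covariate.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(3)
t = np.arange(100, dtype=float)
y = rng.standard_normal(100)  # placeholder data

X = sm.add_constant(t)
fit = sm.OLS(y, X).fit()
print(f"slope p-value: {fit.pvalues[1]:.3f}")      # H0: no trend in the mean

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(fit.resid, X)
print(f"Breusch-Pagan p-value: {lm_pvalue:.3f}")   # H0: constant error variance
# Both are tests against specific violations of "identically distributed":
# a mean that drifts with the covariate, and a variance that does so.
```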
Unfortunately, if you want to test the i.i.d. assumption in order to make sure that your data truly are (at least approximately) i.i.d., you will be sabotaged by the Misspecification Paradox, which says that misspecification testing of model assumptions will actively violate those assumptions even if they were not violated before. Particularly regarding independence, the annoying fact is that if you have a truly independent sequence, but only apply a method that assumes i.i.d. in case an independence test does not reject independence, then the observations, conditionally on not rejecting independence, are actually dependent! This is because if you use the runs test, say, and the result is borderline significant one observation before the end of the sequence, then under the condition of non-rejection you know that the last observation has to come in such that the result is pulled back to insignificance. This creates dependence between observations, as the last observation cannot vary freely (it needs to ensure that the test doesn't reject).
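A small simulation sketch of what this means in practice (with assumed settings, e.g., $n = 20$ and the runs test as the pre-test): generate truly i.i.d. sequences, keep only those that the runs test does not reject, and count how often the last observation was not free to vary, in the sense that flipping its sign would have turned a retained sequence into a rejected one.

```python
import numpy as np
from statsmodels.sandbox.stats.runs import runstest_1samp

rng = np.random.default_rng(4)
n, nsim, alpha = 20, 20000, 0.05
decisive = retained = 0

for _ in range(nsim):
    x = rng.standard_normal(n)                      # truly i.i.d. sequence
    if runstest_1samp(x, cutoff=0, correction=False)[1] < alpha:
        continue                                    # screened out by the pre-test
    retained += 1
    x_flipped = x.copy()
    x_flipped[-1] = -x_flipped[-1]                  # the "other" sign the last obs could take
    if runstest_1samp(x_flipped, cutoff=0, correction=False)[1] < alpha:
        decisive += 1                               # last obs was pinned down by the screening

print(f"retained {retained} of {nsim} i.i.d. sequences")
print(f"last observation constrained in {decisive / retained:.1%} of them")
# In that fraction of the retained samples, the sign of the final observation
# is determined by the earlier ones (given retention), which is exactly the
# dependence induced by conditioning on a passed independence test.
```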
So testing i.i.d. in order to make sure your observations are fine for a certain method is potentially problematic even as far as it can be done (although arguably failing to detect big and critical violations of i.i.d. may be more harmful than the misspecification paradox).
To some extent assuming i.i.d. is always a judgement call.