The definition of $X$ as a random variable, according to Wikipedia, is as follows:
> Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(E, \mathcal{E})$ a measurable space. Then an $(E, \mathcal{E})$-valued random variable is a function $X\colon \Omega \to E$ which is $(\mathcal{F}, \mathcal{E})$-measurable. The latter means that, for every subset $B\in\mathcal{E}$, its preimage $X^{-1}(B)\in \mathcal{F}$, where $X^{-1}(B) = \{\omega : X(\omega)\in B\}$. This definition enables us to measure any subset $B$ in the target space by looking at its preimage, which by assumption is measurable.
And for real-valued random variables:
> In this case the observation space is the set of real numbers. Recall, $(\Omega, \mathcal{F}, P)$ is the probability space. For a real observation space, the function $X\colon \Omega \rightarrow \mathbb{R}$ is a real-valued random variable if
>
> $$\{ \omega : X(\omega) \le r \} \in \mathcal{F} \qquad \forall r \in \mathbb{R}.$$
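To make the condition concrete, here is a small sketch on a finite sample space (my own toy example, not from the quoted definition): with the full power set as $\mathcal{F}$ every function is measurable, while a coarser $\sigma$-algebra can make the test $\{\omega : X(\omega) \le r\} \in \mathcal{F}$ fail.

```python
from itertools import combinations

# Toy finite sample space with three outcomes.
omega = ['a', 'b', 'c']

def power_set(s):
    # All subsets of s, as frozensets.
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

# Finest sigma-algebra: the full power set.
F_full = power_set(omega)
# A coarser sigma-algebra that cannot separate 'a' from 'b'.
F_coarse = {frozenset(), frozenset(['a', 'b']), frozenset(['c']), frozenset(omega)}

def is_real_rv(X, F):
    # Check {w : X(w) <= r} in F; on a finite space only the
    # finitely many values taken by X matter as thresholds r.
    return all(frozenset(w for w in omega if X(w) <= r) in F
               for r in {X(w) for w in omega})

X = {'a': 1.0, 'b': 2.0, 'c': 3.0}.get  # X separates 'a' from 'b'
print(is_real_rv(X, F_full))    # True
print(is_real_rv(X, F_coarse))  # False: {w : X(w) <= 1} = {'a'} is not in F_coarse
```

So measurability is a property of the function $X$ *relative to* $\mathcal{F}$, not of the values $X$ takes.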
Now, in statistics and related fields, random variables are introduced by writing $X \sim p(x)$, where $p(x)$ is a probability distribution. My question: if you say that $X \sim p(x)$ and $Y \sim p(x)$, how can these represent two different random variables (e.g. two different standard normal random variables) when both are sampled from the same $p(x)$? That is, how should this be translated into the formal measure-theoretic definition in a way that differentiates between the two?
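To illustrate what I mean, here is a minimal finite example (my own construction): two *distinct* functions on the same sample space that nonetheless have the same distribution.

```python
import itertools

# Sample space for two independent fair coin flips, with the uniform measure.
omega = list(itertools.product([0, 1], [0, 1]))  # (first flip, second flip)
P = {w: 0.25 for w in omega}

# Two DIFFERENT functions on the same sample space...
X = lambda w: w[0]  # X reads the first flip
Y = lambda w: w[1]  # Y reads the second flip

# ...with the SAME distribution (law): P(Z = k) for k in {0, 1}.
def law(Z):
    return {k: sum(P[w] for w in omega if Z(w) == k) for k in (0, 1)}

print(law(X))  # {0: 0.5, 1: 0.5}
print(law(Y))  # {0: 0.5, 1: 0.5}

# Yet X and Y are distinct random variables: they disagree on some outcomes.
print([w for w in omega if X(w) != Y(w)])  # [(0, 1), (1, 0)]
```

So $X \sim p(x)$ seems to pin down only the law of $X$, not the function $X\colon \Omega \to E$ itself, and I would like to understand how the formal definition accounts for this.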