The full answer is more complicated than what dasblinkenlight suggests.
Since Java 5, the data type char no longer represents a character or Unicode code point, but a UTF-16 code unit, which may be a complete character or only part of one (half of a surrogate pair). A char value is in reality just a 16-bit unsigned integer in the range 0 to 65535 and is automatically converted to an int when used as an array index, just like the other numeric data types such as short or byte. If you really want a Unicode code point rather than a code unit, use the method codePointAt(int index) instead of charAt(int index). A Unicode code point can be anywhere in the range 0 to 1114111 (0x10FFFF).
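A small illustration of the difference (the string and the G clef character U+1D11E are just an example I picked because it lies outside the Basic Multilingual Plane and is therefore stored as a surrogate pair):

```java
public class CodePointDemo {
    public static void main(String[] args) {
        // "𝄞" (MUSICAL SYMBOL G CLEF, U+1D11E) needs two char values (a surrogate pair).
        String s = "a𝄞b";

        System.out.println(s.length());        // 4  -> counts UTF-16 code units, not characters
        System.out.println((int) s.charAt(1)); // 55348 (0xD834) -> only the high surrogate
        System.out.println(s.codePointAt(1));  // 119070 (0x1D11E) -> the full Unicode code point
        System.out.println(s.codePointCount(0, s.length())); // 3 -> actual number of code points
    }
}
```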
How the methods charAt and codePointAt work internally is implementation-specific. It is often incorrectly claimed that a String is just a wrapper around an array of chars, but the internal implementation of the String class is not mandated by the language or the API specification. Since Java 6, the Oracle VM has used different optimization strategies to save memory and does not always back a String with a plain char array.