
This seems to be a fairly common issue, but I haven't yet been able to find a solution, perhaps because it comes in so many flavors. Here it is, though. I'm trying to read some comma-delimited files (occasionally the delimiters can be a little more exotic than commas, but commas will suffice for now).

The files are supposed to be standardized across the industry, but lately we've seen files coming in with many different character sets. I'd like to be able to set up a BufferedReader that compensates for this.

What is a pretty standard way of doing this and detecting whether it was successful or not?

My first thought is to loop through character sets, simple to complex, until I can read the file without an exception. Not exactly ideal, though...
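Something like this is what I had in mind; just a rough sketch, and the candidate list, class, and method names are purely illustrative:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CharsetGuesser {

    // Candidate encodings, ordered roughly simple -> complex.
    private static final List<Charset> CANDIDATES = List.of(
            StandardCharsets.US_ASCII,
            StandardCharsets.UTF_8,
            StandardCharsets.UTF_16LE,
            StandardCharsets.UTF_16BE);

    /** Returns the first candidate that decodes the whole file without error. */
    static Charset guess(Path file) throws IOException {
        byte[] bytes = Files.readAllBytes(file);
        for (Charset cs : CANDIDATES) {
            CharsetDecoder decoder = cs.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)       // throw instead of silently replacing
                    .onUnmappableCharacter(CodingErrorAction.REPORT);
            try {
                decoder.decode(ByteBuffer.wrap(bytes));
                return cs;                                            // decoded cleanly
            } catch (CharacterCodingException e) {
                // fall through and try the next candidate
            }
        }
        throw new IOException("No candidate charset decodes " + file);
    }
}
```

The obvious weakness (and why it's not ideal) is that single-byte encodings accept almost any byte sequence, so "no exception" is weak evidence that the guess is actually right.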

Thanks for your attention.

5 Comments

  • Detecting encodings is a very hard problem, and for some encodings the only way to know one of them is right is through contextual analysis (which is a very non-trivial task). If you know exactly which encodings you need to support (e.g. UTF-16, UTF-8, ISO-8859-1), it may become easier, but it depends on what those encodings are. Commented Feb 7, 2012 at 18:17
  • Not getting an exception does not necessarily mean the read was successful. Commented Feb 7, 2012 at 18:17
  • Regarding the industry standards you mentioned: that is the one thing you should be pushing to enforce more strictly. You can pass -Dfile.encoding as a JVM arg to support only one particular encoding. Commented Feb 7, 2012 at 18:20
  • In the industry I'm in, I only have power over the standards when I create data. It sucks, but it's the way it is; I can't do anything to enforce the standards. In an ideal world this would be different. --- Anyhow, programs like Notepad++ (which isn't Java, as far as I know) seem to do a better job than I can. I'd like to support ANSI, UTF-8, and UTF-16/UCS-2 (big and little endian). Anything outside of that is beyond my current scope. Commented Feb 7, 2012 at 18:22
  • I would then suggest that you run the native2ascii tool on all the files before processing them; then you won't have to worry about this issue in Java IO. Commented Feb 7, 2012 at 18:49

1 Answer


Mozilla's universalchardet is supposed to be the most efficient detector out there. juniversalchardet is the Java port of it, and there is at least one other port. See this SO question for more information: Character Encoding Detection Algorithm
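A minimal sketch of driving the detector over a file, based on juniversalchardet's documented usage of org.mozilla.universalchardet.UniversalDetector (check the exact constructor and method signatures against the version you pull in):

```java
import java.io.FileInputStream;
import java.io.IOException;
import org.mozilla.universalchardet.UniversalDetector;

public class EncodingSniffer {

    /** Returns the detected charset name (e.g. "UTF-8"), or null if the detector could not decide. */
    static String detect(String path) throws IOException {
        UniversalDetector detector = new UniversalDetector(null);
        try (FileInputStream in = new FileInputStream(path)) {
            byte[] buf = new byte[4096];
            int nread;
            // Feed bytes to the detector until it is confident or we hit end of file.
            while ((nread = in.read(buf)) > 0 && !detector.isDone()) {
                detector.handleData(buf, 0, nread);
            }
        }
        detector.dataEnd();                        // signal that the input is finished
        String encoding = detector.getDetectedCharset();
        detector.reset();                          // make the detector reusable
        return encoding;
    }
}
```

Once you have a charset name, you can open the file the usual way, e.g. an InputStreamReader constructed with Charset.forName(encoding) and wrapped in a BufferedReader, falling back to some default when detection returns null.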


1 Comment

I see that the license is not Apache. How different is it compared to Apache?
