
FileReader null declarations and appending best practice



I want to optimise my file reader function but am not sure if it is best practice to declare the nulls outside of the try block. Also, is looping and appending chars to a StringBuffer considered bad practice? I would like to use the exception handling here, but maybe it is better to use another structure? Any advice is most welcome, thanks.







public String readFile(){
    File f = null;
    FileReader fr = null;
    StringBuffer content = null;

    try{
        f = new File("c:/test.txt");
        fr = new FileReader(f);

        int c;
        while((c = fr.read()) != -1){
            if(content == null){
                content = new StringBuffer();
            }
            content.append((char)c);
        }

        fr.close();
    }
    catch (Exception e) {
        throw new RuntimeException("An error occured reading your file");
    }

    return content.toString();
}


Comments

  1. Advice:


    1. Indent your code properly. The stuff in your question looks like a dog's breakfast.
    2. You don't need to initialize f inside the try / catch block. The constructor can't throw an exception the way you are using it.
    3. In fact, you don't need to declare f at all. Just inline the new File(...).
    4. In fact, you don't even need to do that. Use the FileReader(String) constructor.
    5. There's no point initializing the StringBuffer inside the loop. The potential performance benefit is small and only applies in the edge case where the file is empty or doesn't exist. In all other cases this is an anti-optimization.
    6. Don't catch Exception. Catch the exceptions that you expect to be thrown and allow all other exceptions to propagate. The unexpected exceptions are going to be due to bugs in your program, and need to be handled differently from the others.
    7. When you catch an exception, don't throw away the evidence. For an unexpected exception, either print / log the exception, its message and its stacktrace, or pass it as the 'cause' of the exception that you throw (see the sketch after this list).
    8. The FileReader should be closed in a finally clause. In your version of the code, the FileReader won't be closed if there is an exception after the object has been created and before the close() call. That will result in a leaked file descriptor and could cause problems later in your application.
    9. Better yet, use the new Java 7 "try with resource" syntax, which takes care of closing the "resource" automatically (see below).
    10. You are reading from the file one character at a time. This is very inefficient. You need to either wrap the Reader in a BufferedReader, or read a large number of characters at a time using (for example) read(char[], int, int).
    11. Use StringBuilder rather than StringBuffer ... unless you need a thread-safe string assembler.
    12. Wrapping exceptions in RuntimeException is bad practice. It makes it difficult for the caller to handle specific exceptions ... if it needs to ... and it even makes printing a decent diagnostic more difficult. (And that assumes that you didn't throw away the original exception like your code does.)
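
    A minimal sketch of what points 6 and 7 look like together, if you do decide to wrap: catch only the expected IOException and pass it along as the 'cause'. The class name is purely illustrative and not part of the original code:

    import java.io.FileReader;
    import java.io.IOException;

    public class ChainedExceptionExample {
        public String readFile() {
            try (FileReader fr = new FileReader("c:/test.txt")) {
                StringBuilder content = new StringBuilder();
                int c;
                while ((c = fr.read()) != -1) {
                    content.append((char) c);
                }
                return content.toString();
            } catch (IOException e) {
                // Keep the evidence: the original exception travels as the cause,
                // so its message and stack trace stay available to the caller.
                throw new RuntimeException("An error occurred reading your file", e);
            }
        }
    }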


    Note: if you follow the advice of point 8 and not 9, you will find that the initialization of fr to null is necessary if you open the file in the try block.
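
    To make that note concrete, here is a sketch of the pre-Java 7 shape from point 8: the reader is opened inside the try block, so fr must start as null, and close() happens in finally. This is an illustration only, not the code from the question:

    import java.io.FileReader;
    import java.io.IOException;

    public class ReadFileFinallyExample {
        public String readFile() throws IOException {
            FileReader fr = null; // must be null-initialized: it is assigned inside try
            try {
                fr = new FileReader("c:/test.txt");
                StringBuilder content = new StringBuilder();
                int c;
                while ((c = fr.read()) != -1) {
                    content.append((char) c);
                }
                return content.toString();
            } finally {
                if (fr != null) {
                    fr.close(); // runs whether or not an exception was thrown
                }
            }
        }
    }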



    Here's how I'd write this:

    public String readFile() throws IOException {
        // Using the Java 7 "try with resource" syntax.
        try (FileReader fr = new FileReader("c:/test.txt")) {
            BufferedReader br = new BufferedReader(fr);
            StringBuilder content = new StringBuilder();
            int c;
            while ((c = br.read()) != -1) {
                content.append((char) c);
            }
            return content.toString();
        }
    }


    A further optimization would be to use File.length() to find out the file size (in bytes) and use that as the initial size of the StringBuilder. However, if the files are typically small, this is likely to make the application slower.
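
    For completeness, a sketch of what point 10's bulk read combined with that File.length() pre-sizing might look like. The 8192-char buffer is an arbitrary choice, and the length is in bytes, so it is only an estimate of the character count:

    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;

    public class BulkReadExample {
        public String readFile() throws IOException {
            File f = new File("c:/test.txt");
            // Pre-size the builder from the file length (an estimate for
            // multi-byte encodings; may be slower for very small files).
            StringBuilder content = new StringBuilder((int) f.length());
            try (FileReader fr = new FileReader(f)) {
                char[] buf = new char[8192];
                int n;
                // read(char[], int, int) fills up to buf.length chars per call.
                while ((n = fr.read(buf, 0, buf.length)) != -1) {
                    content.append(buf, 0, n);
                }
            }
            return content.toString();
        }
    }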

  2. public String readFile() {
        File f = new File("/Users/Guest/Documents/workspace/Project/src/test.txt");
        FileReader fr = null;
        BufferedReader br = null;
        StringBuilder content = new StringBuilder();
        try {
            fr = new FileReader(f);
            br = new BufferedReader(fr);
            //int c;
            //while ((c = fr.read()) != -1) {
            //    content.append((char) c);
            //}
            String line = null;
            while ((line = br.readLine()) != null) {
                // note: readLine() strips the line separators
                content.append(line);
            }
            fr.close();
            br.close();
        } catch (Exception e) {
            // do something
        }
        return content.toString();
    }


    Use a buffered reader and you'll get a 70%+ improvement, and use StringBuilder instead of StringBuffer unless you need synchronization.

    I ran it on a 10 MB file 50 times and averaged the results.


    There is no need to put anything that does not need exception handling inside the try block.
    There is no need for that if clause: it will be true only once, so you're wasting time checking it for every character.
    There are no runtime exceptions to throw.


    Results, fastest combination to slowest:


    StringBuilder and BufferedReader, line by line: 211 ms
    StringBuffer and BufferedReader, line by line: 213 ms
    StringBuilder and BufferedReader, char by char: 348 ms
    StringBuffer and BufferedReader, char by char: 372 ms
    StringBuilder and FileReader, char by char: 878 ms
    StringBuffer and FileReader, char by char: 935 ms
    String concatenation: extremely slow


    So use StringBuilder plus a BufferedReader and read line by line for best results.
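
    A rough sketch of how such a timing run could be set up; the file path, repetition count, and the particular strategy being timed here are assumptions, not the commenter's actual harness:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadBenchmark {
        public static void main(String[] args) throws IOException {
            String path = "test10mb.txt"; // assumed ~10 MB test file
            int runs = 50;
            long total = 0;
            for (int i = 0; i < runs; i++) {
                long start = System.nanoTime();
                readLineByLine(path);
                total += System.nanoTime() - start;
            }
            System.out.println("average: " + (total / runs) / 1_000_000 + " ms");
        }

        // Times the StringBuilder + BufferedReader, line-by-line strategy.
        static String readLineByLine(String path) throws IOException {
            StringBuilder content = new StringBuilder();
            try (BufferedReader br = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = br.readLine()) != null) {
                    content.append(line);
                }
            }
            return content.toString();
        }
    }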

