Efficiently counting the number of lines of a text file (200MB+)


I have just found out that my script gives me a fatal error:

Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 440 bytes) in C:\process_txt.php on line 109

That line is this:

$lines = count(file($path)) - 1;

So it seems the script has trouble loading the whole file into memory just to count the lines. Is there a more efficient way to do this without running into memory issues?

The text files that I need to count the number of lines for range from 2MB to 500MB, occasionally up to a gigabyte.

Thanks all for any help.

1/29/2010 2:26:10 PM

Accepted Answer

This will use less memory, since it doesn't load the whole file into memory:

$linecount = 0;
$handle = fopen($file, "r");
while (!feof($handle)) {
    $line = fgets($handle);
    $linecount++;
}
fclose($handle);

echo $linecount;

fgets loads a single line into memory (if the second argument $length is omitted it will keep reading from the stream until it reaches the end of the line, which is what we want). This is still unlikely to be as quick as using something other than PHP, if you care about wall time as well as memory usage.

The only danger with this is if any lines are particularly long (what if you encounter a 2GB file without line breaks?). In that case you're better off reading it in chunks and counting end-of-line characters:

$linecount = 0;
$handle = fopen($file, "r");
while (!feof($handle)) {
    $line = fgets($handle, 4096);
    $linecount += substr_count($line, PHP_EOL);
}
fclose($handle);

echo $linecount;
1/29/2010 2:57:40 PM

Using a loop of fgets() calls is a fine solution and the most straightforward to write; however:

  1. even though internally the file is read using a buffer of 8192 bytes, your code still has to call that function for each line.

  2. it's technically possible that a single line may be bigger than the available memory if you're reading a binary file.

This code reads a file in chunks of 8kB each and then counts the number of newlines within that chunk.

function getLines($file)
{
    $f = fopen($file, 'rb');
    $lines = 0;

    while (!feof($f)) {
        $lines += substr_count(fread($f, 8192), "\n");
    }

    fclose($f);
    return $lines;
}
If the average length of each line is at most 4kB, you will already start saving on function calls, and those can add up when you process big files.
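The idea of counting newline characters in fixed-size chunks (rather than splitting into lines) can be sanity-checked outside PHP. A minimal sketch from a shell, with a made-up file path and line count for illustration: stripping everything except newline bytes and counting them gives the same answer as counting lines.

```shell
# Generate a sample file with a known number of lines (hypothetical path).
seq 1 5000 > /tmp/lines.txt

# Count newline bytes directly -- conceptually the same as the
# chunked substr_count() approach above.
tr -dc '\n' < /tmp/lines.txt | wc -c   # prints 5000
```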


I ran a test with a 1GB file; here are the results:

|         | This answer | Dominic's answer | wc -l   |
| Lines   | 3550388     | 3550389          | 3550388 |
| Runtime | 1.055       | 4.297            | 0.587   |

Time is measured in seconds of real (wall-clock) time, as reported by time(1).
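To reproduce this kind of comparison yourself, a quick sketch from a shell (file path and line count are made up for illustration; `time` reports the "real" wall-clock figure on stderr):

```shell
# Create a test file with a known number of lines (hypothetical path).
seq 1 100000 > /tmp/sample.txt

# Time the baseline: wc -l reads from stdin and prints the line count.
time wc -l < /tmp/sample.txt   # prints 100000
```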

Licensed under: CC-BY-SA with attribution
Not affiliated with: Stack Overflow