Thursday, June 20, 2013

Now that we have better data...

...let's see what's happening.

[heap profile graph]
First, the single sample point faked us out, making us think we were using less RAM than we really were. We're actually chewing up nearly ten megabytes: still less than that C program, but a bit disappointing.
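A note for next time: by default the RTS takes a heap sample only every 0.1 seconds, so a fast program can easily leave just one point on the graph. Shortening the interval with the RTS -i flag gives a finer-grained profile. A sketch, assuming a profiling-enabled build and guessing the executable name:

  $ ghc -O2 -prof -fprof-auto -rtsopts pairsum.hs
  $ ./pairsum +RTS -hc -i0.01
  $ hp2ps -c pairsum.hp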

Or maybe more disappointing than we thought. Let's look at the heap-usage summary from a run with the options +RTS -sstderr.
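Getting this summary doesn't need a profiling build; any binary linked with -rtsopts can emit it. The invocation was presumably along these lines (the source and executable names are guesses on my part):

  $ ghc -O2 -rtsopts pairsum.hs
  $ ./pairsum +RTS -sstderr

Here's the output: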

     202,392,800 bytes allocated in the heap
      84,601,736 bytes copied during GC
      12,550,176 bytes maximum residency (8 sample(s))
         220,824 bytes maximum slop
              30 MB total memory in use (0 MB lost due to fragmentation)

                                    Tot time (elapsed)  Avg pause  Max pause
  Gen  0       387 colls,     0 par    0.10s    0.10s     0.0003s    0.0009s
  Gen  1         8 colls,     0 par    0.06s    0.06s     0.0080s    0.0230s

  INIT    time    0.00s  (  0.00s elapsed)
  MUT     time    0.19s  (  0.20s elapsed)
  GC      time    0.16s  (  0.16s elapsed)
  RP      time    0.00s  (  0.00s elapsed)
  PROF    time    0.00s  (  0.00s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    0.36s  (  0.36s elapsed)

  %GC     time      45.2%  (45.3% elapsed)

  Alloc rate    1,028,021,469 bytes per MUT second

  Productivity  54.8% of total user, 55.0% of total elapsed


Whoa. Nearly half our time spent garbage collecting? There's got to be a way to improve that. (And on the graph, nearly two megabytes taken by pairsum?)
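The usual first suspect for heavy GC in a summing loop is lazy accumulation: a plain foldl builds a chain of unevaluated thunks that the collector then has to trace and copy, over and over. A minimal sketch of the idea, assuming pairsum is doing something like a lazy left fold; the pairSum names and types below are illustrative, not the real code from this program:

  import Data.List (foldl')

  -- Lazy: foldl defers every (+), so the running total is a growing
  -- chain of thunks on the heap, exactly the kind of garbage that
  -- keeps the copying collector busy.
  pairSumLazy :: [(Int, Int)] -> Int
  pairSumLazy = foldl (\acc (a, b) -> acc + a + b) 0

  -- Strict: foldl' forces the accumulator at each step, so the total
  -- stays a single evaluated Int and generates almost no garbage.
  pairSumStrict :: [(Int, Int)] -> Int
  pairSumStrict = foldl' (\acc (a, b) -> acc + a + b) 0

  main :: IO ()
  main = print (pairSumStrict (zip [1 .. 1000000] [2 .. 1000001]))

If the code turns out to be strict already, the other cheap experiment is a bigger nursery: running with +RTS -A8m (the default allocation area is 512k in this era of GHC) trades a little memory for far fewer Gen 0 collections.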

