So I decided to create a dummy version of the program: instead of counting the "fair and square" numbers in a given range, it just adds the low and high ends of the range and prints the sum, using the output format the Code Jam problem requires:
numXs :: Integer -> Integer -> Integer
numXs a b = a + b

-- jagd's main, which I believe is set up to read stdin
solve :: Integer -> Integer -> IO ()
solve i num = do
    -- each case is one line holding the low and high ends of the range
    [a, b] <- getLine >>= return . map read . words
    putStrLn $ "Case #" ++ show i ++ ": " ++ (show $ numXs a b)
    if i < num
        then solve (i+1) num
        else return ()

main :: IO ()
main = do
    num <- getLine >>= return . read
    solve 1 num
Compiled it up with -O2 and ran...
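The build step would have been along these lines (the file name dummy.hs is an assumption):

ghc -O2 dummy.hs -o dummy

The timing run: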
time ./dummy <C-large-practice-2.in >woof.dummy
real 0m0.106s
user 0m0.092s
sys 0m0.012s
Here the run-to-run variance is hefty; one run took 0.171 seconds!
The surprising part of the profiling was the memory allocation:
total alloc = 196,851,848 bytes (excludes profiling overheads)
Compare that with the real program:
total alloc = 546,422,264 bytes (excludes profiling overheads)
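Both figures are the "total alloc" line from a GHC profiling report; producing one looks roughly like this (standard GHC profiling flags, file name again assumed):

ghc -O2 -prof -fprof-auto -rtsopts dummy.hs -o dummy
./dummy +RTS -p -RTS <C-large-practice-2.in >woof.dummy

That writes a dummy.prof file whose header carries the total alloc figure.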
The full program is allocating lists of powers of ten, and potentially over half a gigabyte of data in total over the run.
UPDATE: There must be a difference between "total alloc" and memory usage; indeed, GHC's "total alloc" counts every byte allocated over the life of the run, not the amount of live data at any one moment. Running dummy with the options to track memory usage shows memory peaking at maybe 60K, compared with the palindrome program, which peaks quickly at around 4M. Both programs allocate (and free) quite a lot; the difference is that the real one saves up those lists of Ys and powers of ten.
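GHC's runtime can report both numbers directly. A minimal sketch, again assuming the file is named dummy.hs (-rtsopts lets the binary accept RTS flags):

ghc -O2 -rtsopts dummy.hs -o dummy
./dummy +RTS -s -RTS <C-large-practice-2.in >woof.dummy

The -s summary printed on stderr includes "bytes allocated in the heap" (the cumulative figure, akin to the profiler's "total alloc") and "maximum residency" (the peak amount of live data).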