Showing posts from May 5, 2013

So after all that...

What would be our ultimate solution for the problem? Given the previous post, we have two cases:
- m' and n' both have d digits: in this case, if m' is greater than the largest d-digit Y, the result is 0; otherwise, we have to actually generate the d-digit Y values and return the number that are in range.
- m' has d digits and n' has d' digits, d < d': the result then is the sum of the number of Ys between m' and 10^d - 1, the number of k-digit Ys for each k from d + 1 to d' - 1, and the number of Ys between 10^(d'-1) + 1 and n'.

We need smarter memoization of the function that maps d to the list of d-digit Ys and the function that maps d to the count of d-digit Ys, one that doesn't grind out all the results for all values from 1 to d. (If they were recursive in d, I wouldn't mind.) Time to do some research.
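A brute-force rendering of those two cases, with a throwaway generator standing in for the smarter memoized version we're after (the names here are mine, not from the posts):

```haskell
import Data.Char (digitToInt)

-- A "Y": a palindrome whose digits' squares sum to less than 10.
isY :: Integer -> Bool
isY y = s == reverse s && sum [digitToInt c ^ 2 | c <- s] < 10
  where s = show y

-- Grind out all d-digit Ys; this is the part that wants real memoization.
ysWithDigits :: Int -> [Integer]
ysWithDigits d = filter isY [10 ^ (d - 1) .. 10 ^ d - 1]

-- Count the Ys in [m', n'] by the two cases above (m' <= n' assumed).
countYs :: Integer -> Integer -> Int
countYs m' n'
  | d == d'   = length [y | y <- ysWithDigits d, m' <= y && y <= n']
  | otherwise = length [y | y <- ysWithDigits d, y >= m']
              + sum [length (ysWithDigits k) | k <- [d + 1 .. d' - 1]]
              + length [y | y <- ysWithDigits d', y <= n']
  where d  = length (show m')
        d' = length (show n')
```

For instance, countYs 1 100 finds 1, 2, 3, 11, and 22, so it returns 5.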

UPDATE: or maybe I have a misconception about lazy evaluation. If the list elements don't depend on one another save for their positio…


You know, for the Code Jam problem, you just have to know how many "fair and square" numbers there are in a given interval. That's not necessarily the same thing as having to generate all of them.

Having done (or read) the analysis, we know that the number of "fair and square" numbers between m and n is the number of palindromic numbers between ⌈sqrt(m)⌉ and ⌊sqrt(n)⌋ (we've shown how to get those values in previous posts)--let's call those values m' and n' respectively--for which the sum of the squares of the digits is less than 10.
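Those ceiling and floor square roots come from the earlier posts; a self-contained Newton-iteration sketch (my reconstruction, not necessarily the posts' code) looks like this:

```haskell
-- Integer square root by Newton's method; exact for arbitrary Integers.
isqrt :: Integer -> Integer
isqrt 0 = 0
isqrt n = go n
  where go x = let x' = (x + n `div` x) `div` 2
               in if x' >= x then x else go x'

floorSqrt, ceilSqrt :: Integer -> Integer
floorSqrt = isqrt                                 -- ⌊sqrt(n)⌋
ceilSqrt n = let r = isqrt n
             in if r * r == n then r else r + 1   -- ⌈sqrt(n)⌉
```

Starting the iteration at n and stopping as soon as it fails to decrease gives the floor; bumping by one when n isn't a perfect square gives the ceiling.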

Let's just consider d-digit base 10 numbers for d > 1, so we don't have to worry about 0 or 3. If [m'..n'] includes all d-digit base 10 numbers, or, given our theorem, at least the d-digit base 10 numbers from 10...01 to either 20...02 (if d is even) or 20..010...02 (if d is odd), then it has all the d-digit palindromes of the sort we want. (Writing them that way is serious handwaving about how …
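With less handwaving, the endpoints of that d-digit range can be written down directly (a sketch, for d > 1; the names are mine):

```haskell
-- Smallest and largest d-digit Ys per the theorem, for d > 1.
smallestY :: Int -> Integer
smallestY d = 10 ^ (d - 1) + 1                          -- 10...01

largestY :: Int -> Integer
largestY d
  | even d    = 2 * (10 ^ (d - 1) + 1)                  -- 20...02
  | otherwise = 2 * 10 ^ (d - 1) + 10 ^ (d `div` 2) + 2 -- 20...010...02
```

So for d = 3 the range runs from 101 to 212, and for d = 4 from 1001 to 2002.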

"Compare and contrast," my English teachers used to say.
"Finally, although the subject is not a pleasant one, I must mention PL/1, a programming language for which the defining documentation is of a frightening size and complexity. Using PL/1 must be like flying a plane with 7000 buttons, switches and handles to manipulate in the cockpit. I absolutely fail to see how we can keep our growing programs firmly within our intellectual grip when by its sheer baroqueness the programming language —our basic tool, mind you!— already escapes our intellectual control. And if I have to describe the influence PL/1 can have on its users, the closest metaphor that comes to my mind is that of a drug. I remember from a symposium on higher level programming language a lecture given in defense of PL/1 by a man who described himself as one of its devoted users. But within a one-hour lecture in praise of PL/1, he managed to ask for the addition of about fifty new “features”, little supposing that the mai…

Good company

Just saw something on Hacker News pointing to a tweet from John Carmack:

"I want to do a moderate sized Haskell project, and not fall to bad habits. Is there a good forum for getting code review/criticism/coaching?"

From a later tweet from Mr. Carmack: "The Haskell code I started on is a port of the original Wolf 3D...."

I'm sure I'm not at his level, but it's reassuring to be in good company... as I feel I am in my opinion of C++.

UPDATE: here's an HN link. This comment is particularly interesting; I'll have to keep an eye open for new ghc releases.

More attempts at optimization

I set up for profiling, and I made two changes. First, I put back the memoization of powers of ten. Second, it occurred to me that most of the time I invoke oddDigitPal, I'm invoking it twice with just the middle digit changing. So, I went for

oddDigitsPals :: Integer -> Int -> [Integer] -> [Integer]

oddDigitsPals topHalf nDigits middles =
    let noMiddle = topHalf * tenToThe (nDigits + 1) + backwards topHalf
    in [noMiddle + m * tenToThe nDigits | m <- middles]
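For context, here is that definition together with plausible versions of the helpers it relies on (my reconstructions, not necessarily the originals): backwards reverses an Integer's decimal digits, and tenToThe is just exponentiation.

```haskell
-- Plausible helper definitions (not necessarily the originals).
backwards :: Integer -> Integer
backwards = read . reverse . show

tenToThe :: Int -> Integer
tenToThe n = 10 ^ n

-- Build all odd-digit palindromes sharing topHalf, varying the middle
-- digit; nDigits is the number of digits in topHalf.
oddDigitsPals :: Integer -> Int -> [Integer] -> [Integer]
oddDigitsPals topHalf nDigits middles =
    let noMiddle = topHalf * tenToThe (nDigits + 1) + backwards topHalf
    in [noMiddle + m * tenToThe nDigits | m <- middles]
```

For example, oddDigitsPals 12 2 [0, 1, 2] builds 12021, 12121, and 12221 while computing the expensive noMiddle part only once.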

I also changed over to Int (fixed-precision machine integers, 32 bits on this machine) for numbers of digits. I'm not worried at this point about going for fair and square numbers of over two billion digits. (Little John, you're on your own after that, OK?)

It did make some difference, based on profiling output. Most notably, it took about 14% off bytes allocated. Time was less impressive; with the profiling pulled, it took execution time down from not quite 1.9 seconds to not quite 1.7 seconds.

A C solution that I downloaded, compiled, and ran …

Just goes to show that Knuth was right

One other thing occurred to me as a possible optimization: I'm raising 10 to some integer power a lot. Why not try

powersOfTen :: [Integer]
powersOfTen = [10 ^ i | i <- [0..]]

tenToThe :: Int -> Integer
tenToThe n = powersOfTen !! n

which would memoize those pesky exponentiations? (!! lets you retrieve elements from lists sort of as if they were arrays, with "subscripts" starting at zero.)

It was easy enough to try out, but the results were disappointing. Even on my Eee 900A, with a 32-bit processor that you'd think would get the most benefit, the variations in time output from one run to the next were large enough that I can't say with certainty that it made any difference at all. Time output for the first large data set:

real    0m1.093s
user    0m1.068s
sys     0m0.016s

For the second large data set:

real    0m5.531s
user    0m5.472s
sys     0m0.044s

These are with the program compiled--I still haven't done the file opening code.

One thing that doesn't carry immediately over

One thing that you can do with a compiled Haskell program that doesn't lend itself to ghci is I/O redirection. That's why I have yet to run the large data sets against the code running under the interpreter--I'll have to modify main to take a file name and use it as input source.
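A minimal sketch of that change, assuming the actual per-case work is factored into a pure function (the stand-in solve here just echoes its input):

```haskell
import System.Environment (getArgs)

-- Stand-in for the real processing; purely String -> String.
solve :: String -> String
solve = id

-- Read from a named file when one is given, else from stdin, so the
-- same main works compiled (with redirection) and under ghci.
main :: IO ()
main = do
  args <- getArgs
  input <- case args of
             (fname : _) -> readFile fname
             []          -> getContents
  putStr (solve input)
```

Under ghci you'd then say something like :main A-large.in instead of relying on shell redirection.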

I will do it, though. I'm highly motivated, because it's related to what caused me to start all this in the first place.

Since I don't have Code Jam time limits...

...I should improve the style.

Lisp is a wonderful language. It's the first (largely) applicative language. Having a simple form that all language constructs follow means you can write seriously powerful tools to manipulate programs without wasting your time on convoluted parsing.

Compare that with the horrors of parsing C++, which strictly speaking is impossible! Thanks to templates, you have to solve the halting problem to successfully parse C++. Even ignoring that, there are ambiguous constructs, with a rule that says which way to decide in the presence of ambiguity. I suspect it all goes back to Stroustrup's eschewing formal language-based tools when writing that first ad hoc preprocessor for what was then "C with Classes". (BTW, Perl has the same problem. Perhaps there's something about kitchen sink languages.)

All that said, Haskell style isn't Lisp style. Any serious Haskell programmer would look at my palindrome program and say I lean far too heavily …