Netflix Prize: Number of ratings per customer

Does anyone have a good distribution for characterizing the number of ratings per customer? Plotting the histogram on log-log axes (where the vertical axis is density, not count), it’s clearly not the straight line a power law would imply:

Histogram of the number of ratings per customer

The red line is the gamma distribution fit by the method of moments. It’s way too high on the left, and way too low on the right. Can anyone think of anything better?
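For concreteness, here is a sketch of the method-of-moments gamma fit alongside a log-normal fit, which often tracks heavy-tailed count data better. The data below is synthetic stand-in data, not the actual Netflix counts, which would be loaded from the training set instead:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for ratings-per-customer (heavy-tailed)
counts = rng.lognormal(mean=4.0, sigma=1.2, size=100_000)

# Gamma fit by the method of moments:
#   shape k = mean^2 / var,  scale theta = var / mean
m, v = counts.mean(), counts.var()
k, theta = m * m / v, v / m

# Log-normal fit by matching the moments of log(counts)
mu, sigma = np.log(counts).mean(), np.log(counts).std()
```

Overlaying both fitted densities on the log-log histogram makes the comparison direct: a power law would be a straight line on those axes, while a log-normal bends downward like a parabola, which may better match the curvature in the plot.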

There’s also a related discussion in this thread on the Netflix Prize forums.

Posted in Random Thoughts | Leave a comment

Game Design PowerPoint Now Online

I’ve been giving a talk on game design which has been surprisingly well received. Several people have asked me for a copy of my PowerPoint presentation. The problem was that my PowerPoint file had lots of images but little text. I think that’s the most engaging format – certainly TV shows don’t just display notes on the screen while someone talks. But the downside was that, for those who wanted to show the ideas to their friends, the PowerPoint just wasn’t sufficient.

So, I’ve created a new one, which is just my notes on top of the images. The images become kind of hard to see, but hey, if you want the full effect, you’ll have to come to my talk. 🙂

Posted in Random Thoughts | Leave a comment

Giving Grey Thumb talk tomorrow

I’m kicking off a Capture The Flag bot contest at Grey Thumb tomorrow. You can find more details at the breve Capture The Flag page. I’ll post my PowerPoint slides as soon as they’re done.

Update: The slides and simple example bots are now available here.

Posted in Random Thoughts | Leave a comment

Two New Projects

I’ve added two new projects to the Projects page. The first is something my son Miles likes: it plays sounds whenever a key is hit. The second is some notes on creating AI for Sodarace. At the moment, it’s just about how to interface a program to the Sodaracer.

Posted in Random Thoughts | Leave a comment

Models break down when you exploit them

Models break down when you exploit them. Examples:

  • “Number of lines of debugged code” is a good measure of programmer productivity, until you tie programmer reward to it. Then the programmers write very verbose code.
  • Stock price is a good measure of the performance of a company, unless you tie management’s reward to it by giving them stock options. Then they manipulate the stock price, as happened in the ’90s at places like Enron and WorldCom.
  • Macroeconomic models, for example that high unemployment coincided with low wage inflation and vice versa, are valid until people try to exploit them.
    Such trade-offs, [Robert Lucas] argued, existed only if no one expected policymakers to exploit them. Unanticipated inflation would erode the real value of wages, making workers cheaper to hire. But if central bankers tried to engineer such a result, by systematically loosening monetary policy, then forward-looking workers would pre-empt them, raising their wage claims in anticipation of higher inflation to come. Cheap money would result in higher prices, leaving unemployment unchanged.
    – The Economist July 13th 2006, Big questions and big numbers

What does this imply for adaptive behavior? When a creature discovers a relationship, that may hold as long as it doesn’t try to exploit it.
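The Lucas point can be made concrete with a toy sketch (my own illustration, not from the quoted article): suppose unemployment responds only to the inflation *surprise*, the gap between actual and expected inflation. A one-off surprise lowers unemployment, but a systematic policy that everyone anticipates moves nothing:

```python
# Toy Lucas-critique illustration. The natural rate (6%) and
# sensitivity beta are made-up numbers for the sketch.
def unemployment(actual_inflation, expected_inflation,
                 natural=0.06, beta=0.5):
    """Unemployment falls only with *unanticipated* inflation."""
    surprise = actual_inflation - expected_inflation
    return natural - beta * surprise

# Naive regime: workers expect 2% inflation, policy delivers 5%
u_surprise = unemployment(0.05, 0.02)      # dips below the natural rate

# Exploited regime: the loosening is systematic, so workers
# expect 5% too, and pre-empt it with higher wage claims
u_anticipated = unemployment(0.05, 0.05)   # back at the natural rate
```

The relationship between inflation and unemployment holds in the historical data, yet vanishes the moment the policymaker tries to use it systematically.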

Posted in Adaptive Behavior - Tidbits

Talking at Boston Postmortem

I’ll be giving my talk on game design, Just One More Game…, at the Boston Postmortem tomorrow. Here’s hoping it goes over well!

Posted in Game Design | Leave a comment

Philosophy of AI

A while ago I gave a talk at the “Dangerous Ideas Seminar” at MIT’s AI Laboratory. It was called “Breaking Out of the Black Box.” It was about how people tend to create systems out of black boxes, yet systems designed by nature have more interdependence between the parts.

A little later, at the Indie A.I. seminar, I gave the talk “Taking Vague Boundaries Seriously.”

Posted in Adaptive Behavior | Leave a comment

Work on Multiple Projects at Once?

Garth Zeglin, a good friend from grad school, was staying with me for a few days, and he mentioned that he likes to have two projects going at once. That way, when he gets stuck on one, he can switch to the other for a while. I wonder how this would work in practice? How could it work at work? And how does it interact with the advice I once heard from Bryan Adams, a grad student in Rodney Brooks' lab at MIT? He said when you stop working for the day, you shouldn't completely finish a task; you should leave something very straightforward to do. That way, when you're starting the next day, you can be eased back into "the zone."

Posted in Brain Rental, Process | 1 Comment

Many Smaller Error Measures Better Than A Single Overall Measure

Eric Bonabeau pointed me to an interesting article on calibrating dynamic models: “Model calibration as a testing strategy for system dynamics models” by Rogelio Oliva in the European Journal of Operational Research (v. 151, 2003, pp. 552–568). In it, he points out some problems with the global “estimate parameters by minimizing an overall score” approach:

  • It often makes sense to estimate parameters of a subsystem based on that subsystem’s performance. “Graham (1980) provides two strong reasons for estimating parameters with data below the level of aggregation. First, most factual knowledge about the system falls into the category of unaggregate data, and parameters can be estimated independently of model equations. Second, parameters that are directly observable, or that can be estimated from unaggregate data – records, interviews, etc. – increase the model’s ability to anticipate events outside of the historical experience, and are intrinsically more robust than parameters that are inferred from observed behavior.” [Thought: what about minimizing a weighted sum of the overall error and the subsystem error? That might have the right properties: the subsystem parameters are “mostly” controlled by the subsystem error, but they can be “tweaked” a bit to reduce the overall error.]
  • “Since not all model parameters affect all output variables in the model, as the number of data series in the error function increases, individual parameters become less significant; variations in individual parameter values have a small impact in a complex error function, thus resulting in wider confidence intervals for the estimated parameters. Similarly, increasing the number of parameters to be estimated through an error function reduces the degrees of freedom in the estimation problem, thus resulting in wider confidence intervals, i.e., less efficient estimators.”
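The bracketed thought above (a weighted sum of subsystem error and overall error) can be sketched as follows. Everything here is hypothetical: a two-parameter model where parameter a also governs a subsystem with its own data, plus a made-up weight w:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical model: y = a*x1 + b*x2, where the a*x1 piece is a
# subsystem with its own (unaggregated) data y1.
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=100), rng.normal(size=100)
a_true, b_true = 2.0, -1.0
y1 = a_true * x1 + rng.normal(scale=0.1, size=100)                 # subsystem data
y = a_true * x1 + b_true * x2 + rng.normal(scale=0.1, size=100)    # overall data

def loss(params, w=0.3):
    a, b = params
    overall = np.mean((y - (a * x1 + b * x2)) ** 2)
    subsystem = np.mean((y1 - a * x1) ** 2)
    # The subsystem error "mostly" controls a; the overall error
    # can still tweak it, and alone determines b.
    return (1 - w) * subsystem + w * overall

res = minimize(loss, x0=[0.0, 0.0])
a_hat, b_hat = res.x
```

With w near 0 the subsystem data pins down a independently, in the spirit of Graham’s unaggregated estimation; raising w trades some of that robustness for overall fit.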

Maybe it’s time to ditch the idea of a single task with a single error measure? It doesn’t really fit into the idea of a living being that performs many tasks, reusing parts of one task in another, and often performing several tasks at once.

It fits well with the cybernetics-style “many interacting control loops,” especially if you consider a control loop as trying to optimize some cost function rather than maintain a set point. It also fits well with my video game ideas of having a model, goals, and strategies at many levels. Where does this leave the MDL/Reinforcement Learning approach?

Graham, A.K., 1980. Parameter estimation in system dynamics modeling. In: Randers, J. (Ed.), Elements of the System Dynamics Method. Productivity Press, Cambridge, MA, pp. 143–161.

Posted in Adaptive Behavior - Tidbits