Work on Multiple Projects at Once?

Garth Zeglin, a good friend from grad school, was staying with me for a few days, and he mentioned that he likes to have two projects going at once.  That way, when he gets stuck on one, he can switch to the other for a while.  I wonder how this would work in practice?  How could it work at work?  And how does it interact with the advice I once heard from Bryan Adams, a grad student in Rodney Brooks' lab at MIT?  He said that when you stop working for the day, you shouldn't completely finish a task; you should leave something very straightforward to do.  That way, when you start the next day, you can be eased back into "the zone."

Posted in Brain Rental, Process | 1 Comment

Many Smaller Error Measures Better Than A Single Overall Measure

Eric Bonabeau pointed me to an interesting article on calibrating dynamic models: “Model calibration as a testing strategy for system dynamics models” by Rogelio Oliva in the European Journal of Operational Research (v. 151, 2003, pp. 552–568). In it, Oliva points out some problems with the global approach of estimating parameters by minimizing a single overall score:

  • It often makes sense to estimate parameters of a subsystem based on that subsystem’s performance. “Graham (1980) provides two strong reasons for estimating parameters with data below the level of aggregation. First, most factual knowledge about the system falls into the category of unaggregate data, and parameters can be estimated independently of model equations. Second, parameters that are directly observable, or that can be estimated from unaggregate data – records, interviews, etc. – increase the model’s ability to anticipate events outside of the historical experience, and are intrinsically more robust than parameters that are inferred from observed behavior.” [Thought: what about minimizing a weighted sum of the overall error and the subsystem error? That might have the right properties: the subsystem parameters are “mostly” controlled by the subsystem error, but they can be “tweaked” a bit to reduce the overall error. See the sketch after this list.]
  • “Since not all model parameters affect all output variables in the model, as the number of data series in the error function increases, individual parameters become less significant; variations in individual parameter values have a small impact in a complex error function, thus resulting in wider confidence intervals for the estimated parameters. Similarly, increasing the number of parameters to be estimated through an error function reduces the degrees of freedom in the estimation problem, thus resulting in wider confidence intervals, i.e., less efficient estimators.”
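
To make the bracketed thought above a little more concrete, here is a minimal sketch of the weighted-sum idea. Everything in it is invented for illustration (the toy model, the fake “observed” series, and the weight alpha); the only point is that the subsystem parameters are mostly pinned down by the subsystem error, while the overall error can still nudge them.

```python
# Minimal sketch: calibrate by minimizing a weighted sum of subsystem error
# and overall error. The model, data, and weight are all made up for illustration.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 10.0, 50)
# Pretend these came from unaggregated records (subsystem) and aggregate history (overall).
subsystem_obs = 2.0 * np.exp(-0.3 * t)
overall_obs = 5.0 + 2.0 * np.exp(-0.3 * t) * np.cos(0.8 * t)

def simulate(params):
    """Toy 'model': returns (subsystem output, overall output) for given parameters."""
    a, k, b, w = params
    sub = a * np.exp(-k * t)          # subsystem driven by a, k only
    overall = b + sub * np.cos(w * t) # overall output reuses the subsystem
    return sub, overall

def weighted_error(params, alpha=0.8):
    """alpha weights the subsystem error; (1 - alpha) weights the overall error."""
    sub, overall = simulate(params)
    sub_err = np.mean((sub - subsystem_obs) ** 2)
    overall_err = np.mean((overall - overall_obs) ** 2)
    return alpha * sub_err + (1.0 - alpha) * overall_err

result = minimize(weighted_error, x0=[1.0, 0.1, 1.0, 1.0], method="Nelder-Mead")
print(result.x)  # a, k set mostly by the subsystem fit, tweaked by the overall fit
```

With alpha = 1 this reduces to calibrating the subsystem on its own data; with alpha = 0 it collapses back to the single overall score that Oliva criticizes. The interesting region is in between.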

Maybe it’s time to ditch the idea of a single task with a single error measure? It doesn’t really fit into the idea of a living being that performs many tasks, reusing parts of one task in another, and often performing several tasks at once.

The many-error-measures view fits well with the cybernetics-style “many interacting control loops,” especially if you consider a control loop as trying to optimize some cost function rather than maintaining a set point.  It also fits well with my video game ideas of having a model, goals, and strategies at many levels.  Where does this leave the MDL/Reinforcement Learning approach?

Graham, A.K., 1980. Parameter estimation in system dynamics modeling. In: Randers, J. (Ed.), Elements of the System Dynamics Method. Productivity Press, Cambridge, MA, pp. 143–161.

Posted in Adaptive Behavior - Tidbits | Comments Off