cGA, Parallelism, Processes, and Erlang
Back in Fall 2006 I was lucky to be in the right place at the right time. Kumara Sastry and David E. Goldberg were working to pulverize some preconceptions about how far you could scale genetic algorithms, and I was lucky enough to help as best I could. It turned out that the answer was pretty simple: as far as you want. The key to that result was, once again, built on Georges Harik’s compact genetic algorithm (cGA). The results were published in a paper titled *Toward routine billion-variable optimization using genetic algorithms*, if you are curious.

Anyway, back on track. A few days ago I was playing with Erlang and I coded, just for fun, [yet another cGA implementation, now in Erlang]({{< ref "/posts/yet-another-cga-implementation-now-in-erlang.md" >}} "yet another cGA implementation, now in Erlang"). The code was pretty straightforward, so why not take another crack at it and write an Erlang version that uses some of the ideas from that paper?

The idea we used in the paper was simple: slice the probabilistic model into smaller segments and update all those model fragments in parallel. The only caveat, if you go over the cGA model, is that you need the evaluation of two individuals to decide which way to update the model. You also need to know when to stop, that is, when your global model has converged. The flow is pretty simple: ...
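Before getting into the details, here is a minimal sketch (mine, written in modern Erlang, not the code from the paper or from the earlier post) of what one model fragment could look like as a process: it owns a slice of the probability vector, samples bits from it, applies the usual cGA update when the winner and loser disagree on a bit, and reports whether its slice has converged. The module and function names, the message shapes, and the convergence thresholds are all my own choices for illustration.

```erlang
%% Sketch only: one model fragment as an Erlang process owning a slice
%% of the cGA probability vector. Names and thresholds are illustrative.
-module(cga_fragment).
-export([start/2, sample/1, update/3, converged/1]).

%% Spawn a fragment owning Size probabilities, all initialized to 0.5.
%% N is the virtual population size used in the cGA update rule.
start(Size, N) ->
    Probs = lists:duplicate(Size, 0.5),
    spawn(fun() -> loop(Probs, N) end).

%% Ask the fragment for a sampled slice of bits.
sample(Pid) ->
    Pid ! {sample, self()},
    receive {bits, Bits} -> Bits end.

%% Push the winner/loser bit slices so the fragment can update its model.
update(Pid, WinnerBits, LoserBits) ->
    Pid ! {update, WinnerBits, LoserBits},
    ok.

%% Check whether every probability in the slice is close to 0 or 1.
converged(Pid) ->
    Pid ! {converged, self()},
    receive {converged, Answer} -> Answer end.

loop(Probs, N) ->
    receive
        {sample, From} ->
            Bits = [case rand:uniform() < P of true -> 1; false -> 0 end
                    || P <- Probs],
            From ! {bits, Bits},
            loop(Probs, N);
        {update, WinnerBits, LoserBits} ->
            %% Standard cGA rule: move each probability 1/N toward the
            %% winner's bit whenever winner and loser disagree on it.
            Updated = lists:zipwith3(
                        fun(P, W, L) when W =:= L -> P;
                           (P, 1, 0) -> min(1.0, P + 1.0 / N);
                           (P, 0, 1) -> max(0.0, P - 1.0 / N)
                        end, Probs, WinnerBits, LoserBits),
            loop(Updated, N);
        {converged, From} ->
            %% Arbitrary illustrative thresholds for "close enough" to 0 or 1.
            From ! {converged,
                    lists:all(fun(P) -> P < 0.05 orelse P > 0.95 end, Probs)},
            loop(Probs, N)
    end.
```

A driver would spawn one such fragment per slice, gather the sampled bits from all of them to build and evaluate the two individuals, broadcast the winner/loser slices back for the parallel update, and declare global convergence once every fragment answers `true`.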