[BDCSG2008] Data-Intensive Scalable Computing (Randy Bryant)

Randy opens by reviewing models of parallelism and how Google's Map-Reduce model (the core of Yahoo's Hadoop) is changing the picture. He emphasizes that data is an integral part of the computational process (something that has been largely disregarded). The Map-Reduce model can help greatly because of its fault-tolerance capabilities. He then reviews the two traditional parallel programming models (shared-memory and message-passing), how Map-Reduce differs from them (and how this increases the I/O). Initiatives like Hadoop cut down the cost of accessing large-scale computing.
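As a rough illustration of the map-reduce pattern he discusses, here is a minimal single-machine word-count sketch (a hypothetical example, not code from the talk; real frameworks like Hadoop distribute the two phases across machines and re-run failed tasks, which is where the fault tolerance comes from):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(documents):
    """Map step: emit (word, 1) pairs from each input document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce step: group pairs by word and sum the counts."""
    # A real framework shuffles/sorts between the phases; here we just sort.
    for word, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

docs = ["the cat sat", "the dog sat"]
print(dict(reduce_phase(map_phase(docs))))
# → {'cat': 1, 'dog': 1, 'sat': 2, 'the': 2}
```

Because each map task is independent and each reduce task only needs its own key group, a failed task can simply be re-executed elsewhere, which is what makes the model attractive for large clusters of unreliable commodity hardware.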