[BDCSG2008] Simplicity and Complexity in Data Systems (Garth Gibson)

Energy community and HPC. It is cheaper to collect a lot of samples and run simulations to decide where to drill (the extremely costly part). Review of several modeling efforts for science. They also keep a collection of failure and maintenance-cycle records for their hardware. Job interruptions grow linearly with the number of chips (quite an interesting result). Compute power on the Top 500 still doubles every year, and so do the failure rates. A lost disk becomes more and more painful, since regeneration currently takes about 8 hours per terabyte and is predicted to take weeks. All this calls for a change in design. The approach is to spread data as widely as possible, which can linearly reduce reconstruction times. File systems become "object" file systems such as the Google FS and Hadoop FS. pNFS: scalable NFS coming soon. A key goal: reuse the tools people are already using, to speed up adoption and make it appealing to users. ...
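A back-of-the-envelope sketch in Python of why spreading reconstruction work across many disks shrinks rebuild time roughly linearly; the 8 h/TB rate comes from the talk, the capacity and spread widths are made-up illustrative numbers:

```python
# Back-of-the-envelope model of declustered rebuild time.
# Only the ~8 h/TB single-disk rate is from the talk; the rest
# are illustrative assumptions.

DISK_CAPACITY_TB = 1.0          # data to reconstruct from one failed disk
REBUILD_RATE_TB_PER_H = 0.125   # ~8 h/TB when a single disk does the work

def rebuild_hours(spread_width: int) -> float:
    """Hours to reconstruct one disk's data when the rebuild work
    is spread across `spread_width` surviving disks in parallel."""
    per_disk_tb = DISK_CAPACITY_TB / spread_width
    return per_disk_tb / REBUILD_RATE_TB_PER_H

for width in (1, 8, 64):
    print(f"spread over {width:3d} disks -> {rebuild_hours(width):6.2f} h")
# spread over   1 disks ->   8.00 h
# spread over   8 disks ->   1.00 h
# spread over  64 disks ->   0.12 h
```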

Mar 26, 2008 · 1 min · 181 words · Xavier Llorà

[BDCSG2008] Computational Paradigms for Genomic Medicine (Jill Mesirov)

Jill is reviewing what is going on with data and biology. There has been an explosion in the data they generate (in both volume and throughput). Simulations have also become common practice, robot operations, etc.: more and more data. Some numbers: their center now uses 4.8K processors and 1440+ terabytes of storage. The challenge is giving the proper tools to biologists (not CS people). The two key topics of the talk: computational paradigms and computational foundations. They rely heavily on genome expression arrays (rows are patients, columns are genes, values are expression levels). A simple example: classifying leukemias (an example of how they can be distinguished using expression arrays). Take patient samples, extract messenger RNA, and then create the expression signatures (high dimensionality, low training-sample count). They repeated the same approach to predict prognosis outcomes for brain cancer, but for this problem there was no signal strong enough to make them accurate. Genes work in regulatory networks (sets of genes), so they redid the analysis at that level, in effect adding background knowledge to the problem, which boosted the results and made treatment possible. But the problem is that there needs to be an infrastructure that is easy to use and able to replicate experiments. The infrastructure should integrate and interact with components. It should support technical and computer-illiterate users equally. Two interfaces (visual and programmatic). Access to a library of functions, the ability to write pipelines, language agnostic, and built on web services (scalable from a laptop to clusters). The name: GenePattern. They are collaborating with Microsoft on a tool (a Word document) that links to the pipelines and the underlying data (and can rerun with other versions) and appends the results to the document too. ...
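To make the expression-array setup concrete, here is a minimal classification sketch in Python; the nearest-centroid rule and the synthetic data are my assumptions for illustration, not the method from the talk:

```python
# Minimal sketch of classification on an expression matrix
# (rows = patients, columns = genes), as in the leukemia example.
# The nearest-centroid classifier and the toy data are illustrative
# assumptions, not what was presented.
import numpy as np

rng = np.random.default_rng(0)

n_genes = 500
# Toy signatures: two subtypes, few samples, many genes
# (the "high dimensionality, low training-sample count" regime).
aml = rng.normal(loc=0.0, scale=1.0, size=(10, n_genes))
all_ = rng.normal(loc=0.5, scale=1.0, size=(10, n_genes))

X = np.vstack([aml, all_])
y = np.array([0] * 10 + [1] * 10)   # 0 = AML, 1 = ALL

# One centroid (mean expression signature) per class.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(sample: np.ndarray) -> int:
    """Assign the sample to the class with the nearest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - sample, axis=1)))

new_patient = rng.normal(loc=0.5, scale=1.0, size=n_genes)
print("predicted class:", classify(new_patient))
```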

Mar 26, 2008 · 2 min · 277 words · Xavier Llorà

[BDCSG2008] Clouds and ManyCores: The Revolution (Dan Reed)

Dan Reed (former NCSA director, now at Microsoft Research) continues the meeting's presentations. His elevator pitch: the infrastructure needs to take applications and the user experience into account. The current trend is that monolithic data consolidation is crumbling under dispersion, changing the traditional picture. The flavors of big data can be explored along two dimensions: regular/irregular versus structured/unstructured. He emphasizes focusing more on the user experience with big data, and on how you can manage resources at any given point. Cloud computing can help organically orchestrate these resources on demand. He also showed some examples of Dryad (the Microsoft take on map-reduce architectures) and DryadLINQ. Another interesting comment: ...
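For reference, the map-reduce dataflow that Dryad generalizes boils down to a map step emitting key/value pairs and a reduce step aggregating by key. A minimal Python sketch of that shape (the real DryadLINQ API is C#/LINQ over a Dryad execution graph, so this is only the idea, not the API):

```python
# Illustrative sketch of the map-reduce dataflow that systems like
# Dryad/DryadLINQ generalize; plain single-process Python, not Dryad.
from collections import defaultdict
from typing import Iterable, Iterator, Tuple

def map_phase(docs: Iterable[str]) -> Iterator[Tuple[str, int]]:
    """Emit (word, 1) pairs -- the 'map' vertices of the dataflow graph."""
    for doc in docs:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs: Iterable[Tuple[str, int]]) -> dict:
    """Group by key and sum -- the 'reduce' vertices."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data study group", "big data computing"]
print(reduce_phase(map_phase(docs)))
# {'big': 2, 'data': 2, 'study': 1, 'group': 1, 'computing': 1}
```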

Mar 26, 2008 · 1 min · 140 words · Xavier Llorà

[BDCSG2008] Text Information Management: Challenges and Opportunities (ChengXiang Zhai)

UIUC CS professor Zhai reviews text information management. ChengXiang starts by reviewing the importance of text as a natural way to encode human knowledge. His main focus is how to support the different uses of text information, and how those uses interact with models, applications, systems, and algorithms. This allowed him to motivate future research directions in information retrieval. Some of his interesting points: future research requires improvements in IR and NLP (shallow techniques: POS tagging, partial parsing, fragmentary semantic analysis), but NLP remains fragile and domain oriented. Machine learning algorithms are still not scalable, and there is not enough training data to satisfy the algorithms' requirements. Data mining has lots of algorithms, but only for salient patterns. ...
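As a concrete anchor for the IR side of the talk, a tiny TF-IDF ranking sketch in Python; the toy corpus, the smoothing, and the scoring are illustrative assumptions, not anything Zhai presented:

```python
# Minimal TF-IDF retrieval sketch to illustrate the kind of IR
# machinery the talk is about. Corpus and weighting are toy choices.
import math
from collections import Counter

corpus = [
    "text encodes human knowledge",
    "information retrieval ranks text documents",
    "machine learning needs training data",
]

docs = [Counter(doc.split()) for doc in corpus]
N = len(docs)

def idf(term: str) -> float:
    """Inverse document frequency with add-one smoothing."""
    df = sum(1 for d in docs if term in d)
    return math.log((N + 1) / (df + 1)) + 1.0

def score(query: str, doc: Counter) -> float:
    """Sum of TF-IDF weights for the query terms in the document."""
    return sum(doc[t] * idf(t) for t in query.split())

query = "text retrieval"
ranked = sorted(range(N), key=lambda i: score(query, docs[i]), reverse=True)
for i in ranked:
    print(f"{score(query, docs[i]):.2f}  {corpus[i]}")
```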

Mar 26, 2008 · 2 min · 234 words · Xavier Llorà

Big Data Computing Study Group 2008

I am lucky to attend the Big Data Computing Study Group 2008. The lineup of speakers is impressive. The event is held at Yahoo! Sunnyvale, and Thomas Kwan (a UIUC alumnus now at Yahoo!) is helping organize it. I will keep blogging about it for the rest of the day.

Mar 26, 2008 · 1 min · 48 words · Xavier Llorà