Listening for tremors in granite samples, and collecting and analyzing that data, just got easier and faster for the University of Toronto's civil engineering department, thanks to high-performance computing clusters (HPCCs).
The department's rock fracture dynamics facility conducts research into how rock behaves when subjected to pressure, liquids, and changes in temperature.
The research applies to understanding the structure and stability of buildings during an earthquake, as well as of bridges, dams, and mines, explained Paul Ruppert, director of strategic research systems at the department, on Wednesday.
“The more we can understand the nature of the rock that you’re digging through, or drilling through, the better we can predict whether or not that structure is going to fail,” he said.
The facility collects an enormous amount of data, specifically 400MB per second. “So we fill up our 6TB array of data in about four hours.”
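Those two figures are consistent with each other: at 400MB per second, a 6TB array fills in a little over four hours. A rough back-of-the-envelope check in Python (the constants below are simply the numbers Ruppert cites, using decimal terabytes):

    # Sanity check of the quoted acquisition rate and array capacity.
    ACQUISITION_RATE_MB_S = 400        # data collected per second, in megabytes
    ARRAY_CAPACITY_TB = 6              # storage array capacity, in terabytes

    capacity_mb = ARRAY_CAPACITY_TB * 1_000_000      # 1TB taken as 1,000,000MB
    fill_time_hours = capacity_mb / ACQUISITION_RATE_MB_S / 3600

    print(f"Array fills in roughly {fill_time_hours:.1f} hours")   # ~4.2 hours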
The problem arises when the researchers want to observe samples over a period of weeks to see the impact of prolonged pressure, said Ruppert.
While they could observe such events before installing the HPCC, he said, not only were the events limited in number given the amount of data that had to be collected, but the data couldn't be analyzed in real time.
“Up till now, the best we’ve been able to do is grab the data stream and then do post processing because we didn’t have that kind of processing power available,” said Ruppert.
The facility's HPCC is configured with 64 Dell PowerEdge 1950 two-socket servers equipped with 64-bit dual-core Intel Xeon processors, for a total of 256 processing cores. The cluster runs both Red Hat Linux and Microsoft Windows Server 2003, and provides 18.9TB of disk storage and 320GB of total memory.
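The headline figures work out per node roughly as follows; the even split of memory and storage across the 64 servers is an assumption for illustration, not a detail from the article:

    # Per-node breakdown of the cluster specs quoted above (assumed even split).
    NODES = 64                  # Dell PowerEdge 1950 servers
    SOCKETS_PER_NODE = 2        # two-socket servers
    CORES_PER_SOCKET = 2        # dual-core Intel Xeon

    total_cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET   # 256 cores, as quoted
    memory_per_node_gb = 320 / NODES                            # 5GB per node (assumed)
    storage_per_node_tb = 18.9 / NODES                          # ~0.3TB per node (assumed)

    print(total_cores, memory_per_node_gb, round(storage_per_node_tb, 2))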
Dell Canada provided in-kind funding that was matched financially by Ottawa, Ont.-based Canada Foundation for Innovation (CFI), a government agency that invests in research infrastructure.
Besides assisting a customer, Dell's contribution to this research also benefits the community as a whole, said Debora Jensen, the company's vice-president of the Canadian advanced systems group.
With cluster computing, researchers can do near real-time testing on samples (about six seconds behind real time) and actually visualize what's going on inside the rock, said Ruppert. “The better model we have, the better we can predict what’s going on.”
He said graduate students are “running code that takes one day to execute on a regular desktop computer. That can cut almost a year off their master’s program, for example, by doing it 256 times on the cluster.”
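The arithmetic behind that claim, assuming 256 independent one-day runs that would otherwise execute back to back on a desktop (the workload itself is an illustration; only the one-day runtime and the 256 cores come from the article):

    # Illustrative speed-up behind the "almost a year" figure.
    RUNS = 256                   # assumed number of independent runs
    DAYS_PER_RUN_DESKTOP = 1     # one day per run on a desktop, as quoted
    CLUSTER_CORES = 256          # cores available on the cluster, as quoted

    desktop_days = RUNS * DAYS_PER_RUN_DESKTOP                      # 256 days, run serially
    cluster_days = (RUNS / CLUSTER_CORES) * DAYS_PER_RUN_DESKTOP    # ~1 day if all run at once

    print(f"Desktop: {desktop_days} days; cluster: {cluster_days:.0f} day(s); "
          f"saving roughly {desktop_days - cluster_days:.0f} days")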
Dell’s involvement in the HPCC space goes beyond merely providing customers with technology and services, said the company’s director of enterprise solutions, Reza Rooholamini. “We are using HPCC to assess technologies and architectures to help us realize this vision of a data centre of the future.”
That data centre of the future, he said, will be one where scalability, manageability, and efficient utilization are non-issues – and those desired attributes can be tested in an HPCC environment.
In fact, part of Dell’s ‘scalability strategy’ is to promote scaling out over scaling up when it comes to expanding IT infrastructure to meet mounting demands, said Rooholamini.
With scaling out, a company can capitalize on existing infrastructure investments and need not fear interrupted service if one machine goes down, because “with the scale out model, comes redundancy.”
HPCC may give the University of Toronto better real-time data analysis, but it’s not the end of the road just yet, said Ruppert. “Now we’re starting to see that we do have the computing capability to monitor in real time, but still can’t do it economically because we can’t put a supercomputer on every bridge in the world.”