Three years ago, Carnegie Mellon University and a group of 18 IT vendors and users, including FedEx Corp., Microsoft Corp., NASA, Oracle Corp. and Pfizer Inc., formed the Sustainable Computing Consortium in an effort to improve software quality and reliability. In 2003, the consortium became part of Carnegie Mellon's CyLab initiative, which was formally launched late that year.
CyLab now involves more than 200 faculty members, students and researchers at the Pittsburgh-based university. In addition to sustainable computing, CyLab is working on IT issues such as device security, data privacy and the development of self-healing systems and networks. Pradeep Khosla, co-director of CyLab and dean of the Carnegie Institute of Technology, discussed the organization's research efforts in an interview with Computerworld this month.
What happened to the Sustainable Computing Consortium? We were the bigger umbrella that absorbed it. The type of work that we were doing subsumed all the work that the Sustainable Computing Consortium was doing. We have an initiative in software assurance.
But do you have more of an IT security focus than the SCC did? Actually, that's what people think, but the real focus is next-generation IT. It means systems that are measurable, available, secure, sustainable and trustworthy.
What are your goals for project deliverables? All of our research is divided into "thrusts." There is a thrust on resilient and self-healing systems. Is that about security? No. But it is highly related to security, because if you build a system that is resilient or self-healing, some of these security issues and ramifications go away. We have a thrust on user authentication and access control; we have thrusts on (topics such as) data and information privacy, threat prediction modeling and business economics.
How is that different from what IBM, for instance, is doing with autonomic computing? It doesn't differ with respect to the goals. But it differs in the approach we take. We typically tackle problems that are higher risk.
How far are you from proving a concept? We have a demonstration system working for secure storage. We are now expanding that to what we call self-security, self-healing, self-analyzing. For example, if you look at the current router and switch technology, there is no way to trace a packet back to the source. If you start an attack, and even if I trace it back to your computer -- first of all, there is no way, but let's assume there is a way -- you can say, "It was not me working on it, it was somebody else."
Now imagine there are biometrics on this computer, where you're being authenticated all the time, so I can come back and say, "It was not only your computer that started this for sure, but you were working on this when it happened." To trace packets, you have to think about what the networking (and) routing infrastructure will look like. So we have developed a coding scheme where we can take an existing infrastructure, put code there, and it has the ability to track packets. Right now, it exists in a lab, but in the next three to five years, it's going to be everywhere.
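Khosla doesn't detail CyLab's coding scheme in the interview, but one well-known approach in the traceback literature works along the lines he describes: routers probabilistically stamp a small mark into packets as they forward them, and the victim of an attack reconstructs the path from the marks it collects. The sketch below is an illustrative simulation of that idea, not CyLab's actual scheme; the router IDs, marking probability and field names are all hypothetical.

```python
import random

MARK_PROB = 0.2  # per-router marking probability (illustrative, not from the interview)

def forward(packet, router_id, p=MARK_PROB):
    """Each router overwrites the mark with its own ID with probability p;
    otherwise it increments the distance counter of any existing mark."""
    if random.random() < p:
        packet["mark"] = router_id
        packet["distance"] = 0
    elif packet["mark"] is not None:
        packet["distance"] += 1
    return packet

def send_along_path(path, n_packets, p=MARK_PROB):
    """Simulate n_packets traversing the router path; return the
    (router, distance) marks the victim observes on arriving packets."""
    observed = []
    for _ in range(n_packets):
        pkt = {"mark": None, "distance": 0}
        for router in path:
            forward(pkt, router, p)
        if pkt["mark"] is not None:
            observed.append((pkt["mark"], pkt["distance"]))
    return observed

def reconstruct(observed, path_len):
    """Group observed marks by distance from the victim; distance 0 is the
    router nearest the victim, the largest distance is nearest the attacker."""
    by_distance = {}
    for router, dist in observed:
        by_distance.setdefault(dist, set()).add(router)
    return [sorted(by_distance.get(d, set())) for d in range(path_len)]
```

Given enough attack packets, every router on the path shows up at a unique distance, so the victim can rebuild the route hop by hop even though no single packet carries the full path -- which is what makes the scheme deployable on existing infrastructure.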
You said the group's backers will be meeting in April to set the agenda for next year. What do you think will be some of the new items? One agenda item may be malicious code detection. How do you detect that?
Wasn't that one of the goals of the SCC? Their goal was to reduce the number of bugs. Their thesis was that bugs create security holes. There is nothing wrong with that premise; it's just a very narrow premise, because you can have no bugs and have malicious code.