
Telecommunications: Security Issues; Policy Standards; Management This report by MIT’s Industrial Liaison Program identifies selected MIT research in the areas of telecommunications security issues, policy standards, and management. For more information, please contact MIT’s Industrial Liaison Program at +1-617-253-2691.

DEPARTMENTS, GROUPS AND LABORATORIES
    CENTER FOR INFORMATION SECURITY AND PRIVACY (CISP)
    CENTER FOR TECHNOLOGY, POLICY, AND INDUSTRIAL DEVELOPMENT (CTPID)
    CENTER FOR TRANSPORTATION AND LOGISTICS (CTL)
    COMMUNICATIONS FUTURES PROGRAM (CFP)
    COMPUTER SCIENCE AND ARTIFICIAL INTELLIGENCE LABORATORY (CSAIL)
    CRYPTOGRAPHY AND INFORMATION SECURITY (CIS) GROUP
    DEPARTMENT OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE
    ISENET: INFRASTRUCTURE FOR SECURE EMBEDDED NETWORKS
    SECURITY OF CRYPTOAPIS - THE CAMBRIDGE-MIT INSTITUTE (CMI)

FACULTY RESEARCH PROJECTS
    PROF. HARI BALAKRISHNAN
        Protecting SSH from known_hosts Address Harvesting
        Real-Time Anomaly Detection
    DR. DAVID CLARK
        Overlay Networks: Peter Pan or Viable Networks of Future
        Communications Research Network (CRN)
        Future Internet Design (FIND) at CSAIL
    PROF. SRINIVAS DEVADAS
        Securing Shared Untrusted Storage by Using TPM 1.2 Without Requiring a Trusted OS
    PROF. SHAFRIRA GOLDWASSER
        On Best-Possible Obfuscation
    PROF. RICHARD C. LARSON
        Center for Engineering Systems Fundamentals (CESF)
    PROF. ROBERT MILLER
        Web Wallet: Preventing Phishing Attacks by Revealing User Intentions
    PROF. RONALD L. RIVEST
        Security of CryptoAPIs
        Securely Obfuscating Re-Encryption
        Cascadable and Commutative Cryptography
        Practical Group Signatures Without Random Oracles
    DR. KAREN R. SOLLINS
        Designing for Internet Management: The Knowledge Plane


DEPARTMENTS, GROUPS AND LABORATORIES

CENTER FOR INFORMATION SECURITY AND PRIVACY (CISP) Principal Investigator: Ronald L Rivest http://cisp.csail.mit.edu/ CISP's mission is to conduct breakthrough, long-term research in information security and privacy. CISP addresses both fundamental problems that our society presently faces, such as Internet security, and new security challenges in emerging computing environments, such as the millions of embedded networked devices coming online. CISP’s goal is both to develop the theoretical foundations for secure systems and to engineer practical systems.

CENTER FOR TECHNOLOGY, POLICY, AND INDUSTRIAL DEVELOPMENT (CTPID) Acting Director: Joel Moses http://web.mit.edu/ctpid/www/ The Center for Technology, Policy, and Industrial Development (CTPID) is building productive partnerships between academia, government, and industry to support global economic growth and to advance policies that preserve the environment and benefit society at large. CTPID's mission is to develop new knowledge, advanced technological strategies, and innovative partnerships that address global industrial and policy issues and to provide an enriched environment for MIT faculty and students to pursue their intellectual interests.

CENTER FOR TRANSPORTATION AND LOGISTICS (CTL) Director: Yossi Sheffi http://ctl.mit.edu/ The Center for Transportation & Logistics is part of the Engineering Systems Division in the School of Engineering. The center is widely recognized as an international leader in the field of transportation and logistics. Along with basic contributions to the understanding of transportation system planning, operations and management, its efforts include significant contributions to logistics modeling and supply chain management for shippers; to technology and policy analysis for government; and to management, planning and operations for trucking, railroad, air and ocean carriers. In addition to administering the Master of Engineering in Logistics program, the Center helps coordinate the extensive transportation and logistics research conducted throughout MIT. At any given time, research efforts typically number over 100, ranging from modest projects involving a single faculty member and a few students to large-scale international programs involving scores of people and a full-time research staff. Over 50 faculty and staff are affiliated with the Center through participation in its education, research and outreach programs. The interchange of information, ideas and inspiration among the faculty, students and research staff makes it one of the most dynamic focal points of activity in the transportation and logistics field.


COMMUNICATIONS FUTURES PROGRAM (CFP) Principal Investigator: David D Clark http://cfp.mit.edu/ The Communications Futures Program (CFP) is a partnership between university and industry at the forefront of defining the roadmap for communications and its impact on adjacent industries. CFP’s mission is to help its industry partners recognize the opportunities and threats arising from these changes by understanding the drivers and pace of change, building technologies that create discontinuous innovation, and building the enablers that make such innovation meaningful to partners… CFP believes that while the role of technology in industry transformation is important, equally important business drivers in the communications industry can accelerate or slow this process. Drivers include the widespread availability of broadband infrastructure, the role of regulation, the role of enabling capabilities such as privacy and security, and effective business models and rights management from which companies can profit. Other drivers include alignment across the communications value chain for the speedy rollout of new services. CFP is focused on four important issues: (1) Invent technologies that create discontinuous innovation; (2) Create enablers of industry transformation around broadband infrastructure, regulation, privacy and security, edge core dynamics, rights management, and others; (3) Align members across the communications value chain to speed innovation; and (4) Develop awareness around big opportunities from emerging technologies. CFP’s working group structure allows industry participants to be engaged closely with faculty in the research and to provide valuable input into the direction of the program. Working groups are chaired by faculty and industry sponsors. Working groups are fluid: they are launched as new issues emerge and disbanded as issues become less relevant. The initial working groups are: (1) Technologies That Create Discontinuous Innovation, (2) Last Mile Broadband Infrastructure, (3) Security and Privacy, and (4) Edge Core Dynamics: Business Models and Technologies. Leaders of each working group plan their meetings to suit the needs of the issues they address. Several working groups meet every two weeks over conference calls, others meet once a month, and yet others meet face to face several times a year.

COMPUTER SCIENCE AND ARTIFICIAL INTELLIGENCE LABORATORY (CSAIL) Director: Victor W Zue http://www.csail.mit.edu/index.html The MIT Computer Science and Artificial Intelligence Laboratory, or CSAIL, was formed on July 1st, 2003 by the merger of the Artificial Intelligence Lab and the Laboratory for Computer Science, each with four decades of rich history. In so doing, the merger created the largest interdepartmental laboratory on campus, with nearly eight hundred members and ninety-plus principal investigators. CSAIL members come from seven academic departments -- Electrical Engineering and Computer Science, Mathematics, Aeronautics and Astronautics, Brain and Cognitive Science, Mechanical Engineering, Earth, Atmospheric and Planetary Sciences, and the Harvard-MIT Division of Health Sciences and Technology. CSAIL is also the home of the World Wide Web Consortium. The primary mission of CSAIL is research in both computer science and artificial intelligence, broadly construed. It is organized into three directorates:
* Artificial Intelligence: This area of research aims to understand and develop systems - living and artificial - capable of intelligent reasoning, perception and behavior. Specific research areas include: classical AI, computational biology, computer graphics, computer vision, human language technology, machine learning, medical informatics, robotics, and the semantic web.
* Systems: This area of research aims to discover common principles, models, metrics, and tools of computer systems, both hardware and software. Specific research areas include compilers, computer architecture and chip design, operating systems, programming languages, and computer networks.
* Theory: This area of research studies the mathematics of computation and its consequences. Specific research areas include algorithms, complexity theory, computational geometry, cryptography, distributed computing, information security, and quantum computing.
Much of the research at CSAIL is done as projects by faculty members working with undergraduate and graduate students, post-doctoral fellows, plus research staff. Sometimes a large number of PIs collaborate on lab-wide projects that cut across directorates, such as the Oxygen Project or the T-Party Project.

CRYPTOGRAPHY AND INFORMATION SECURITY (CIS) GROUP Principal Investigator: Shafrira Goldwasser http://groups.csail.mit.edu/cis/ The Cryptography and Information Security (CIS) Group at the MIT Laboratory for Computer Science seeks to develop techniques for securing tomorrow's global information infrastructure by exploring theoretical foundations, near-term practical applications, and long-range speculative research. The group aims to understand the theoretical power of cryptography and the practical engineering of secure information systems, from appropriate definitions and proofs of security, through cryptographic algorithm and protocol design, to implementations of real applications with easy-to-use security features. Further areas of interest include the relationship of this field to others such as complexity theory, quantum computing, algorithms, machine learning, and cryptographic policy debates.

DEPARTMENT OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE Head: W Eric L Grimson http://www-eecs.mit.edu/ The primary mission of the department is the education of its students. Its three undergraduate programs traditionally have attracted over 30% of all MIT undergraduates, and its doctoral programs are highly ranked and selective. A leader in cooperative education, the department has operated the highly successful VI-A Internship Program since 1917. In 1993 it established a five-year Master of Engineering program, under which undergraduate students stay for a fifth year and receive simultaneously a Bachelor's degree and a Master of Engineering degree. The five-year curriculum is structured and seamless across the traditional boundary between undergraduate and graduate study, and seamless across the traditional disciplines of Electrical Engineering and Computer Science. The mission of the Electrical Engineering and Computer Science Department is to produce graduates who are capable of taking leadership positions across the broad spectrum of electrical engineering and computer science. Given the breadth of its faculty, the department pursues a wide range of research topics, including theoretical computer science, computer systems and architecture, graphics, robotics, computer vision, machine learning, computational applications in medicine, computational biology, communications, information theory, control systems, large-scale systems analysis, circuits, devices, power and energy, numerical computing, novel materials for devices, nanotechnology, manufacturing, biotechnology, speech and hearing, prosthetic devices, analog and hybrid circuits and devices, and many more.

ISENET: INFRASTRUCTURE FOR SECURE EMBEDDED NETWORKS Principal Investigator: Hari Balakrishnan http://cisp.csail.mit.edu/ISenet.html As computing devices become ubiquitous, two contradictory trends are appearing. On one hand, embedded computing elements and sensors are becoming widely disseminated and unsupervised. Critical elements of our national infrastructure for power distribution, traffic management and catastrophe detection are becoming dependent on computing devices that are largely unsupervised and unprotected. On the other hand, the cost and repercussions of security breaches are increasing as we place more responsibility on the devices that surround us. Worse, these devices, unlike Internet hosts, are exposed to more challenging kinds of attacks, such as physical and environmental attacks. In the event of such attacks, our society has no strong guarantee that the infrastructure will keep functioning, even at reduced capacity. Fortunately, these computing environments are still in their infancy, so there is an opportunity to design embedded networks with security treated as a first-class citizen. It would be a shame if, years from now, NSF had to fund work to attempt the repair of security holes in pervasive computing platforms or sensor networks, as NSF is doing for the Internet today. Worse, these security holes may wreak damage at a scale hitherto unforeseen by society. We propose to build an infrastructure for securing embedded networks that we call ISENet (pronounced ice-net). ISENet secures devices deployed in untrustworthy or hostile environments so that they can withstand these more challenging attacks. To design, build and evaluate this infrastructure we will conduct research in six different areas: tamper-resistant hardware, OS security, network security, cryptography, privacy and policy, and application deployment. The results of ISENet will include fundamental contributions to the six areas as well as a prototype that leverages advances in each area. The ISENet prototype consists of a new hardware platform that reliably detects physical tampering, an operating system that makes it easy to secure untrusted applications, ad-hoc wireless networks that will deliver messages as long as there is a single physical path not controlled by the attacker, and redundant inexpensive sensors to handle environmental attacks. We will deploy the ISENet prototype in the application domains of automobile traffic networks and roof-top sensor networks. Because applications in these domains must operate continuously, the prototype will be designed to be highly reliable, upgradeable in the field, and easy to use. The advances in the six areas are important by themselves but will also support the prototype as follows. Tamper-resistant hardware will protect against physical attacks in addition to software attacks. ISENet's operating system will allow users to control and isolate applications using encapsulation without having to understand the applications' security properties. ISENet's network architecture will leverage the safety properties provided by the hardware and OS to guarantee liveness under a wide range of attacks. We will develop a theory of physical security and physical obfuscation to prove security properties for ISENet. Finally, privacy policies will be articulated and mechanisms developed based on the transparency of our deployed systems -- the systems will expose privacy concerns and technologies to users, allowing them to modify their behavior.

SECURITY OF CRYPTOAPIS - THE CAMBRIDGE-MIT INSTITUTE (CMI) Principal Investigator: Ronald L Rivest http://www.cambridge-mit.org/project/home/default.aspx?objid=2227 Cryptoprocessors hold the secret keys used to compute and verify PINs, the Personal Identification Numbers that are essential for controlling access to many services in this digital age. This research on the security of Cryptographic Application Programming Interfaces (CryptoAPIs) showed that the algorithms these devices use to encipher data are generally sound; however, the ways in which they are used generally are not. The research team – led by Dr Ross Anderson (University of Cambridge) and Dr Ronald Rivest (MIT) – established systematic ways of both identifying attacks on financial cryptographic systems and repairing them. During the course of the project, researchers also found API attacks on mobile phone systems, as well as on systems used for electronic ticketing. The research team were the first to understand the vulnerabilities of “chip-and-PIN” technology, so when fraud started to emerge they were able to explain what was happening to both the victims and the media. Without this information, many victims would have had no recourse to their banks. The team found exploitable vulnerabilities in at least one version of almost every cryptographic processor they were able to examine. In the end, this research led to a much better understanding of API security. The attacks identified by the research team compelled widespread product redesign by all the major vendors of cryptographic processors, including large companies such as IBM. Old designs that had accumulated vulnerabilities over the years suddenly had to be cleaned up. The team’s findings have led to a thorough redesign of existing product ranges and to a more careful and systematic design of such systems in the future.


FACULTY RESEARCH PROJECTS

PROF. HARI BALAKRISHNAN Professor of Computer Science and Engineering Research Director, Systems (CSAIL) Head, Networks and Mobile Systems Group (CSAIL) http://nms.lcs.mit.edu/~hari http://www.csail.mit.edu/user/881 http://www.rle.mit.edu/cwn/ Hari Balakrishnan’s research interests are in networked computer systems; his recent and current projects include rcc (verifiable Internet routing), MONET (a multi-homed overlay network for improving network availability), IRIS (distributed hash table protocols for resolving "flat" names, such as Chord, and systems such as SFR and DOA based on "flat" names), Cricket (an accurate indoor location system, now commercially available), CarTel (a sensor computing system for automotive applications), Spam-I-am (spam control using quotas), and Medusa/Borealis (data stream processing; research partly commercialized at StreamBase Systems).

Protecting SSH from known_hosts Address Harvesting Address harvesting is the act of searching a compromised host for addresses of other hosts to attack. SSH, the tool of choice for administering and communicating with mission-critical hosts, security-critical hosts, and even some routers, leaves each user's list of previously contacted hosts open to harvest by anyone who compromises the user's account. Attackers have combined address harvesting with myriad mechanisms for impersonating legitimate users in order to authenticate to SSH, and have succeeded in breaching systems at major academic, commercial, and government institutions. In this study, we detail the threat posed should attackers automate this mode of attack to create a self-propagating worm. We then present a countermeasure to defend against address harvesting attacks, with an implementation written for OpenSSH. If you use SSH, your ssh client stores within your home directory a list mapping the host names and IP addresses of every remote host you have connected to onto each host's public key. This database, known as the known_hosts file, has been used by attackers who compromise user accounts, steal passwords and identity keys, and then use the list of hosts to identify targets on which the same password or key can be used to compromise additional accounts. It is also possible that worms could use known_hosts data to identify new targets. As of [September 12, 2005], we have collected known_hosts data from 179 hosts, 69 of which ran the script as root and submitted data from all user accounts. In total, we received 37,771 anonymized known_hosts entries from user accounts. These known_hosts entries point to a total of 12,041 addresses on 107 valid /8 networks (67% of all valid /8 networks). The data collection script that was run on these hosts also parsed SSH2 identity key files to see what fraction of these key files had the encryption flag set. We were quite surprised to see that only 38.3% of 447 key files were encrypted.
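The countermeasure mentioned above replaces plaintext host names in known_hosts with salted one-way hashes, so a harvested file no longer reveals which hosts a user has contacted; OpenSSH later shipped this idea as the HashKnownHosts option. The following is a minimal illustrative sketch (not the project's OpenSSH patch) of how such hashed entries can be produced and checked. The |1|salt|hash field layout follows OpenSSH's convention, but treat the details as an assumption made for illustration.

import base64
import hashlib
import hmac
import os

def hash_known_hosts_entry(hostname):
    """Return a hashed known_hosts host field in OpenSSH's |1|salt|hash style.

    A fresh random salt is chosen per entry, so the file no longer lists
    readable host names, yet the client can still check a candidate host
    by recomputing the keyed hash.
    """
    salt = os.urandom(20)  # 20-byte random salt, one per entry
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|%s|%s" % (base64.b64encode(salt).decode(),
                         base64.b64encode(digest).decode())

def entry_matches(entry, candidate_hostname):
    """Check whether a hashed entry corresponds to a candidate host name."""
    _, _, salt_b64, digest_b64 = entry.split("|")
    salt = base64.b64decode(salt_b64)
    digest = hmac.new(salt, candidate_hostname.encode(), hashlib.sha1).digest()
    return hmac.compare_digest(digest, base64.b64decode(digest_b64))

if __name__ == "__main__":
    entry = hash_known_hosts_entry("mission-critical.example.edu")
    print(entry)                                                  # e.g. |1|f3Jx...|9hQk...
    print(entry_matches(entry, "mission-critical.example.edu"))   # True
    print(entry_matches(entry, "other-host.example.edu"))         # False

An attacker who steals such a file can still test guesses against specific host names, but can no longer simply read off a target list, which is the harvesting threat described above.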


Real-Time Anomaly Detection Attackers routinely scan the IP address space of a target network to seek out vulnerable hosts that they can exploit. One of the challenges is the difficulty of defining portscan activity. How a portscan is performed (i.e., the scanning rate and the coverage of the IP address space) is entirely up to each scanner; therefore, the scanner can evade any detection algorithm that depends on parameters under its control. In principle, however, the access pattern of port scanning can be quite different from that of other, legitimate activities. Since port scanners have little knowledge of the configuration of a target network (they would not have to scan the network otherwise), their access pattern often includes non-existent hosts or hosts that do not have the requested service running. By contrast, there is little reason for legitimate users to initiate connection requests to inactive servers. Based on this observation, we formulate a detection problem that provides the basis for an online algorithm; a more detailed treatment of the resulting bounds and an evaluation of the detection algorithm using real network traces appear in the published work. Worm Detection and Throttling: A worm is a program containing malicious code that spreads from host to host without human intervention. One instance of such malcode, a scanning worm, probes a vast set of randomly chosen IP addresses to locate vulnerable servers that it wishes to infect. Analogous to the portscan detection problem, this random scanning behavior can be used to identify an infected machine that is engaged in worm propagation. While sequential hypothesis testing is promising for detecting scanning worms, there is a significant hurdle to overcome. In [2], we discuss these problems at length and present two innovations that enable fast detection of scanning worms. (*) Reverse Sequential Hypothesis Testing -- It is important to design the detection algorithm so that it reacts promptly when a benign host is infected and becomes a worm. To address this, we evaluate the likelihood ratio in reverse chronological order of the observations. In [2], we show that reverse sequential hypothesis testing is equivalent to a forward sequential hypothesis test that sets the lower threshold to 1-e, resets the likelihood ratio to 1 and repeats the test when the likelihood ratio crosses the lower threshold, and terminates the test only when the likelihood ratio becomes greater than or equal to the upper threshold. (*) Credit-Based Connection Rate Limiting -- It is necessary to limit the rate at which first-contact connections can be initiated in order to ensure that worms cannot propagate rapidly between the moment scanning begins and the time at which the scan's first-contact connections are timed out and observed to be failures by our reverse sequential hypothesis test. Credit-based connection rate limiting works by granting each local host, i, a starting balance of ten credits (Ci = 10) which can be used for issuing first-contact connection requests. Whenever a first-contact connection request is observed, a credit is subtracted from the sending host's balance (Ci = Ci - 1). If the successful acknowledgment of a first-contact connection is observed, the host that initiated the request is issued two additional credits (Ci = Ci + 2). No action is taken when connections fail, as the cost of issuing a first-contact connection has already been deducted from the issuing host's balance.
Finally, first-contact connections are blocked if the host does not have any credit available (a short code sketch of this credit scheme appears at the end of this section). Research Agenda: (1) Understanding anomalous network activities: In many problem domains, we lack good models of anomalous network activities that capture the unique characteristics useful for distinguishing them from benign ones. To address this, we intend to take traces from many vantage points and to look for patterns that can be incorporated into a model.


(2) Algorithms resilient to evasion: When designing detection algorithms in network security, one must be concerned with adversaries who can craft an attack to evade detection once the algorithm is publicized. The barrier should be high enough to resist evasion. (3) Evaluation: Detection algorithms must be evaluated both analytically and through trace-driven simulation, and false positive and false negative cases should be analyzed. Estimating the amount of state required to run an algorithm is also important, since real-time detection of network anomalies often requires monitoring high-bandwidth networks. (4) Extension to distributed detection systems: We also plan to extend this work to distributed real-time detection problems. For instance, Internet-scale worm propagation can be identified more readily if detection systems are deployed in many places to cover various vantage points. Coordination among distributed detection systems should be one of the key design components.
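The credit-based connection rate limiting rules above translate almost directly into code. The sketch below is an illustrative reconstruction from the description in this report, not the authors' implementation; the host identifiers and the call sites that would feed it connection events are assumptions.

from collections import defaultdict

START_CREDITS = 10  # each local host i starts with C_i = 10

class CreditRateLimiter:
    """Sketch of credit-based connection rate limiting as described above.

    A host spends one credit per first-contact connection request and earns
    two back when such a request succeeds, so benign hosts (whose first
    contacts mostly succeed) stay solvent, while a scanning host (whose first
    contacts mostly fail) quickly exhausts its balance and is blocked.
    """

    def __init__(self):
        self.credits = defaultdict(lambda: START_CREDITS)

    def allow_first_contact(self, host):
        """Called when `host` issues a first-contact request; False means block."""
        if self.credits[host] <= 0:
            return False             # no credit left: block the request
        self.credits[host] -= 1      # C_i = C_i - 1 on every observed request
        return True

    def on_success(self, host):
        """Called when a first-contact connection is acknowledged."""
        self.credits[host] += 2      # C_i = C_i + 2 on success

    def on_failure(self, host):
        """Failures cost nothing further; the credit was spent at request time."""
        pass

A real deployment would also need to decide what counts as a "first contact" (e.g., a destination the host has not previously connected to) and whether to cap how large a balance can grow; those details are not specified in the summary above.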

DR. DAVID CLARK Senior Research Scientist Director, MIT Internet & Telecoms Convergence Consortium (ITC) http://ana.lcs.mit.edu/ http://www.csail.mit.edu/user/1526 David Clark is a Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Laboratory, where he has worked since receiving his Ph.D. there in 1973. Since the mid 70s, Dr. Clark has been leading the development of the Internet; from 1981-1989 he acted as Chief Protocol Architect in this development, and chaired the Internet Activities Board. More recent activities include extensions to the Internet to support real-time traffic, pricing and related economic issues, and policy issues surrounding the Internet, such as broadband local loop deployment. His current research looks at re-definition of the architectural underpinnings of the Internet, and the relation of technology and architecture to economic, societal and policy considerations.

Overlay Networks: Peter Pan or Viable Networks of Future The goal of this project is to better understand the architectural, industrial-organization and regulatory impacts of a new class of networks called overlay networks. Examples of overlay networks we consider in this project include routing overlays (e.g. RON, SureRoute), content distribution networks (both proprietary systems such as Akamai, as well as P2P systems) and security overlays (e.g. Onion routing, Freenet). Abstractly, an overlay network is a set of servers deployed across the Internet that a) provide some sort of infrastructure to one (or possibly several) applications, b) in some way take responsibility for the forwarding and handling of application data in ways that are different from or in competition with what is part of the basic Internet, and c) are operated in an organized and coherent way by third parties (which may include collections of end-users) to provide a well-understood service that is infrastructure-like. The evolution of overlays is not that dissimilar to the evolution of the Internet itself. The Internet started out as a government-funded research network running on top of the Public Switched Telephone Network (PSTN). It was a data application, mostly unregulated, that was supported on top of the public-utility regulated telephone networks. The Internet was an "overlay" that complemented the underlying basic infrastructure of the PSTN by adding new functionality (a packet-switched data network) to support the special needs of the research community (peer-to-peer computer communications). Most of the incremental investment in routers, servers, and access devices (PCs) was undertaken by new types of providers (Internet Service Providers or ISPs) and by end-users (Customer Premise Equipment or CPE) to complement the PSTN basic infrastructure already in place. With the commercialization of the Internet in the 1980s and its emergence as a mass market platform for broadband communications in the 1990s, the Internet has evolved into the principal platform for our global public communications infrastructure. Increasingly, IP packet transport is providing the basic transport medium for telephony and other multimedia applications (voice, video, and data). What was an "overlay" application has itself now become basic infrastructure. Over time, the traditional PSTN providers have come to play a larger role in managing the infrastructure and investment required to support the Internet. Deregulation, market growth, and innovation have resulted in a more complex and interdependent Internet infrastructure ecosystem. The success of the Internet owes much to the interoperability and connectivity supported by ubiquitous adoption of the IP protocols and adherence to the "end-to-end" (e2e) design principles that have governed Internet architecture for so long. However, the Internet's success has also posed significant problems. With growth have come heterogeneous services (not everyone needs or wants the same capabilities); new needs and requirements (support for real-time services or enhanced security); and complexity and size issues (arising from the sheer magnitude of today's Internet measured in terms of traffic and connectivity). To meet these challenges, the Internet needs to continue to evolve. In a process that looks at times like history repeating itself, the Internet is now spawning its own collection of "overlay" networks. There are many types and examples of overlays that arise to meet a range of purposes and needs. The emergence of these overlays raises interesting questions for the future of Internet architecture and the role of the Internet as a common platform for global communications. For example, are these "overlays" precursors to the future architecture of the Internet? Or are they nasty barnacles on the Internet that threaten the e2e connectivity and interoperability that have proven to be such a key aspect of the Internet's value? What are the implications of overlays for industry structure and for the regulation of our public communications infrastructure? Will such networks remain mainly an ingenious engineering artifact, failing to "grow up", or are they viable applications in light of technical, economic and regulatory forces? Open questions include understanding what constitutes an overlay, the motivations for their deployment and use, and the potential conflicts and tensions that may arise among stakeholders. The goal of this project is to frame such a discussion and provide further thought on the implications of overlays for Internet architecture, industry structure/business strategy, and public policy. Our initial analysis demonstrates that the policy questions raised by overlays are multifaceted and diverse.

Communications Research Network (CRN) …The Cambridge-MIT Institute has recognised the importance of the communications sector to the UK economy, and also that there is no single organisation which acts as a forum for the industry as a whole in the UK. It therefore created the Communications Research Network to bring the industry together with a focus on research and on how the industry might evolve in the future.


The CRN is led by Prof Jon Crowcroft of the Cambridge University Computer Laboratory and by Prof David Clark of the MIT Computer Science and Artificial Intelligence Laboratory. It has collaborated closely with the Communications Futures Program, which was established at MIT and supported by the Cambridge-MIT Institute. In addition to Cambridge and MIT, the CRN also has principal investigators from UCL – Prof Mark Handley – and Oxford University – Prof Helen Margetts. In late 2004 it was joined by Dr David Cleevely, founder of Analysys Ltd, as Chairman. The CRN has had major investment and involvement from BT as a leading industrial partner and has had significant involvement and participation from more than 100 other companies. The CRN has pursued a number of distinct activities since its foundation: (*) a programme of research looking at fundamental technologies which are key to the development of the communications industry, such as future concepts in wireless networking, future-resilient network architectures, control of Internet congestion by pricing, Internet-mediated interactions and photonics in communications networks; (*) a programme of events which bring together leading thinkers from business and academia to debate key issues for the industry, such as the future direction of spectrum management, the future of wireless technology, innovation in telecoms, and the role of telecoms in applications such as transport and healthcare; (*) convening working groups where experts from industry and academia work regularly on specific areas of interest to the future of the industry. Working groups include: Critical Infrastructure Protection, Innovation in Telecoms, Interconnection, Spectrum Policy and Technology, and Telecoms for Transport. The Future: In April 2006, CRN Limited was founded as a company limited by guarantee. CRN Limited was founded with two shareholders: BT and CMI. Subsequent founder members will join them as shareholders. It has also set up a membership scheme with four categories of member: (*) founder members drive CRN research and share ownership of the intellectual property created; (*) associate members participate in events and Working Groups, and enjoy privileged access to the knowledge generated by the research teams; (*) an academic membership scheme encourages pioneering researchers around the world to participate in the work of the CRN; (*) an observer membership scheme and strategic partnerships with other networks maintain strong links with a wide community of organisations.

Future Internet Design (FIND) at CSAIL While the Internet of today, initially conceived over 30 years ago, has been stunningly successful, it also has a number of major limitations. It has persistent security problems, which many years of effort have not mitigated. Its industrial structure raises issues of investment and innovation, and the current plans of the service providers threaten the core architecture that is the basis of its success. The current Internet may not be suited for the computing environment of 10 years from now, which will not be PCs and servers but embedded processors, sensors and actuators. Issues such as these suggest that there is value in re-examining the Internet, not to change what it does but how it does it.


The Advanced Network Architecture group has, for several years, been concerned with the question of how a future Internet might be structured. In the recent past, the ANA group was part of a DARPA-funded project called NewArch, a multi-site collaboration to consider what an Internet of tomorrow might be if we could design it from scratch today, knowing what we now know. This project, while intellectually successful (see the project final report for a summary), did not result in an actual design for a new Internet. More recently, the National Science Foundation has put forward an ambitious research agenda: a challenge to the research community to envision what a global network of 10 or 15 years from now should be, and to propose the research that would get us there. This program will involve research teams from a number of universities, and will have the goal of generating coherent, integrated architectural proposals for a new Internet. We have been involved in the shaping of this program, and I currently have an agreement with the NSF to act as Architecture and Outreach Coordinator for the project, helping to bring intellectual coherence to the work being done by the various funded institutions across the U.S. The key to the successful redesign of the Internet is not technological innovation, but the realization that the Internet is shaped today by social, economic and policy forces. By recognizing this reality and responding to it, we can increase the utility and relevance of a future Internet, and improve the chances that our enhancements will be adopted. But most technologists are not trained to model these forces, or to design systems that respond to them. Our work thus begins with a multi-discipline conversation about design in a social space, with the goal of finding suitable design principles. It turns out that many parts of the Internet have been designed with features that respond to these issues, but the process of design has been intuitive, and oftentimes both the goal and the response are not explicitly articulated. As part of our work, we are attempting to develop explicit design tools to shape the economic and social experience on the Internet. We are now in the second year of NSF funding for this project. The results of the first year include a study of social design principles; a summary of requirements for a future Internet, with an emphasis on security, manageability and economic viability; an initial catalog of new design approaches in these areas; and a number of talks for NSF as part of the launch of their new initiative. We have also participated in the development of a plan for a major NSF program, called GENI, to develop and deploy an experimental infrastructure that will allow the testing and deployment of results from FIND research. This work continues, and more effort is now going into the planning for cross-institution collaboration meetings.

PROF. SRINIVAS DEVADAS Professor of Electrical Engineering and Computer Science Associate Department Head, Electrical Engineering and Computer Science (EECS) http://people.csail.mit.edu/devadas/ http://www.csail.mit.edu/user/792 Devadas's research interests include VLSI design, computer-aided design, computer architecture, hardware validation, network router hardware, computer security, and computational biology. One recent project he was involved with was building Aegis, a secure hardware processor. Currently, Devadas is working on trusted virtual computation and secure virtual storage as part of the Quanta T-Party project at MIT. He is also working on developing methods for protein structure prediction using machine learning and energy minimization techniques. Devadas served as the chair of Area II (Computer Science Graduate Program) from June 2003 to November 2005, and as the Research Director of Architecture, Systems and Networking within CSAIL from September 2003 to October 2005. He currently serves as Associate Head of the Department of Electrical Engineering and Computer Science.

Securing Shared Untrusted Storage by Using TPM 1.2 Without Requiring a Trusted OS We address the problem of using an untrusted server with a trusted platform module (TPM) to provide trusted storage for a large number of clients, where each client may own and use several different devices that may be offline at different times and may not be able to communicate with each other except through the untrusted server (over an untrusted network). The clients only trust the server's TPM; the server's BIOS, CPU, and OS are not assumed to be trusted. We show how the currently available TPM 1.2 technology can be used to implement tamper-evident storage, where clients are guaranteed to at least detect illegitimate modifications to their data (including replay attacks) whenever they wish to perform a critical operation that relies on the freshness and validity of the data. In particular, we introduce and analyze a log-based scheme in which the built-in monotonic counter of a TPM 1.2 chip is used to securely implement a large number of virtual monotonic counters, which can then be used to time-stamp data and provide tamper-evident storage without relying on a trusted BIOS, CPU, or OS. Tamper-tolerant storage, which guarantees that a client can continue to retrieve its original data even after a malicious attack, is provided by using data replication on top of the tamper-evident storage system. The functionality provided by these techniques is highly relevant today as computing becomes increasingly mobile and pervasive. More and more users today regularly use several independent computing devices -- such as a desktop at home, a laptop while traveling, a mobile phone, and another desktop at work -- each of which may be offline or disconnected from the other devices at different times. If such a user wanted to make her data available to all her devices wherever she goes, one solution would be to employ a third-party online storage service (such as Amazon S3 or others) to store her data. At present, however, most (if not all) such third-party online storage services require a high level of trust in the service provider and its servers, including the software running on these servers and the administrators of these servers. These techniques significantly reduce this requirement by only requiring that the user trust the TPM 1.2 chips on the storage servers, without needing to trust the servers' BIOS, CPU, OS, or administrators. Aside from giving the user more security when using mainstream online storage services, this new ability would also enable a user to potentially make use of machines owned by ordinary users, such as in a peer-to-peer network. As long as these other users' machines have a certified working TPM 1.2 chip, a user need not trust the owner of these machines, or the software running on these machines.
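The core guarantee described above is freshness: a client must be able to tell whether data returned by the untrusted server is the latest version it accepted, rather than a replayed older copy. The sketch below illustrates that idea in a deliberately simplified form; it is not the paper's log-based virtual-counter protocol, and the HMAC is a stand-in for the attestation that the real scheme would produce inside the TPM.

import hashlib
import hmac

# Stand-in for the TPM's signing key. In the real scheme the binding between
# counter value and data is attested by the TPM itself, not by a shared key
# known to the server software.
TPM_KEY = b"stand-in-for-tpm-attestation-key"

def attest(counter_value, data):
    """Server side: bind the current monotonic counter value to a data digest."""
    msg = counter_value.to_bytes(8, "big") + hashlib.sha256(data).digest()
    return hmac.new(TPM_KEY, msg, hashlib.sha256).digest()

class FreshnessChecker:
    """Client-side check that detects rollback/replay of stored data."""

    def __init__(self):
        self.last_counter = 0  # highest counter value this client has accepted

    def verify(self, data, counter_value, attestation):
        msg = counter_value.to_bytes(8, "big") + hashlib.sha256(data).digest()
        expected = hmac.new(TPM_KEY, msg, hashlib.sha256).digest()
        if not hmac.compare_digest(attestation, expected):
            return False                 # forged or corrupted attestation
        if counter_value < self.last_counter:
            return False                 # replay: older than data already seen
        self.last_counter = counter_value
        return True

Because the counter only moves forward, a server that replays an old copy cannot present an attested counter value at least as large as the one this client already saw, so the rollback is detected; making this work across several of a user's devices is exactly what the paper's virtual monotonic counters and logs add, and tamper tolerance additionally requires replication so that a good copy can still be retrieved.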


PROF. SHAFRIRA GOLDWASSER RSA Professor of Computer Science and Engineering http://theory.lcs.mit.edu/~shafi/ http://www.csail.mit.edu/user/733 Shafi Goldwasser is the RSA Professor of Electrical Engineering and Computer Science at MIT, a co-leader of the Cryptography and Information Security group and a member of the complexity theory group within the Theory of Computation Group and the Laboratory for Computer Science. Shafi Goldwasser’s focus is on cryptographic algorithms, pseudo-random number generation, digital signatures and identification, protocols (two-party and multi-party) and zero knowledge. As a member of the Cryptography and Information Security (CIS) group she seeks to develop techniques for securing tomorrow's global information infrastructure by exploring theoretical foundations, near-term practical applications, and long-range speculative research. The goal is to understand the theoretical power of cryptography and the practical engineering of secure information systems, from appropriate definitions and proofs of security, through cryptographic algorithm and protocol design, to implementations of real applications with easy-to-use security features. The CIS group is also interested in the relationship of their field to others, such as complexity theory, quantum computing, algorithms, machine learning, and cryptographic policy debates. In addition to research, Professor Goldwasser teaches courses in Computability, Languages, Complexity, and Cryptography and Cryptanalysis.

On Best-Possible Obfuscation An obfuscator is a compiler that transforms any program (which we view as a boolean circuit) into an obfuscated program (also a circuit) that has the same input-output functionality as the original program, but is otherwise “unintelligible”. Obfuscation has applications for cryptography and for software protection. A theoretical study of obfuscation was initiated with a focus on black-box obfuscation, where the obfuscated circuit should leak no information except for its (black-box) input-output functionality, and a family of functionalities that cannot be obfuscated in this sense was demonstrated. Subsequent research has shown further negative results as well as positive results for obfuscating very specific families of circuits, all with respect to black-box obfuscation. We study a new notion of obfuscation, which we call best-possible obfuscation. Best-possible obfuscation makes the relaxed requirement that the obfuscated program leaks as little information as any other program with the same functionality (and of similar size). In particular, this definition allows the program to leak non-black-box information. Best-possible obfuscation guarantees that any information that is not hidden by the obfuscated program is also not hidden by any other similar-size program computing the same functionality, and thus the obfuscation is (literally) the best possible. We study best-possible obfuscation and its relationship to previously studied definitions. Instead of requiring that an obfuscator strip a program of any non-black-box information, we require only that the (best-possible) obfuscated program leak as little information as possible. Namely, the obfuscated program should be "as private as" any other program computing the same functionality (and of a certain size). A best-possible obfuscator should transform any program so that anything that can be computed given access to the obfuscated program can also be computed from any other equivalent program (of some related size). A best-possible obfuscation may leak non-black-box information (e.g. the code of a hard-to-learn function), as long as whatever it leaks is efficiently learnable from any other similar-size circuit computing the same functionality. While this relaxed notion of obfuscation gives no absolute guarantee about what information is hidden in the obfuscated program, it does guarantee (literally) that the obfuscated code is the best possible. It is thus a meaningful notion of obfuscation, especially when we consider that programs are obfuscated every day in the real world without any provable security guarantee. We study how best-possible obfuscation relates to black-box obfuscation. Best-possible obfuscation is a relaxed requirement that departs from the black-box paradigm of previous work. We first observe that any black-box obfuscator is also a best-possible obfuscator. Furthermore, we present a separation between the two notions of obfuscation. The separation result considers the complexity class of languages computable by polynomial-sized ordered binary decision diagrams, or POBDDs; these are log-space programs that can only read their input tape once, from left to right. We observe that any POBDD can be best-possible obfuscated as a POBDD, whereas there are many natural functions computable by POBDDs that provably cannot be black-box obfuscated as any POBDD. These two results give new possibility results (for best-possible and indistinguishability obfuscation) and simple, natural impossibility results (for black-box obfuscation). Note that the impossibility result for black-box obfuscation only applies when we restrict the representation of the obfuscator's output to be a POBDD itself. We also compare the notions of best-possible and indistinguishability obfuscation. We show that any best-possible obfuscator is also an indistinguishability obfuscator. For efficient obfuscators the definitions are equivalent. For inefficient obfuscation, the difference between the two definitions is sharp: inefficient information-theoretic indistinguishability obfuscators are easy to construct, but the existence of inefficient statistically best-possible obfuscators, even for the class of languages recognizable by 3-CNF circuits, implies that the polynomial hierarchy collapses to its second level. We explore the limits of best-possible obfuscation. We begin by considering information-theoretically (statistically) best-possible obfuscation, and show that if there exist (not necessarily efficient) statistically secure best-possible obfuscators for the simple circuit family of 3-CNF circuits, then the polynomial hierarchy collapses to its second level. We also consider best-possible obfuscation in the (programmable) random oracle model. In this model, circuits can be built using special random oracle gates that compute a completely random function. Previously, this model was considered by Lynn, Prabhakaran and Sahai as a promising setting for presenting positive results for obfuscation. We show that the random oracle can also be used to prove strong negative results for obfuscation, presenting a simple family of circuits with access to the random oracle that provably cannot be efficiently best-possible obfuscated. This impossibility result extends to the black-box and indistinguishability obfuscation notions.
We note that using random oracles for obfuscation was originally motivated by the hope that giving circuits access to an idealized "box" computing a random function would make it easier to obfuscate more functionalities (and eventually perhaps the properties of the "box" could be realized by a software implementation). We, on the other hand, show that the existence of such boxes (or of a software implementation with the idealized properties) could actually allow the construction of circuits that are impossible to obfuscate. Although this negative result does not rule out that every circuit without random oracle gates can be best-possible obfuscated, we believe it is illuminating for two reasons: first, it serves as a warning sign when considering obfuscation in the random oracle model; and second, its proof hints that achieving general-purpose best-possible obfuscation in the standard model would require a significant breakthrough.
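For readers who want the intuition above in symbols, one common way to paraphrase the definition (this is a paraphrase, not a verbatim statement from the paper) is that whatever an efficient learner can extract from an obfuscation of a circuit, an efficient simulator could also have produced from any equivalent circuit of the same size:

\[
\forall\ \text{efficient } L\ \ \exists\ \text{efficient } S \ \text{such that}\ \
\forall\ C_1 \equiv C_2,\ |C_1| = |C_2|:\qquad
L\bigl(\mathcal{O}(C_1)\bigr)\ \approx\ S(C_2),
\]

where \(\mathcal{O}\) is the obfuscator, \(C_1 \equiv C_2\) means the circuits compute the same function, and \(\approx\) denotes computational indistinguishability (or statistical closeness, in the information-theoretic variant discussed above).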

PROF. RICHARD C. LARSON Mitsui Professor in Problems of Contemporary Technology Professor of Engineering Systems and Civil and Environmental Engineering Director, Center for Engineering Systems Fundamentals (CESF) http://esd.mit.edu/Faculty_Pages/larson/larson.htm The majority of Dr. Larson’s career has focused on operations research as applied to services industries. He is author, co-author or editor of six books and author of over 75 scientific articles, primarily in the fields of technology-enabled education, urban service systems (esp. emergency response systems), queueing, logistics and workforce planning. Dr. Larson's research on queues has not only resulted in new computational techniques (e.g., the Queue Inference Engine and the Hypercube Queueing Model), but has also been covered extensively in national media (e.g., ABC TV's 20/20). Currently he is founding Director of MIT's new Center for Engineering Systems Fundamentals.

Center for Engineering Systems Fundamentals (CESF) Engineering systems is an emerging field that is transforming the ways engineers are educated and work and the way research is conducted. Engineering systems involve large complex systems whose properties are determined not only by technology, but also by people’s behavior, plus the laws of physics and other natural sciences. These elements combine to produce a key characteristic of engineering systems: their socio-technical nature. Examples of socio-technical engineering systems include: (*) Distance-learning systems that deliver quality educational content to remote communities in developing countries; (*) A preparedness and response system for a possible influenza pandemic, as well as for hurricanes and other natural disasters; (*) A technology-enabled system for casting votes in national and local elections; and (*) Dynamic pricing systems for use of critical infrastructures, such as electric power, intra- and inter-city transportation and telecommunications. Often the “physics” of these systems is poorly understood because the dynamic equations are determined by a wide range of diverse factors. Many of these factors are cultural, psychological, and otherwise behavioral. Moreover, even the natural science of these socio-technical systems may not be understood because they may exhibit “emergent behavior,” as when a virulent strain of influenza virus suddenly and unpredictably mutates into an even more powerful and deadly variation. CESF researchers seek to understand these systems at the fundamental level. They do this not by postulating interesting theorems at the blackboard, but by hands-on work with real socio-technical systems that, independently of CESF, need deep engineering analysis to assist their planners and managers in moving forward in desired directions. Researchers in the related disciplines of Operations Research and Systems Control have developed most of these fields’ fundamental tenets by working with real systems having important problems requiring analysis, and then generalizing the solutions found to higher levels of abstraction. CESF researchers will also use this approach. CESF embraces problems operating at the Venn-diagram intersection of ‘traditional engineering,’ management (broadly interpreted) and social science. This is a recurring theme throughout ESD. Once these systems are better understood, we are interested in developing system designs that improve certain system properties such as robustness, resilience, reliability and operational transparency. CESF researchers seek to identify and extract fundamental concepts, methodologies and formalisms that will eventually define the new field called Engineering Systems. The process requires research on real problems, working at the intersection of engineering, management and social science. And, in the end, we are engineers, wanting to design and build systems, but with an awareness of the complexities and multi-faceted nature of such systems.

PROF. ROBERT MILLER
NBX Career Development Associate Professor of Computer Science and Engineering
http://people.csail.mit.edu/rcm/
http://www.csail.mit.edu/user/698

Rob Miller is an associate professor in the MIT EECS department and a member of the Computer Science and Artificial Intelligence Laboratory. His research interests span human-computer interaction, user interfaces, software engineering, and artificial intelligence. His current research focus lies at the intersection of security and user interfaces, with the goal of discovering how to build computer systems that are both safer and easier to use.

Web Wallet: Preventing Phishing Attacks by Revealing User Intentions
Phishing has become a significant threat to Internet users. Phishing attacks typically use legitimate-looking but fake emails and websites to deceive users into disclosing private information to the attacker. Most phishing attacks trick users into submitting their personal information online using a web form at a spoofed website. Even though using a web form to submit sensitive information is a common practice at most legitimate sites, web forms have several problems that make phishing attacks effective and hard to prevent. First, the appearance of a web site and its web forms is easy to spoof. A web site can control what it looks like in a user's browser, so a site's appearance cannot reliably reflect the site's true identity; yet users tend to judge a site's identity by its appearance. As a result, users may be tricked into submitting data to fraudulent sites. Second, web forms are used for submitting non-sensitive data as well as sensitive data: the same kind of web form used to submit search keywords on one site is used to submit credit card information on another. The semantic meaning of the input data is opaque to the browser, so the browser cannot adjust its protection to the sensitivity of the input data.

Many proposed solutions to phishing use toolbars that show different types of security messages to help users detect phishing sites. Users are also advised to look at the existing browser security indicators, e.g., the URL displayed in the address bar and the lock icon displayed in the status bar when a connection is SSL-protected. However, controlled user studies have shown that these security indicators are ineffective against high-quality phishing attacks.

We have designed new software, called the Web Wallet, to prevent phishing attacks. The Web Wallet is based on two design principles:
(1) Get the user's intention. Phishing attacks exploit the gap between the way a user perceives a communication and the actual effect of the communication. The Web Wallet bridges this gap by helping users convey their real intention to the browser when they submit data to web sites. Here the user's intention has two parts: (a) is the submitted data sensitive, and if so, what kind of sensitive information is it? and (b) where is the data being submitted? The Web Wallet is a browser sidebar through which users input sensitive information.
(2) Integrate security into the workflow. When users are doing tasks online, security is not their main concern, so effective security mechanisms should integrate themselves into the user's current workflow. The Web Wallet requires users to use it by disabling the corresponding sensitive input fields in web forms and making itself the only affordance for input. Moreover, the Web Wallet makes users explicitly acknowledge and indicate their intended site.

When a user sees a web form requesting sensitive information, she presses a dedicated security key on the keyboard to open the Web Wallet. Using the Web Wallet, she may type her information or retrieve her stored information, and the information is then filled into the web form. Before the fill-in, the Web Wallet always checks whether the current site is trustworthy enough to receive the sensitive information. If not, the Web Wallet requires the user to explicitly indicate where she wants the data to go by showing her a list of sites, including the current site, and letting her choose the site to which she wants to submit the data. If the user's intended site is not the current site (which probably indicates phishing), the Web Wallet warns the user about this discrepancy and gives her a safe path to her intended site. To use the Web Wallet correctly, users only need to remember one simple rule: "Always use the Web Wallet to submit sensitive information, by pressing the security key first." The implicit negative form of the same rule is: "Never submit sensitive information directly through a web form, because that is not a secure practice."

We have run a user study to test the Web Wallet interface. Each subject was told to act as the personal assistant of John Smith. John Smith forwarded 20 emails to the subject in random order and asked her to go to 20 different web sites, log in with his password, and add items to his wish list. Five of the 20 forwarded emails were attacks, with links directing the subject to simulated phishing web sites. Phishing attacks were simulated by connecting to the real web site but changing the address bar to display a different hostname (indicating that the web page was from an unusual source). The results are promising:
(*) The Web Wallet significantly decreased the spoof rate of current phishing attacks from 63% to 7%.
(*) All the simulated phishing attacks in the study were effectively prevented by the Web Wallet as long as it was used. With the modified Web Wallet interface, only one out of 18 attacks successfully tricked a subject, and we already know how to improve the interface to protect the user in that situation.
(*) By disabling direct input into web forms and thus making itself the only way to fill sensitive information into forms at legitimate sites, the Web Wallet successfully trained a majority of the subjects to depend on it for submitting sensitive information.
There are also negative results, which we plan to address in future research:
(*) The subjects completely failed to differentiate the authentic Web Wallet interface from a fake Web Wallet presented by a phishing site. This is another type of phishing attack: instead of mimicking a legitimate site's appearance, the attacker simulates the interface of security software run by users.
(*) It is not easy to completely stop all the subjects from typing sensitive information directly into web forms. Users are familiar with web form input and have a strong tendency to use it. We might be able to address this problem by explaining the benefits of the Web Wallet to users, in order to encourage them to break their habit of using web forms directly.

We see many ways to improve the design of the Web Wallet. For example, it should support not only login with an existing password but also other password-related activities, such as registering a new account and changing the password of an existing account. The Web Wallet should also protect other types of sensitive data: a full-featured Web Wallet should by default protect credit card information, bank account information, and personal identity information as well, and eventually it should be able to protect any personal data defined by P3P. Clear and detailed explanations should be added to the Web Wallet interface to help users better understand its purpose and use it correctly, i.e., open it whenever necessary. We also have to find a solution that prevents the Web Wallet itself from being spoofed; we plan to use image recognition techniques to detect the presence of a fake Web Wallet.
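As a rough illustration of the fill-in flow described above, the following minimal Python sketch shows the decision logic that might run after the user presses the security key. It is hypothetical: every name in it is invented for illustration, and it is not the actual Web Wallet code.

    # Hypothetical sketch of the Web Wallet decision flow described above.
    # Every name here is invented for illustration; this is not the real code.

    from dataclasses import dataclass

    @dataclass
    class SensitiveItem:
        kind: str            # e.g. "password" or "credit_card"
        value: str
        intended_site: str   # hostname the user has associated with this item

    def submit_to_form(site, item):
        print(f"[fill] {item.kind} filled into the form at {site}")

    def warn_and_redirect(intended, current):
        print(f"[warning] You chose {intended}, but this page is {current} "
              f"(possible phishing); opening {intended} instead.")

    def fill_sensitive_field(current_site, item, choose_site):
        """Runs only after the user presses the dedicated security key."""
        # If the current site matches the user's stored intention, just fill it in.
        if current_site == item.intended_site:
            submit_to_form(current_site, item)
            return True
        # Otherwise make the user explicitly indicate where the data should go.
        chosen = choose_site([current_site, item.intended_site])
        if chosen != current_site:
            warn_and_redirect(chosen, current_site)   # safe path to the intended site
            return False
        submit_to_form(current_site, item)
        return True

    # Example: the user is on a look-alike host but intended their bank.
    card = SensitiveItem("credit_card", "4111 (placeholder)", "www.examplebank.com")
    fill_sensitive_field("www.examp1ebank.com", card,
                         choose_site=lambda sites: sites[1])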

PROF. RONALD L. RIVEST
Andrew (1956) and Erna Viterbi Professor of Computer Science and Engineering
http://theory.lcs.mit.edu/~rivest/
http://www.csail.mit.edu/user/1294

Ronald L. Rivest is the Viterbi Professor of Computer Science at MIT and a leader of the Cryptography and Information Security research group within MIT's Computer Science and Artificial Intelligence Laboratory. Professor Rivest is an inventor of the RSA public-key cryptosystem and a founder of RSA Data Security. He has extensive experience in cryptographic design and cryptanalysis, and has published numerous papers in these areas. He has served as a Director of the International Association for Cryptologic Research, the organizing body for the Eurocrypt and Crypto conferences, and of the Financial Cryptography Association. He has also worked extensively in the areas of computer algorithms, machine learning, and VLSI design.


Security of CryptoAPIs
Cryptoprocessors hold the secret keys used to compute and verify PINs, the Personal Identification Numbers that control access to many services in the digital age. This research on the security of Cryptographic Application Programming Interfaces (CryptoAPIs) showed that the algorithms these devices use to encipher data are generally sound; however, the ways in which those algorithms are used generally are not. Fraud at ATMs and point-of-sale systems is often the result of criminals gaining the knowledge and capability to exploit weak technical protection. One significant cause of this problem is the poor design of financial cryptography systems. In the UK, additional issues such as ineffective regulation and a lack of consumer protection have led to periodic waves of fraud.

The research team established systematic ways of both identifying attacks on financial cryptographic systems and repairing them. During the course of the project, researchers also found API attacks on mobile phone systems, as well as on systems used for electronic ticketing. The research team was the first to understand the vulnerabilities of "chip-and-PIN" technology; therefore, when fraud started to emerge, they were able to explain what was happening to both the victims and the media. Without this information, many victims would have had no recourse against their banks. The team found exploitable vulnerabilities in at least one version of almost every cryptographic processor they were able to examine. In the end, this research led to a much better understanding of API security. The attacks identified by the research team compelled widespread product redesign by all the major vendors of cryptographic processors, including large companies such as IBM. Old designs that had accumulated vulnerabilities over the years suddenly had to be cleaned up. The team's findings have led to a thorough redesign of existing product ranges and to a more careful and systematic design of such systems in the future.
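One well-known published example of this class of API attack is the decimalisation-table attack on financial PIN verification. The Python sketch below is a deliberately simplified, hypothetical model: SHA-256 stands in for the keyed block cipher, the trial PIN is passed in the clear rather than in an encrypted PIN block, and offsets are ignored. It is meant only to illustrate how an API that lets the caller choose the decimalisation table leaks PIN digits, not to reproduce any specific vendor's interface.

    # Simplified, hypothetical model of a 3624-style PIN verification API,
    # illustrating the decimalisation-table class of attack.  The flaw shown
    # (the untrusted caller supplies the decimalisation table) is the point;
    # everything else is heavily simplified for readability.

    import hashlib

    PIN_KEY = b"secret key held inside the cryptoprocessor"
    STANDARD_DECTAB = "0123456789012345"       # hex digit -> decimal digit

    def natural_pin(pan, dectab):
        digest = hashlib.sha256(PIN_KEY + pan.encode()).hexdigest()
        return "".join(dectab[int(h, 16)] for h in digest[:4])

    def hsm_verify(pan, trial_pin, dectab):
        """The flawed API call: the caller chooses the decimalisation table."""
        return natural_pin(pan, dectab) == trial_pin

    # Attack phase 1: learn which decimal digits occur in the customer's PIN.
    pan = "4000123412341234"
    digits_present = []
    for d in "0123456789":
        # Map everything to 0, except hex values that normally decimalise to d.
        probe_tab = "".join("1" if STANDARD_DECTAB[h] == d else "0"
                            for h in range(16))
        if not hsm_verify(pan, "0000", probe_tab):
            digits_present.append(d)

    print("digits occurring in the PIN:", digits_present)
    print("actual PIN (for comparison):", natural_pin(pan, STANDARD_DECTAB))

About ten such calls reveal which digits appear in the PIN; a handful of further calls with modified trial PINs locate their positions, far fewer than the thousands of guesses brute force would need.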

Securely Obfuscating Re-Encryption
We present the first positive obfuscation result for a traditional cryptographic functionality. This stands in contrast to well-known impossibility results for general obfuscation and to recent impossibility and improbability results for the obfuscation of many cryptographic functionalities. Whereas other positive obfuscation results in the standard model apply to very simple point functions, this result applies to the significantly more complicated and widely used re-encryption functionality. This functionality takes a ciphertext for a message m encrypted under Alice's public key and transforms it into a ciphertext for the same message m under Bob's public key. To overcome impossibility results and to make the results meaningful for cryptographic functionalities, we use a new definition of obfuscation that incorporates stronger, more security-aware provisions.


A recent line of research in theoretical cryptography aims to understand whether it is possible to obfuscate programs so that a program's code becomes unintelligible while its functionality remains unchanged. A general method for obfuscating programs would lead to the solution of many open problems in cryptography. Unfortunately, Barak et al. show that for many notions of obfuscation, a general program obfuscator does not exist; that is, they exhibit a class of circuits which cannot be obfuscated. A subsequent work of Goldwasser and Kalai shows the impossibility and improbability of obfuscating more natural functionalities. In spite of these negative results for general-purpose obfuscation, there are a few positive obfuscation results for simple functionalities such as point functions. Canetti shows that, under a very strong Diffie-Hellman assumption, point functions can be obfuscated. Further work of Canetti, Micciancio and Reingold, of Wee, and of Dodis and Smith both relaxes the assumptions required for obfuscation and considers other (related) functionalities. Despite these positive results, obfuscators for traditional cryptographic functionalities (such as those that deal with encryption) have remained elusive.

In this work, we present the first obfuscator for a more traditional cryptographic functionality. Namely, we show how to obfuscate a family of circuits implementing a re-encryption functionality. A re-encryption program for Alice and Bob takes a ciphertext for a message m encrypted under Alice's public key and transforms it into a ciphertext for the same message m under Bob's public key. Re-encryption programs have many practical applications, such as the iTunes DRM system, secure distributed file servers, and secure email forwarding. The straightforward way to implement re-encryption is to write a program P which decrypts the input ciphertext using Alice's secret key and then encrypts the resulting message with Bob's public key. When P is run by Alice, this is a good solution. In the practical applications noted above, however, the re-encryption program is executed by a third party. When this is the case, the straightforward implementation has serious security problems, since P's code may reveal Alice's secret key to the third party.

A better solution is to design an obfuscator for the re-encryption program P. That is, we would like a third party who has the re-encryption program to learn no more from it than it would from interaction with a black-box oracle that provides the same functionality. While several re-encryption schemes have been proposed before, none of these prior works satisfies the strong obfuscation requirement informally stated above. The main technical contribution is the construction of a novel re-encryption scheme which meets this strong notion while remaining surprisingly practical. As a side note, in this construction, ciphertexts that are re-encrypted from Alice to Bob cannot be further re-encrypted from Bob to Carol. This may be a limitation in some scenarios, but it is nonetheless sufficient for the important practical applications noted above. The main contribution is an obfuscation for a family of circuits implementing a re-encryption functionality. Moreover, the security of the obfuscation is proved under a (black-box) definition that also guarantees its security for the cryptographic applications mentioned above.
This is in contrast to previous work that provided both negative and positive results under predicate-based definitions that do not provide a meaningful security guarantee when the obfuscated program is used as part of a larger cryptographic system. Intuitively, while the predicate black-box property gives a quantifiable guarantee that some information (namely, predicates) about the program is hidden by the obfuscated circuit, other "non-black-box information" may still leak. Moreover, this leaked information might compromise the security of a cryptographic scheme which uses the obfuscated circuit. The definition of obfuscation that we use both sidesteps impossibility results by considering randomized functionalities, and is more meaningful for cryptographic applications than the definitions of obfuscation used in previous work.
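For concreteness, the sketch below shows the straightforward decrypt-then-encrypt program P described above, using textbook ElGamal with demo-sized parameters chosen only for readability. It is purely illustrative, is not the obfuscated construction from this work, and exists mainly to show why P cannot safely be handed to a third party: the program must embed Alice's secret key.

    # Toy illustration of the naive re-encryption program P described above:
    # decrypt under Alice's key, then encrypt under Bob's key.  Textbook
    # ElGamal with a demo-sized prime; far too small and unpadded for real use.

    import random

    P = 2**127 - 1        # a Mersenne prime, used here only for readability
    G = 3

    def keygen():
        sk = random.randrange(2, P - 1)
        return sk, pow(G, sk, P)                     # (secret key, public key)

    def encrypt(pk, m):
        r = random.randrange(2, P - 1)
        return pow(G, r, P), (m * pow(pk, r, P)) % P  # (c1, c2)

    def decrypt(sk, ct):
        c1, c2 = ct
        return (c2 * pow(c1, P - 1 - sk, P)) % P      # c2 / c1^sk mod P

    alice_sk, alice_pk = keygen()
    bob_sk, bob_pk = keygen()

    def reencrypt_naive(ct):
        """The straightforward program P.  Note that it contains alice_sk, so a
        third party running it learns Alice's secret key; avoiding exactly this
        leakage is what a secure obfuscation of re-encryption is for."""
        return encrypt(bob_pk, decrypt(alice_sk, ct))

    m = 42
    ct_for_alice = encrypt(alice_pk, m)
    ct_for_bob = reencrypt_naive(ct_for_alice)
    assert decrypt(bob_sk, ct_for_bob) == m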

Cascadable and Commutative Cryptography
Performing a "cascade" of sequential cryptographic operations is an intuitive idea used in many applications. For instance, triple-DES (3DES) encryption performs three sequential DES encryption operations under different keys. Public-key encryption operations are often cascaded as well; for example, messages are sequentially encrypted under different public keys both in mix networks for electronic voting and in privacy-enhancing onion routing.

Cascadable cryptosystems are a particular type of multiple encryption system. Multiple encryption systems encrypt data under several keys, possibly under different underlying cryptosystems. In general, a multiple cryptosystem does not necessarily specify the order of decryption operations or define intermediate states of partial decryption; the only requirement is that all keys used to encrypt a message must also be used to decrypt it. In cascadable cryptosystems, one may arbitrarily encrypt an existing ciphertext, or decrypt a ciphertext with a valid key. (Which keys are "valid" for a ciphertext will depend on the specific cryptosystem.) Our research focuses on cascadable cryptosystems involving a single underlying encryption operator, as opposed to several independent encryption operators. Unfortunately, standard formal security definitions do not fully capture adversarial abilities in these settings. Adversaries performing chosen-plaintext or chosen-ciphertext attacks may be able to obtain encryptions or decryptions under a cascade of operations, rather than a single operation. Adversaries may also be able to distinguish ciphertexts by the history of operations that produced them. To address the absence of appropriate security definitions, we formally define cascadable semantic security and introduce a new notion of historical security.

The most basic cascadable cryptosystem has intuitive properties: one must decrypt in the opposite order of the encryption operations. A plaintext encrypted under a sequence of keys x, y, denoted c = e_y(e_x(m)), must be decrypted under the corresponding keys in reverse order, i.e., m = d_x(d_y(c)). For convenience, we omit parentheses and write the result of this sequence of encryption operations on a message m as the string e_y e_x(m). However, other classes of cascadable cryptosystems may allow different decryption orders. One particularly useful and interesting class of cascadable cryptosystems exhibits commutative properties, where one may decrypt with any key that a ciphertext has already been encrypted with: a ciphertext c = e_y e_x(m) may be decrypted as either m = d_y d_x(c) or m = d_x d_y(c). Commutative cryptosystems, such as Pohlig-Hellman and Massey-Omura, have existed for over 25 years and are the basis of many proposed applications. For instance, Shamir, Rivest, and Adleman's classic "Mental Poker" and three-pass key exchange protocols both rely on commutativity. More recently, Agrawal, Evfimievski, and Srikant, and Clifton et al., present data mining and private set intersection applications based on commutativity. To illustrate an application of commutativity, consider the three-pass key exchange protocol, in which Alice and Bob each have respective secret keys a and b, and Alice wishes to share a secret s with Bob.


If Alice and Bob have a cascadable, commutative cryptosystem at their disposal, they can engage in the following protocol:

Example: Three-Pass Key Exchange
(1) Given secret key a and input s, Alice sends Bob e_a(s).
(2) Given secret key b, Bob sends Alice e_b e_a(s) = e_a e_b(s).
(3) Alice sends Bob d_a e_a e_b(s) = e_b(s).
(4) Bob computes d_b e_b(s) = s.

Commutativity is just one of many possible properties exhibited by cascadable cryptosystems. To model these various flavors, our cascadable cryptosystem definition incorporates string rewrite systems. String rewrite systems model cryptographic operations as strings of symbols, capturing the interactions between symbols with rewrite rules. Rewrite systems are a useful and flexible tool that can model a wide variety of cryptosystems.
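For concreteness, the following minimal Python sketch implements a Pohlig-Hellman-style commutative cipher (exponentiation modulo a shared prime) and runs the three-pass exchange above. The parameters are demo-sized and there is no message encoding or padding; it is an illustration of the commutativity property, not the formal framework developed in this work.

    # Minimal sketch of a Pohlig-Hellman-style commutative cipher and the
    # three-pass key exchange described above.  Demo-sized parameters only.

    import math, random

    P = 2**127 - 1                  # shared public prime (a Mersenne prime)

    def keygen():
        while True:
            e = random.randrange(3, P - 1)
            if math.gcd(e, P - 1) == 1:
                return e            # decryption uses e^{-1} mod (P - 1)

    def enc(key, m):
        return pow(m, key, P)                   # e_key(m) = m^key mod P

    def dec(key, c):
        return pow(c, pow(key, -1, P - 1), P)    # c^(key^{-1} mod P-1) mod P

    a, b = keygen(), keygen()       # Alice's and Bob's secret keys
    s = 123456789                   # Alice's secret, encoded as a number < P

    msg1 = enc(a, s)                # (1) Alice -> Bob:  e_a(s)
    msg2 = enc(b, msg1)             # (2) Bob -> Alice:  e_b e_a(s) = e_a e_b(s)
    msg3 = dec(a, msg2)             # (3) Alice -> Bob:  d_a e_a e_b(s) = e_b(s)
    assert dec(b, msg3) == s        # (4) Bob recovers s with d_b

    # Commutativity: the decryption order does not matter.
    c = enc(b, enc(a, s))
    assert dec(a, dec(b, c)) == dec(b, dec(a, c)) == s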

Practical Group Signatures Without Random Oracles
We present the first constant-size group signature scheme that is provably secure in the standard model, without the need for relaxed setup assumptions. The proof follows a new ideal/real-world definition of security for group signatures that encapsulates all the standard properties of unforgeability, anonymity, unlinkability, and exculpability. The security of our constructions requires certain cryptographic assumptions, namely the Strong LRSW, EDH, and Strong SXDH assumptions; evidence for the newly introduced assumptions is provided by proving them secure in the generic group model. The signatures are very short (independent of the number of group members), costing roughly 35 percent more bits than the shortest known group signatures *with* random oracles, due to Boneh, Boyen, and Shacham.

DR. KAREN R. SOLLINS
Principal Research Scientist, Computer Science and Artificial Intelligence Laboratory
http://www.csail.mit.edu/user/1511

The general area of Dr. Sollins's research is computer networking, with many years of focus on network support for distributed applications. In that arena her work has concentrated on naming, security, and information infrastructure, with a particular focus on the design implications of the need for extreme longevity in such systems. In addition to publishing in these areas, she has worked on protocol design and has been deeply involved in the standards process of the Internet Engineering Task Force, as appropriate to her work. Her research funding sources include DARPA, NSF, Intel, and Cisco. Her current interests lie in network security and overall network architecture. Two specific examples are a current effort to address the problem of phishing in the context of the MIT-based consortium, the Communications Futures Program, and the Knowledge Plane, an effort to bring self-knowledge and self-diagnosis to the overall management of an Internet-scale network. More broadly, and in conjunction with one of her Ph.D. students, she is also working on questions about the layering that is one of the core elements of the current design of the Internet.


She supports and works with both undergraduate and graduate students on a regular basis, teaches intermittently, and has been an undergraduate academic advisor for many years. She regularly participates in journal paper reviewing, program committee reviewing, and NSF panel reviewing, as well as various other professional service activities. She is a member of the ACM, IEEE, and AAAS.

Designing for Internet Management: The Knowledge Plane
The over-arching hypothesis of this work is that the network of the future must include a network management plane that aids and assists humans, and that, for many of the more routine and increasingly burdensome or challenging tasks, either enhances or replaces them. The contribution of this work is a realistic and improved organization of the intelligence (applications) that comprises network management. The insight is that a common approach, driven by the design principles of the Internet in conjunction with more specialized constraints, can yield productive organizational designs for such applications. In its most basic form, the problem we are addressing is the organization of the reasoning engines that are key to the intelligence required to make the network increasingly self-managed.

The business of managing networks has become increasingly difficult, as network management is pushed into often very personal (home or body-net) environments and as network management issues become increasingly global, crossing domains of responsibility. Considering the architecture of the current Internet, we find that scale, local autonomy, distribution, and a lack of global knowledge (exacerbated by the underlying end-to-end philosophy that discourages anything more than minimal functionality inside the net) place challenges on global network management. Adding the recognition that "the net" has become critically central to the functioning of society, the economy, health, and governmental structures, we find a need for a common approach to network management that spans the network and is designed to meet these extremely diverse needs.

In the end (or the beginning), users are the drivers of the need for communication; therefore, they will also be the drivers of how effectively the resources they need perform and are managed. Consider, for example, a user on a laptop who tries to browse the web and finds that a page will not load. That user must be able to contact an agent to begin a diagnosis of the problem, and that diagnosis must start at home. The first question that must be answered is whether or not the laptop is connected to a network. Since that question must be answerable even when the answer is that it is not connected, there must be at least a small representative of the network management capability residing locally. The objective of this research is to gain insight into not only the local specialist, but also the organization of sets of such agents that together can address larger, more complex questions whose answers may not be as simple.

The Knowledge Plane (KP) was proposed by Clark et al. as a new dimension of a network architecture, contrasting with the data and control planes; its purpose is to provide knowledge and expertise that enable the network to be self-monitoring, self-analyzing, self-diagnosing, and self-maintaining or self-improving. To achieve these goals a KP brings together widely distributed data collection, wide availability of that data, and sophisticated and adaptive processing (KP functions), within a unifying structure that brings order, meets the policy, scaling, and functional requirements of a global network, and, ideally, creates synergy and exploits commonality of design patterns among the many possible KP functions.
To design and build a system of this size and scope, we identify the following set of design requirements: scalability, to address the size and scope of the Internet; efficiency, to provide responsiveness to requests made of the KP; robustness, to enable the KP to continue to function as well as possible even under incorrect or incomplete behavior of the network itself; non-intrusiveness, to keep the KP from impinging significantly on the resource usage intended for the customers of the network; and local control, to support local networks and resources in their needs for privacy and other forms of local control, while enabling them to cooperate for mutual benefit in more effective network management.

We identify three key lower-level building blocks as a starting point: an information plane, structuring abstractions, and an ontology appropriate for reasoning. The information plane is a repository both for information gathered through measuring, monitoring, and the like, and for knowledge learned by inference and reasoning. In addition, the information plane is tasked with supporting sharing and partial information, including aggregation and dissemination, while respecting the globally decentralized nature of the Internet. The goal of structuring is to organize the intelligence or functions required for the self-management capabilities of a global-scale network. The hypothesis is that a multi-level strategy that combines the strengths of local or specialized experts with higher-level oversight, analysis, and synthesis provides both effective partitioning of functionality and coordination among the components. We identify four key types of constraints necessary for organizing such functional components:
(*) Function and use constraints: In order to achieve a particular function, the application may be conceived as a set of interacting components collocated with the knowledge necessary for success. Parts of the knowledge and of the functional subcomponents may be distributed, but constraints arising from the knowledge itself, from the functional subcomponents, and from how they must interact will necessarily frame the shape of the (possibly) distributed, coordinated function or network management tool.
(*) "Network" location: At least in some cases, often for purely technical reasons, the management of a network may by necessity be kept local to that network, where locality may be defined by a number of metrics. The simplest of these is a topological constraint; more challenging ones include latency, bandwidth, and other approaches based on network metrics. In this work we will begin with the simplest, while recognizing that there are already services available that allow for some of the more dynamic and more challenging metrics.
(*) Physical location: We separate this from the previous category because issues of geographic location, or perhaps of administrative ownership boundaries, are generally orthogonal to the more performance-based network location constraints.
(*) Policy and other external constraints: These constraints fall into three major categories: security, pricing, and other non-numeric incentives, such as social good, selfishness, and other social preferences. In this last area, one of the most challenging aspects is to understand and design for the competitive and generally non-cooperative nature of the society into which these networks are placed. At the same time, because these networks provide shared resources, they must be managed for the benefit of such a set of competitors. Thus, a key question is how to design the management applications to allow privacy, security, regulation, and other aspects of competition to flourish, while finding the common ground and the ability to cooperate.
This will require understanding not only the points at which cooperation is necessary, but also, as far as possible, the positive incentives that will encourage cooperation. Here we depend on the field of economics to examine and explore possible approaches.
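To make the "local specialist" idea from the laptop example above concrete, the following hypothetical Python sketch shows a minimal local diagnostic agent that answers the first question (is this machine connected at all?) locally and escalates to wider KP agents only when the local checks pass. The specific checks, addresses, and escalation messages are invented for illustration and are not part of the Knowledge Plane design itself.

    # Hypothetical sketch of a minimal "local specialist" agent: diagnose what
    # can be diagnosed on this machine, escalate the rest.  Illustrative only.

    import socket

    def have_local_interface():
        """Very rough check that some non-loopback address is configured."""
        try:
            addr = socket.gethostbyname(socket.gethostname())
            return not addr.startswith("127.")
        except OSError:
            return False

    def can_reach(host, port=53, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def diagnose(url_host):
        if not have_local_interface():
            return "local: no network interface is up; the problem is on this machine"
        if not can_reach("8.8.8.8"):   # any well-known, reachable address would do
            return "local: interface up but no wider connectivity; ask the access network's KP agent"
        if not can_reach(url_host, port=80):
            return f"remote: the network looks fine from here; escalate '{url_host}' to a wider KP agent"
        return "no fault found locally; the problem is likely in the application or at the far end"

    print(diagnose("www.example.org"))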


Work is proceeding through repeated cycles of building or refining a prototype, applying it to increasingly challenging network management functions, and evaluating its effectiveness, both in terms of the specific domain or function and in terms of the generality and extensibility of the framework for application to increasingly challenging problems. Future intended applications include extensions to the intrusion detection work, root-cause fault diagnosis, DNS failure diagnosis, path performance, and routing.