All the talks took place at the premises of FORTH.
“Sound, precise and efficient static race detection for multi-threaded programs” – Abstract:
Multi-threaded programming is increasingly relevant due to the growing prevalence of multi-core processors. Unfortunately, the non-determinism of parallel execution makes mistakes easy to introduce and hard to detect, which is why writing concurrent programs is considered very difficult. A data race, which occurs when two threads access the same memory location without synchronization, is a common concurrency error with potentially disastrous consequences.
We present Locksmith, a tool for automatically finding data races in multi-threaded C programs by analyzing their source code. Locksmith uses a collection of static analysis techniques to reason about program properties, including a novel effect system to compute memory locations that are shared between threads, a system for inferring "guarded-by" correlations between locks and memory locations, and a novel analysis of data structures using existential types. We present the main analyses in detail and give formal proofs to support their soundness. We discuss the implementation of each analysis in Locksmith, present the problems that arose when extending it to the full C programming language, and discuss some alternative solutions. We provide extensive measurements for the precision and performance of each analysis and compare alternative techniques to find the best combination.
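To make the failure mode concrete, here is a minimal C example (not taken from the talk) of the kind of race Locksmith is designed to flag; the locked variant illustrates the "guarded-by" correlation between a lock and a memory location that the tool infers.

    #include <pthread.h>
    #include <stdio.h>

    int balance = 100;                      /* shared between threads */
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    /* RACY: unsynchronized read-modify-write on `balance`. */
    void *withdraw_racy(void *arg) {
        (void)arg;
        balance = balance - 10;             /* threads may interleave here */
        return NULL;
    }

    /* SAFE: `m` "guards" balance, the correlation a race detector infers. */
    void *withdraw_safe(void *arg) {
        (void)arg;
        pthread_mutex_lock(&m);
        balance = balance - 10;
        pthread_mutex_unlock(&m);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, withdraw_racy, NULL);
        pthread_create(&t2, NULL, withdraw_racy, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("balance = %d\n", balance);  /* may print 80 or 90 */
        return 0;
    }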
Short bio:
Polyvios Pratikakis is a CNRS postdoctoral researcher at Verimag, Grenoble. He graduated from the Department of Electrical and Computer Engineering of the National Technical University of Athens in 2002. He received a Master's degree in Computer Science from the University of Maryland in 2004, and a Ph.D. in Computer Science in 2008. His research interests span programming languages, type systems, static analysis, concurrent software and mechanized proofs. He received the University of Maryland Dean's Fellowship award for student research in 2006.
“Intrusion Detection in Ubiquitous Computing Technologies” – Abstract:
Ubiquitous computing technologies have received great attention in recent years, mainly due to the evolution of wireless networking and mobile computing hardware. However, ubiquitous computing technologies, including Wireless Ad hoc Networks and RFID (Radio Frequency Identification) systems, suffer from a number of inherent vulnerabilities with grave security implications. In this presentation we will discuss possible research directions for intrusion detection and response in ubiquitous computing technologies, with emphasis on wireless ad hoc networks and RFID systems. The focus of this research is to detect and respond to attacks at an early stage.
Aikaterini Mitrokotsa is a postdoctoral researcher at the Faculty of Electrical Engineering, Mathematics and Computer Science of Delft University of Technology in the Netherlands. Formerly, she held a position as a visiting assistant professor in the Department of Computer Science at the Free University (Vrije Universiteit) in Amsterdam. In 2007, she received a Ph.D. in Computer Science from the University of Piraeus.
Dr. Mitrokotsa's main research interests lie in the areas of network security, intrusion detection systems, denial-of-service attacks, and neurocomputing and machine-learning applications to the security of RFID, fixed, wireless ad hoc, and sensor networks. She has been active in both European and national research projects, and was recently awarded the Rubicon Research Grant by the Netherlands Organization for Scientific Research (NWO).
“Key management algorithms and distributed heuristics for optimizing the design of a secure group communications framework and for modeling routing dissemination” – Abstract:
In the first part of this talk, I will go over our methods for establishing a secure group communications framework for resource-constrained, infrastructure-less networks, optimizing a number of metrics of interest from the point of view of Key Management (KM). Our work focuses on the design of efficient, robust group KM schemes, capable of distributed operation where key infrastructure components are absent or inaccessible, that accomplish the following: (a) better performance than that of existing schemes for similar environments, and (b) successful handling of network dynamics and failures in networks with large numbers of nodes. We present algorithms and protocols for handling membership changes, disruptions, and failures with low overhead for both initial key establishment and steady state. To reduce the suboptimal performance of protocols executed without topological considerations, underlying routing is integrated into the design through topology-oriented communication schedules, built with novel lightweight heuristics. In the second part of the talk, we focus on improving the expected user performance metrics associated with flooding link-state routing information. We compare existing schemes, such as that of Multi-Point Relays, with our new approach, which is based on the use of Connected Dominating Sets (CDSs). We develop asymptotic analytical bounds for the approaches under study and conduct a comparative performance evaluation to determine under which conditions one approach is more suitable than another. We introduce only the hexagon-based representative of our models, CDS-HEX, as it provides the lowest routing overhead among other properties. We describe our heuristics for generating CDS-HEX in a distributed fashion from a random placement of nodes, so that it simulates the described analytical model as closely as possible.
Maria Striki has been a senior research scientist in the Advanced Technologies department at Telcordia Technologies Inc., New Jersey, USA, since February 2007. She received her Ph.D. and M.Sc. degrees in 2006 and 2004, respectively, from the Department of Electrical and Computer Engineering at the University of Maryland, College Park, USA. She received her Diploma, with honors, from the National Technical University of Athens (NTUA), Greece, in July 1999.
Her research interests include wireless communications and wireless, mobile, ad hoc, and cellular networks; algorithms and optimization targeting distributed networks; routing; cross-layer design; and security (with emphasis on the design of key generation, distribution, authentication, and secure routing protocols). Her Ph.D. research focused on the design and optimization of algorithms and techniques for secure communications in resource-constrained networks. At Telcordia Technologies, she has participated in a number of government and commercial projects on: (1) the design and implementation of optimized routing protocols under Component Based Routing (CBR) theory, (2) systems design, and (3) the design and incorporation of the IEC function in 802.21 into the existing NS-2-based framework for 802.16/11.
She received the second Award in Telecommunications from Ericsson for her undergraduate work on Location Area Planning for Cellular Networks in the summer of 2000, and is a recipient of the Greek National Scholarship Foundation Award. She has published in highly rated IEEE conferences, has been involved in the organization of conferences, and has served as a reviewer for a number of IEEE conferences and journals. She is an IEEE member and a member of the Technical Chamber of Greece.
“Insider Threat Detection: Host and Network Monitoring Techniques” – Abstract:
The problem of insider threat is one of the most vexing problems in computer security research. We will present an overview of an ongoing collaborative project aimed at understanding human behavior and the insider threat. The organizations involved include Carnegie Mellon University, Columbia University, Cornell University, Dartmouth College, Indiana University, MITRE Corporation, Purdue University, and the RAND Corporation. Two primary objectives serve to focus and integrate the proposed research activities: technology exploration and environmental constraints. The first objective addresses the need for base technologies to monitor insider behavior, coupled with behavioral descriptions of suspicious, inappropriate, or illegitimate events or activities. The second objective addresses the need for a methodological framework for handling incipient and actual insider behavior once it is recognized. In this talk we describe some of the ongoing research at Columbia that aims to develop technology and monitoring functions providing a lightweight, robust, and scalable event-processing infrastructure that can be deployed in a range of at-risk enterprises (e.g., the U.S. military, banks, chemical plants and refineries, and border and port security systems). Our work involves the implementation of host-based sensors that detect unusual user behavior indicative of insider attack. We present an overview of prior work on masquerade detection and our most recent work to incorporate context and infer intent in order to more accurately identify potential insider attacks. We also detail our current work on network-based decoy traffic and the detection of misuse of honeytokens: purposely placed, realistic-looking decoy data designed to entice traitors into revealing their nefarious actions.
Salvatore J. Stolfo is Professor of Computer Science at Columbia University. He received his Ph.D. from NYU Courant Institute in 1979 and has been on the faculty of Columbia ever since. (See http://www.cs.columbia.edu/~sal). He has published well over 160 formal scientific papers in the areas of parallel computing, AI knowledge-based systems, data mining, computer security, and intrusion and anomaly detection systems. His most recent research has been devoted to distributed data mining systems with applications to fraud and intrusion detection in network information systems. (See http://www.cs.columbia.edu/ids for complete details.) He has been awarded 15 patents in the areas of parallel computing and database inference, internet privacy, intrusion detection, and computer security. He served as the Chairman of the Computer Science Department and the Director of the Center for Advanced Technology at Columbia University. He recently co-chaired several workshops in data mining, intrusion detection, and Digital Government, co-chaired the program committee of the ACM SIGKDD 2000 Conference, and organized two recent workshops sponsored by NSF, ARO, and the Department of the Treasury in the area of computer security and insider attack threats. He is a member of three editorial boards and a reviewer for many of the most prestigious journals in computer security, as well as a member of several program committees for the top conferences in the area. He was also an expert witness in the DOJ versus Microsoft "browser wars" case. He was a member of the Congressional Internet Caucus Advisory Committee and of the Visa 3D Secure Authenticated Internet Payments Vendor Program. He was a consultant to the CTO of Citicorp for several years, and helped organize the Financial Services Technology Consortium. He is a board member and treasurer of a private organization, Professionals for Cyber Defense. Recently, he has participated in a DARPA ISAT study, served as a consultant to the director of the DARPA IPTO office as a member of the DARPA Futures Panel, and is a member of the National Academies National Research Council / Naval Studies Board (NSB) Committee on Information Assurance for Network-Centric Naval Forces.
“Distributed Brokering System for Private Virtual Cluster” – Abstract:
Private Virtual Cluster (PVC) is a middleware project targeting the “Instant Grid”. It can be defined as a combination of Grid, P2P, and VPN approaches that allows simple deployment of distributed applications across different administrative domains. It is currently under development. The target system will be fault-tolerant, scalable, autonomous, and dynamic. PVC is in effect a Peer-to-Peer (P2P) network serving Grid/parallel applications. It relies on two different entities: a software peer and a resource broker. After a general overview of PVC, this presentation will show the design of the distributed brokering system, which performs a set of operations such as peer enrollment in the virtual network, peer lookup, and several others. Its design is based on a Distributed Hash Table (DHT), and an overview of the functionality of DHTs will also be given. The DHT is used as a distributed data structure, and a separate component is delegated to interact with the actual peer. A reference Java implementation of the broker is currently being debugged. The test phase will be performed on Grid5000. Finally, immediate future work includes the addition of a resource reservation system.
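As a rough illustration of how a DHT serves as a distributed data structure (the abstract does not give the broker's actual scheme, so all names here are invented), the sketch below shows the core lookup rule of consistent hashing: a key is owned by the first node whose identifier follows the key's hash on the ring. Real DHTs route to this successor in O(log n) overlay hops rather than by the linear scan used here.

    #include <stdint.h>

    #define RING 64
    static uint32_t node_id[RING];  /* sorted node identifiers on the hash ring */
    static int nodes;               /* number of nodes currently alive          */

    /* Return the index of the node responsible for key hash h: the
     * first node whose identifier is >= h, wrapping around the ring. */
    int successor(uint32_t h) {
        for (int i = 0; i < nodes; i++)
            if (node_id[i] >= h)
                return i;
        return 0;                   /* wrap around the ring */
    }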
Roberto Podestà was born in Genoa in 1977. He received a Ph.D. in Computer Science and Communication Technology from the University of Genoa in 2007. At the same university he also served as a research assistant, lecturer, and associate researcher. During this period he cooperated with the GRIDS Lab of the University of Melbourne (Australia) and with the industrial research divisions of enterprises such as Telecom Italia and Marconi-Ericsson. In October 2007 he joined the Grand-Large project at the INRIA section in Orsay (France) as a research fellow supported by the European Research Consortium for Informatics and Mathematics (ERCIM). His main research interests include distributed and heterogeneous computing systems and technologies, in particular Peer-to-Peer architectures, Grid computing, and Autonomic computing.
“Email Information Flow in Large-Scale Enterprises” – Abstract:
In this talk, I will present analysis results of email communications in a large-scale enterprise network. Our study first focuses on understanding the social graph induced by email communications between individual users. Specifically, we examine how email communication flows correlate with user profiles and the organization structure, and how outside information penetrates the enterprise. We then concentrate on understanding the information-processing load imposed on users and the strategies users apply in email triage. To the best of our knowledge, this is the first measurement study of email communications in a global enterprise network, comprising email data from over 100,000 employees spread across multiple continents. Our analysis results inform the design of network applications that take into account typical user behaviour in social interactions and solitary information processing. Our large-scale dataset further allows us to examine the validity of several hypotheses suggested by social network theory.
Thomas Karagiannis is a researcher with the Systems and Networking group at Microsoft Research, Cambridge, UK. His research interests include Internet measurements and monitoring, analysis and modelling of Internet application traffic dynamics, social networking, peer-to-peer networks, and home networks. He received his Ph.D. from the Computer Science department of the University of California, Riverside, under the supervision of Associate Professor Michalis Faloutsos. Before joining Microsoft Research, Thomas was also with Intel Research in Cambridge, UK, and the Cooperative Association for Internet Data Analysis (CAIDA). He received his B.S. from the department of Applied Informatics of the University of Macedonia in Thessaloniki, Greece.
“HomeMaestro: A framework for detecting and correcting contention in Home Networks” – Abstract:
In this talk, I will describe HomeMaestro, a distributed system for monitoring and instrumentation of home networks. By performing extensive measurements at the host level, HomeMaestro infers application network requirements and identifies network-related problems. By sharing and correlating information across hosts in the home network, HomeMaestro automatically detects and resolves contention over network resources among applications, based on predefined policies. Finally, our system implements a distributed virtual queue to enforce those policies by prioritizing applications without additional assistance from network equipment such as routers or access points. In the talk, I will outline the challenges in managing home networks, describe the design choices and the architecture of our system, and highlight the performance of HomeMaestro components in typical home scenarios. (Joint work with Elias Athanasopoulos, Thomas Karagiannis, and Peter Key.)
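A minimal sketch of how a host-side virtual queue can enforce a per-application policy without router support, assuming a token-bucket style rate limiter; the structure and rates are invented for illustration and are not HomeMaestro's actual mechanism.

    /* Hypothetical per-application virtual queue: packets are admitted
     * only while tokens are available; tokens refill at the policy rate,
     * so a bulk transfer yields to an interactive flow automatically. */
    struct vqueue {
        double rate;    /* bytes per second granted by policy */
        double burst;   /* maximum token accumulation (bytes) */
        double tokens;  /* currently available bytes          */
        double last;    /* timestamp of the previous refill   */
    };

    int may_send(struct vqueue *q, double now, double pkt_bytes) {
        q->tokens += (now - q->last) * q->rate;   /* refill since last call */
        if (q->tokens > q->burst) q->tokens = q->burst;
        q->last = now;
        if (q->tokens < pkt_bytes) return 0;      /* defer: yield bandwidth */
        q->tokens -= pkt_bytes;                   /* admit the packet       */
        return 1;
    }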
Christos is a researcher in the Systems and Networking Group at Microsoft Research, Cambridge, UK. He holds a Ph.D. from the Georgia Institute of Technology, Atlanta, GA, USA, and a bachelor's degree from the University of Patras, Greece, both in computer science. He is interested in content distribution networks, peer-to-peer technologies, analysis and modelling of complex communication networks, and wireless mesh networking. Christos is a member of IEEE and ACM.
“Ten years in the evolution of the Internet ecosystem” – Abstract:
Our goal is to understand the evolution of the Autonomous System (AS) ecosystem over the last decade. Instead of focusing on abstract topological properties, we classify ASes into a number of "species" depending on their function and business type. Further, we consider the semantics of inter-AS links, in terms of transit versus peering relations. After an exponential increase phase until 2001, the Internet now grows linearly in terms of both ASes and inter-AS links. The growth is mostly due to enterprise networks and content/access providers at the periphery of the Internet. The average path length remains almost constant, mostly due to the increasing multihoming degree of transit and content/access providers. In recent years, enterprise networks prefer to connect to small transit providers, while content/access providers connect equally to both large and small transit providers. The AS species differ significantly from each other with respect to their rewiring activity; content/access providers are the most active. A few large transit providers act as "attractors" or "repellers" of customers. For many providers, strong attractiveness precedes strong repulsiveness by 3-9 months. Finally, in terms of regional growth, we find that the AS ecosystem is now larger and more dynamic in Europe than in North America.
Dr. Constantine Dovrolis is an Associate Professor at the College of Computing of the Georgia Institute of Technology. He received the Computer Engineering degree from the Technical University of Crete (Greece) in 1995, the M.S. degree from the University of Rochester in 1996, and the Ph.D. degree from the University of Wisconsin-Madison in 2000. He joined Georgia Tech in August 2002, after serving on the faculty of the University of Delaware for about two years. He has held visiting positions at Thomson Research in Paris, Simula in Oslo, and FORTH in Crete. His current research focuses primarily on the evolution of the Internet, intelligent route control mechanisms and performance-aware routing, automated performance problem diagnosis, and applications of network measurements.
“European Internet Situation Awareness: The Global View” – Abstract:
Our vision is to generate multidisciplinary, continuous situation awareness in a Private-Public Partnership, providing all relevant stakeholders with their individual global view of the European Internet. To generate this "European Internet Situation Awareness", input from all relevant sources has to be used, such as data from different technical sensor networks, general global data, statistics, and information from the partners. All this information will be analyzed, and reports will be generated for the different stakeholders. This puts the different stakeholders in a position to make better decisions.
Norbert Pohlmann studied Electrical Engineering, specializing in Computer Science, and wrote his doctoral thesis on "Possibilities and Limitations of Firewall Systems" (1997-2001). From 1985 to 1988 he was employed as a research engineer in the Telematics Laboratory at Aachen University of Applied Sciences and worked on numerous research and development projects relating to the integration of security mechanisms into communications systems. In 1988 Norbert Pohlmann founded, together with Professor Dr. Christoph Ruland, the KryptoKom system house in Aachen. In July 1999, KryptoKom GmbH merged with Utimaco Safeware AG, Germany. Between 1999 and 2003 Norbert Pohlmann was a member of the Utimaco Safeware AG management board. Since September 1st, 2003, Norbert Pohlmann has been a Professor in the Computer Science Department, and since January 2005 he has been director of the Institute for Internet Security at the University of Applied Sciences Gelsenkirchen. As an expert in IT security, Norbert Pohlmann has played an active role in cryptography and secure applications since 1985. He is a founding member of the TeleTrusT association, which is devoted to the establishment of trusted IT networks, and has been a member of its board since 1994 and chairman of the board since April 1998. Numerous publications, lectures and seminars on the subject of information security testify to his expertise and commitment to this subject. In 1997 Norbert Pohlmann won the city of Aachen's Prize for Innovation and Technology. Norbert Pohlmann is one of the initiators of "Information Security Solutions Europe" (ISSE) and the chairman of the ISSE conference's program committee.
“Fast Flux Service Networks: Dynamics and Roles in Hosting Online Scam” – Abstract:
We studied the dynamics of fast flux service networks and their role in online scam hosting infrastructures. By monitoring changes in the DNS records of over 350 distinct fast flux domains collected from URLs in 115,000 spam emails at a large spam sinkhole, we measure the rate of change of DNS records, the accumulation of new distinct IPs in the hosting infrastructure, and the location of change, both for individual domains and across 21 different scam campaigns. We find that fast flux networks redirect clients at very different rates, and at different locations in the DNS hierarchy, than conventional load-balanced Web sites. We also find that the IP addresses in the fast flux infrastructure itself change rapidly, that this infrastructure is shared extensively across scam campaigns, and that some of these IP addresses are also used to send spam. Finally, we compared IP addresses in fast-flux infrastructure and flux domains against various blacklists (i.e., SBL, XBL/PBL, and URIBL) and found that nearly one-third of scam sites were not listed in the URL blacklist at the time they were hosting scams. We also observed many hosting sites and nameservers that were listed in both the SBL and XBL both before and after we observed fast-flux activity; these observations lend insight into both the responsiveness of existing blacklists and the life cycles of fast-flux nodes.
Maria Konte is currently an MS student, starting the PhD program this Fall, in the College of Computing at Georgia Tech. She received her diploma in Production Engineering and Management from the Technical University of Crete and her MS in Systems Engineering from Boston University, in 2003 and 2005 respectively. Her research interests are in the area of computer networks, with an emphasis on network security.
“Preventing memory error exploits with WIT” – Abstract:
Attacks often exploit memory errors to gain control over the execution of vulnerable programs. These attacks remain a serious problem despite previous research on techniques to prevent them. We believe there are two reasons for this: techniques that are used to prevent these attacks fail to prevent many attacks; and most techniques are not used because they have high overhead or they require non-trivial changes to the source code or the language runtime. We present Write Integrity Testing (WIT), a new technique that provides practical protection from these attacks. We discuss an efficient implementation with optimizations to reduce space and time overhead. This implementation can be used in practice because it compiles C and C++ programs without modifications, it has high coverage with no false positives, and it has low overhead. WIT's average runtime overhead is only 7% across a set of CPU-intensive benchmarks and is negligible when I/O is the bottleneck.
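The core idea can be pictured as a color check guarding every write. The sketch below is an invented simplification (in the real system the guard is inserted by the compiler and the colors come from a static points-to analysis); it only conveys the flavor of the run-time check.

    #include <stdint.h>
    #include <stdlib.h>

    #define SLOTS (1u << 20)
    static uint8_t color[SLOTS];   /* one color per 8-byte memory slot */

    /* The static analysis assigns each write instruction the color of
     * the objects it may legitimately modify; at run time the guard
     * aborts before an out-of-bounds or dangling write corrupts state. */
    static inline void checked_write64(uint64_t *p, uint64_t v, uint8_t c) {
        if (color[((uintptr_t)p >> 3) % SLOTS] != c)
            abort();               /* fail-stop: write integrity violated */
        *p = v;
    }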
Periklis Akritidis is a second-year PhD student at the University of Cambridge, in the Systems Research Group, under the supervision of Dr. Steven Hand. This is joint work with Cristian Cadar, Costin Raiciu, Manuel Costa and Miguel Castro from Microsoft Research.
“User-Centric Data Management” – Abstract:
In our increasingly interconnected and interdependent information society, the Quality of Service (QoS) and Quality of Data (QoD) experienced by users determine the success or failure of any mission-critical, data-driven application. In this talk, we illustrate the trade-offs between QoS and QoD, and present algorithms to control this trade-off while providing quality guarantees to users. We use three different data management environments as our domain examples: (1) dynamic, database-driven web sites, (2) sensor networks, and (3) mission-critical, real-time database systems. We show that in all cases, users can benefit greatly by controlling the trade-off between QoS and QoD. Finally, we present Quality Contracts, a unifying framework for specifying user preferences over QoS and QoD.
Dr. Alexandros Labrinidis is an Assistant Professor of Computer Science at the University of Pittsburgh, and an adjunct Assistant Professor of Computer Science at Carnegie Mellon University. He received his PhD degree from the University of Maryland, College Park, in 2002, and MSc and BSc degrees from the University of Crete, Greece, in 1995 and 1993, respectively. He is currently the co-director of the Advanced Data Management Technologies Laboratory at the University of Pittsburgh. His research interests include user-centric data management, web-databases (with emphasis on Quality of Data and Quality of Service), data stream management systems, and scientific data management. Dr. Labrinidis is currently the Editor-in-Chief of ACM SIGMOD Record (since 2007). Since 2002, he has been the program committee co-chair for 5 workshops/conferences and has served on over 30 program committees of international conferences and workshops. More information can be found at http://www.cs.pitt.edu/~labrinid/
“Shared Atomic Objects” – Abstract:
The current generation of processors provides increased parallelism through shared memory rather than increased clock speed. Programming shared memory machines, as well as developing applications to run on top of them, is nowadays highly desirable, but it is a notoriously difficult task. The design of efficient shared objects that can be accessed simultaneously by many processes is a way to overcome this problem. Since distributed objects enable communication between the processes of a distributed system, they simplify the task of programming such systems, and for this reason they usually lie at the heart of most distributed algorithms. The design of shared objects that are efficient enough to be practical is therefore an important research direction in distributed computing. This presentation focuses on recent research results on the design and analysis of fundamental shared atomic objects and distributed data structures. A collection of algorithms and lower bounds on the complexity of implementing such objects will be discussed.
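For a flavor of what a shared atomic object looks like in practice, here is the simplest possible example: a wait-free fetch-and-increment counter built on C11 atomics. The talk's results concern far richer objects and the inherent complexity of implementing them; this sketch is only illustrative.

    #include <stdatomic.h>

    /* A shared counter: any number of threads may call fetch_and_inc
     * concurrently; each call completes in one atomic step (wait-free). */
    static atomic_long counter;

    long fetch_and_inc(void) {
        return atomic_fetch_add(&counter, 1);
    }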
Panagiota Fatourou is an assistant professor at the Department of Computer Science of the University of Ioannina, Greece. She received her B.Sc. degree from the Department of Computer Science of the University of Crete, Greece, in 1995, and her Ph.D. from the Department of Computer Engineering and Informatics of the University of Patras, Greece, in 1999. She has spent time as a postdoctoral fellow at the Max-Planck-Institut fuer Informatik, Germany, and at the University of Toronto, Canada. Her research interests include the design and analysis of algorithms (with emphasis on distributed algorithms), distributed computing, and parallel computing.
Dr. Dimitrios Katsaros, University of Thessaly
“CDNs Content Outsourcing via Generalized Communities” – Abstract:
Content Distribution Networks (CDNs) balance costs and quality in services related to content delivery. Devising an efficient content outsourcing policy is crucial since, based on such policies, CDN providers can deliver client-tailored content, improve performance, and achieve significant economic gains. Earlier content outsourcing approaches often prove ineffective because they drive prefetching decisions by assuming knowledge of content popularity statistics, which are not always available and are extremely volatile. This talk will describe a self-adaptive technique, within a CDN framework, in which outsourced content is identified with no a-priori knowledge of (earlier) request statistics. This is achieved through a structure-based approach that identifies coherent clusters of "correlated" Web server content objects, the so-called Web page communities. These communities are the core outsourcing unit. The talk will provide details about the algorithmic identification of these communities, along with simulation experiments attesting that this technique is robust and effective in reducing user-perceived latency compared with competing approaches, namely two community-based approaches, Web caching, and no CDN at all.
Dimitrios Katsaros was born in Thetidio (Farsala), Greece, in 1974. He received a BSc and a Ph.D. in Computer Science, both from the Aristotle University of Thessaloniki (AUTH), in 1997 and 2004, respectively. He spent two years (2005-2007) as a postdoc at AUTH; since 2005 he has been a contracted lecturer at the Department of Computer and Communication Engineering at the University of Thessaly, teaching courses on Programming Languages (Java, C++), Mobile and Pervasive Computing, and Web Information Retrieval. His research interests include the Web and Internet, social network analysis, mobile and pervasive computing, mobile ad hoc and wireless sensor networks, and information retrieval.
Dr. Dimitrios Tsoumakos, Computing Systems Lab, NTUA
“Search, Replication and Grouping for Unstructured P2P Networks” – Abstract:
Peer-to-Peer (P2P) networks are gaining increasing attention from both the scientific and the large Internet user community. Popular applications utilizing this new technology offer many attractive features to a growing number of users. P2P systems have two basic functions: content search and dissemination. Search (or lookup) protocols define how participants locate remotely maintained resources. In data dissemination, users transmit or receive content from single or multiple sites in the network. P2P applications traditionally operate in purely decentralized and highly dynamic environments. Unstructured systems represent a particularly interesting class of P2P networks: peers form an overlay in an ad-hoc manner, without any guarantees with respect to lookup performance or content availability. Resources are locally maintained, while participants have limited knowledge, usually confined to their immediate neighborhood in the overlay. My work aims at providing effective and bandwidth-efficient searching and data sharing. We present a suite of algorithms which provide peers in unstructured P2P overlays with the state necessary to efficiently locate, disseminate, and replicate objects identified by a unique ID. The Adaptive Probabilistic Search (APS) scheme utilizes directed walkers to forward queries on a hop-by-hop basis; peers store success probabilities for each of their neighbors in order to route efficiently towards object holders. AGNO performs implicit grouping of peers according to demand and utilizes the state maintained by APS to route messages from content holders towards interested peers, without requiring any subscription process. Finally, the Adaptive Probabilistic REplication (APRE) scheme expands on the state that AGNO builds in order to replicate content inside query-intensive areas, according to demand.
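A rough sketch of the APS forwarding rule described above, with all data structures and constants invented for illustration: each peer rates its neighbors, forwards a walker toward the best-rated one, and feeds lookup outcomes back into the ratings.

    #define NEIGHBORS 8

    /* Relative success index kept per neighbor (per object in real APS). */
    static double rating[NEIGHBORS] = {1, 1, 1, 1, 1, 1, 1, 1};

    /* Forward the walker to the currently best-rated neighbor. */
    int pick_next_hop(void) {
        int best = 0;
        for (int i = 1; i < NEIGHBORS; i++)
            if (rating[i] > rating[best])
                best = i;
        return best;
    }

    /* Feedback along the walker's path: reward hops that led to the
     * object, demote hops on a failed walk (constants are invented). */
    void update_rating(int hop, int success) {
        if (success) rating[hop] += 1.0;
        else         rating[hop] *= 0.8;
    }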
Dimitrios Tsoumakos graduated from the NTUA ECE Department in 1999 and joined the Computer Science Department of the University of Maryland in Fall 2000, after spending one year at NTUA with T. Sellis. He received his MS and PhD in Computer Science under Dr. Roussopoulos (2000-2006). Since October 2006 he has been a senior researcher at the Computing Systems Lab of NTUA, in charge of the GREDIA project (FP6).
Mr. William E. Johnston, Lawrence Berkeley National Laboratory
“Networking for the Future of Large-Scale Science: An ESnet Perspective” – Abstract:
The U.S. Dept. of Energy's Office of Science is a major funder of physical science research in the US, and the primary funder in several areas such as high energy physics. As scientific experiments get larger and more expensive, the size and scope of the associated collaboration communities increase. Supporting the analysis of scientific data from such experiments requires the management and analysis of massive amounts of data, making use of compute and storage resources distributed in laboratories and universities all over the world. Increasingly, such systems are being built using Grid/Web Services style Service Oriented Architecture (SOA) frameworks. For this to be successful in the highly distributed, data-intensive science environment, the service components must have predictable, capable, and reliable communication: that is, the network must become another service element that can be integrated into SOA-like systems. This requires that networks provide new transport and security mechanisms, highly capable monitoring, etc., all available as “service elements” that can be used in systems built using service-oriented architectures. This talk will describe the new ESnet network architecture and the progress toward a service-oriented, wide-area network environment designed to support the new science paradigm.
William E. Johnston is a Senior Scientist and Department Head of the US Dept. of Energy's Energy Sciences Network (ESnet) (www.es.net) in the Computational Research Division of the Computing Sciences Directorate of Lawrence Berkeley National Laboratory. His long-time research interests include high-speed, wide-area network based distributed systems; widely distributed computational and data "Grids"; Public-Key Infrastructure based security and authorization systems; and use of the global Internet to enable remote access to scientific, analytical, and medical instrumentation. Recent professional activities include serving as Head of the Distributed Systems Department in the Computational Research Division, and as Principal Investigator for several US Dept. of Energy, Office of Science projects related to these topics. He is also co-founder (with Ian Foster and Charlie Catlett) of the Grid Forum (which merged with the European Grid Forum to form the Global Grid Forum, and has since merged with the equivalent industry group to become the Open Grid Forum). Mr. Johnston has worked in the field of computing for more than 35 years and has taught computer science at the undergraduate and graduate levels. He has a Masters Degree in Mathematics and Physics from San Francisco State University. He may be reached at wej@es.net. For more information see www.dsd.lbl.gov/~wej.
Ms. Eleni Kosta, ICRI - K.U. Leuven
“The European Legal Framework for Data Protection: an introduction to the processing of personal data and the new regime regarding the retention of traffic and location data” – Abstract:
For many computer scientists, the law is often viewed as a hindrance to technological developments. This view may be contrasted with that of the researchers within the Interdisciplinary Centre for Law & ICT (ICRI), who view law as an enabler of legally compliant technological developments. The objective of this talk is to present the general framework that has to be respected at the European level when personal data are processed during the development of an application or a system. The presentation is structured in two parts, and commences with a short presentation of the legal framework on privacy and data protection, mainly in the field of electronic communications. Our attention will mainly focus on the basic principles that have to be respected, the obligations of the developer of the system in order to follow the "privacy and security by design" model, and the rights of the data subject. The second part of the presentation focuses on the recent European legislation on the retention of traffic and location data, and in doing so articulates issues of major importance to providers of electronic communications and networks.
Eleni Kosta joined ICRI as a legal researcher in August 2005, where she conducts research in the field of privacy and identity management, specialising in the area of electronic communications and new technologies. Her current research projects include the European Project Privacy and Identity Management for Europe (PRIME) and the Network of Excellence Future of Identity in the Information Society (FIDIS). Eleni obtained her law degree at the University of Athens in 2002 (magna cum laude), and in 2004 she obtained a Master's degree in Public Law at the same University (summa cum laude). In the academic year 2004-2005 she attended the Postgraduate Study Programme in Legal Informatics (Rechtsinformatik) of the University of Hanover (EULISP) with a scholarship from the Greek State Scholarships Foundation (IKY), obtaining her LLM (magna cum laude). Eleni is preparing a PhD at the Katholieke Universiteit Leuven on "Consent as a legitimate ground for data processing in electronic communications", under the supervision of Prof. Dr. Jos Dumortier.
Prof. Angelos Keromytis, Columbia University
“Data Sanitization: Improving the Forensic Utility of Anomaly Detection Systems” – Abstract:
Anomaly Detection (AD) has become an invaluable tool for forensic analysis and intrusion detection. Unfortunately, the detection accuracy of all learning-based ADs depends heavily on the quality of the training data, which is often poor, severely degrading their reliability as protection and forensic analysis tools. In this talk, I will describe a new approach that extends the training phase of an AD to include a sanitization phase, which aims to improve the quality of unlabeled training data by making them as "attack-free" and "regular" as possible in the absence of absolute ground truth. The proposed scheme is agnostic to the underlying AD, boosting its performance based solely on training-data sanitization. The core of the approach is to generate multiple AD models for content-based AD sensors trained on small slices of the training data. These AD "micro-models" are used to test the training data, producing alerts for each training input. Voting among the micro-models is used to distinguish true anomalies from noise.
I will present some preliminary results showing that sanitization increases 0-day attack detection while maintaining a low false positive rate, increasing confidence in the AD alerts. I will also show an initial characterization of the system's performance when sanitized versus unsanitized AD systems are deployed in combination with expensive host-based attack-detection systems. Finally, I will provide some preliminary evidence that our system incurs only an initial modest cost, which can be amortized over time during online operation. Joint work with Sal Stolfo, Angelos Stavrou, and Gabriela Cretu.
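A schematic of the voting step described above, with the slice count, the toy micro-models, and the threshold all invented for illustration: each micro-model tests every training input, and inputs flagged by enough models are removed before the production AD model is trained.

    #include <stddef.h>

    #define SLICES 10   /* one micro-model per training-data slice */

    /* Toy stand-in for a content-based micro-model: alerts when the
     * input length strays far from what its own slice looked like. */
    static double slice_mean[SLICES] =
        { 512, 500, 530, 498, 505, 520, 515, 508, 511, 502 };

    static int micro_model_alerts(int m, size_t len) {
        double dev = (double)len - slice_mean[m];
        return dev > 128.0 || dev < -128.0;
    }

    /* An input is dropped from the sanitized training set when a
     * majority of micro-models votes it anomalous.                */
    int drop_from_training(size_t input_len) {
        int votes = 0;
        for (int m = 0; m < SLICES; m++)
            votes += micro_model_alerts(m, input_len);
        return votes > SLICES / 2;
    }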
Angelos Keromytis is an Associate Professor with the Department of Computer Science at Columbia University, and director of the Network Security Laboratory. He received his B.Sc. in Computer Science from the University of Crete, Greece, and his M.Sc. and Ph.D. from the Computer and Information Science (CIS) Department, University of Pennsylvania. He is the author and co-author of more than 100 papers in refereed conferences and journals, and has served on over 60 conference and workshop program committees. He is an associate editor of the ACM Transactions on Information and Systems Security (TISSEC). He recently co-authored a book on using graphics cards for security, and is a co-founder of Revive Systems Inc, a technology start-up in the area of software reliability. His research interests revolve around systems and network security, and cryptography. His work is currently funded by the NSF, DHS, US Air Force, US Office of Naval Research, DARPA, New York State, NRO, DTO, Google, and Cisco.
Mr. Angelos Stavrou, Columbia University
“OpenTor: Anonymity as a Commodity Service” – Abstract:
Despite the growth of the Internet and the increasing concern for privacy of online communications, current deployments of anonymization networks depend on a very small set of nodes that volunteer their bandwidth. We believe that the main reason is not disbelief in their ability to protect anonymity, but rather the practical limitations in bandwidth and latency that stem from limited participation. This limited participation, in turn, is due to a lack of incentives. We propose providing economic incentives, which historically have worked very well.
In this talk, we present a payment scheme that can be used to compensate nodes which provide anonymity in Tor, an existing onion-routing anonymizing network. We show that current anonymous payment schemes are not suitable, and introduce a hybrid payment system based on a combination of the Peppercoin micropayment system and a new type of "one-use" electronic cash. Our system claims to maintain users' anonymity, although the payment techniques mentioned previously provably fail when adopted individually. This is joint work with Elli Androulaki, Mariana Raykova, and Steven M. Bellovin.
Angelos Stavrou is currently a Research Assistant at the Network Security Laboratory at Columbia University. His research interests are security using peer-to-peer and overlay networks, network reliability, and statistical inference. He received his B.S. in Physics with honors from the University of Patras, Greece, and an M.Sc. in the theory of Algorithms, Logic and Computation from the University of Athens, Greece. He also holds an M.Sc. in Electrical Engineering from Columbia University and is currently working toward the Ph.D. degree at the same university.
Dr. Niels Provos, Google Inc.
“The Ghost in the Browser: An Analysis of Web-based Malware” – Abstract:
As more users are connected to the Internet and conduct their daily activities electronically, computer users have become the target of an underground economy that infects hosts with malware or adware for financial gain. Unfortunately, even a single visit to an infected web site enables the attacker to detect vulnerabilities in the user's applications and force the download of a multitude of malware binaries. Frequently, this malware allows the adversary to gain full control of the compromised systems, leading to the ex-filtration of sensitive information or the installation of utilities that facilitate remote control of the host. We believe that such behavior is similar to our traditional understanding of botnets. However, the main difference is that web-based malware infections are pull-based and the resulting command feedback loop is looser. To characterize the nature of this rising threat, this talk identifies the four prevalent mechanisms used to inject malicious content on popular web sites: web server security, user-contributed content, advertising, and third-party widgets. For each of these areas, the talk presents examples of abuse found on the Internet. Our aim is to present the state of malware on the Web and emphasize the importance of this rising threat.
Niels Provos received a Ph.D. from the University of Michigan in 2003, where he studied experimental and theoretical aspects of computer and network security. He is one of the OpenSSH creators and is known for his security work on OpenBSD. He developed Honeyd, a popular open-source honeypot platform; SpyBye, a client honeypot that helps web masters detect malware on their web pages; and many other tools such as Systrace and Stegdetect. He is a member of the Honeynet project and an active contributor to open source projects. Niels Provos is currently employed as a Senior Staff Engineer at Google Inc.
“Tackling an old crime that discovered new tools: Computer Forensics as Deterrent of On-line ID Fraud” – Abstract:
As the use of the Internet for commercial transactions grows, so do the incidents of identity (ID) theft and related fraud. Computer security incidents involving the collection of personal information and other credentials of individuals for criminal purposes are reported frequently and they have a significant impact on people's trust in on-line transactions and electronic commerce. Confronting this type of information technology (IT) fuelled crime is therefore significant, especially in environments where IT artefacts and developments are adopted at a slower pace by the general public. In this respect, Computer Forensics is an emerging discipline that provides a crime deterrent and plays a critical role in the investigation of similar cases. In this lecture we first present an overview of the field of computer forensics and subsequently discuss the issue of on-line ID theft and the associated actions for its forensic examination.
Theo is a Lecturer at the Department of Electronics & Computer Systems Engineering of the University of Glamorgan, in Wales, United Kingdom. He specialises in Computer Forensics and Systems Audit. He is the MSc Scheme Leader of Glamorgan's postgraduate programmes in computer security. He also serves as an Expert Witness in Court for cases of computer-related crime involving identity theft, fraudulent on-line auctions, possession and distribution of child pornography, extortion etc. He has a PhD in the implementation of secure information systems from Athens University of Economics and Business (AUEB), Greece. He also holds an MSc in Information Systems (AUEB), a BSc in Computer Science (University of Crete, Greece) and he is a Certified Information Systems Auditor (CISA).
Prof. Yiannis Smaragdakis, University of Oregon
“Language Tools for Distributed Computing and Program Generation” – Abstract:
This talk first examines distributed computing from a languages/software engineering standpoint, and subsequently uses the distributed programming domain to motivate program generation: a general and powerful approach to automating programming tasks. We begin with a simple-to-state but important problem: how to define middleware that allows the programmer to think of a distributed application largely as a centralized one, while maintaining efficiency and expressiveness. Our NRMI middleware facility is the first to support call-by-copy-restore semantics for arbitrary pointer-based data, letting the programmer think of a remote procedure call much like a local one in many useful cases. We show that NRMI is significantly easier to use than traditional RPC middleware, yet maintains efficiency and full programmer control. If we take the task of simplifying distributed programming to the extreme, we encounter systems that allow unsuspecting programs to execute in a distributed environment. Typically such systems use program generation extensively. We briefly present our J-Orchestra system for automatically enabling Java programs to execute in a distributed setting. We then discuss the impact that the J-Orchestra program transformation techniques have had on a large open-source project (JBoss).
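To see what call-by-copy-restore buys over plain call-by-copy, consider this deliberately simplified C sketch (NRMI itself is Java middleware and handles arbitrary linked, aliased data; the names here are invented): the middleware ships a copy of the argument, runs the procedure remotely, and writes the results back into the caller's original object, so the remote call behaves like a local one.

    struct account { int balance; };

    /* Imagine this body executing on a remote server. */
    static void credit(struct account *a) { a->balance += 10; }

    /* Copy-restore stub: under plain call-by-copy, step 3 would be
     * missing and the caller would never observe the update.       */
    void rpc_credit(struct account *caller_obj) {
        struct account copy = *caller_obj;  /* 1. marshal a copy           */
        credit(&copy);                      /* 2. execute "remotely"       */
        *caller_obj = copy;                 /* 3. restore results in place */
    }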
Yiannis Smaragdakis got his Ph.D. from the University of Texas at Austin and is currently an Associate Professor at the University of Oregon. His interests are on the systems and languages side of software engineering. More information on his work can be found at: http://www.cs.uoregon.edu/~yannis
Prof. Augusto Ciuffoletti, University of Pisa
“The token passing protocol: security and reliability” – Abstract:
The wandering token idea relies on a reliable token passing protocol. Such a protocol must ensure that token loss is infrequent and that duplication never occurs. In addition, it must allow mutual authentication among the partners. The seminar presents a technical solution to these problems, using a 4-way UDP protocol. A Perl implementation of our solution is currently under test, and the presentation is somewhat technical.
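The abstract does not give the message details, so the outline below is purely illustrative of how a 4-way handoff can make loss rare and duplication impossible: the sender keeps its copy of the token until the final acknowledgement, and exactly one side considers itself the owner at every step.

    /* Hypothetical message sequence for one token handoff over UDP. */
    enum token_msg {
        TOKEN_OFFER,   /* 1. holder -> next: authenticated offer          */
        TOKEN_ACCEPT,  /* 2. next -> holder: proves identity, will accept */
        TOKEN_PASS,    /* 3. holder -> next: token transferred            */
        TOKEN_ACK      /* 4. next -> holder: holder may delete its copy   */
    };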
Augusto Ciuffoletti is a researcher at the Department of Computer Science at the University of Pisa, where he teaches two courses about the Internet. He has established a strong collaboration with the Italian Institute of Nuclear Physics (INFN) in the frame of their Computing Grid project, and participates in the CoreGRID project as an INFN member. His stay at FORTH is part of a CoreGRID activity.
“Telecom Fraud and Electronic Crime in Fixed and Mobile Telecommunications” – Abstract:
Telecommunications fraud and electronic crime are a growing phenomenon all over the world. According to FIINA (Forum for International Irregular Network Access), an organization with members from more than 100 countries, telecom operators are losing more than 50 billion Euros every year, worldwide, due to telecom fraud and electronic crime. How fraud and crime take place: telecom fraud and electronic crime occur in all kinds of services operators offer. The motivation is to gain access to services without paying the relevant cost, or to make money (call selling, subscription fraud) by providing services to other people, using infrastructure under a false identity (identity theft). Fixed and mobile telecom operators face new methods of fraud and electronic crime in their day-to-day business. On the island of Crete there are some very clever foreign mafia groups (from the Middle East) that have been successful in past years in defrauding fixed and mobile operators. In the presentation, some interesting business cases will be presented and the mafia groups' mode of operation will be explained by Mr. Michalis Mavis (invited speaker of ITE).
Mr. Mavis (michalis.mavis@gmail.com) was the founder and first Chairman of EFTA (Hellenic Fraud Forum), which was established in the year 2000 by the major telecom operators in Greece (COSMOTE, OTE, Vodafone and TIM) with the aim of fighting telecommunications fraud and electronic crime amongst its members. He worked for OTE from 1977 until October 2006, when he voluntarily retired to work as an independent consultant. For the past 7 years there he served as Head of the OTE (Hellenic Telecom Organization) Security Audit and Fraud Control Division. Mr. Mavis worked as a telecom security engineer at NATO, Brussels, between 1989 and 1993, and as a Project Supervisor at EURESCOM (European Institute for Research & Strategic Studies in Telecommunications) in Heidelberg, Germany, between 1995 and 1999. He has been chairman of the ETNO Working Group on Information Security (ETNO = European Telecom Network Operators Association), national representative in SOGIS (Senior Officials Group on Information Security, a European Commission advisory group), and national member of the ETSI TA & GA (European Telecom Standards Institute Technical and General Assembly).
“How to Buy a Network: Trading of Resources in the Physical Layer” – Abstract:
Recently, a number of new research initiatives, most notably UCLPv2 and GENI, have promoted the dynamic partition of physical network resources as the means to (a) operate the network and (b) implement new protocols and services. This leads to a number of open issues such as resource discovery, implementation of resource partitioning, and the aggregation of resources to create arbitrary network topologies. To us, the key issue is the design of a mechanism to trade, acquire, and control the network resources, given a choice of resource providers. In this paper, we present an architecture that allows physical resources to be traded, while granting users controlled access to the acquired resources via a policy enforcement mechanism. In addition, it allows provider domains to be linked via configurable resource exchange points that are the physical layer equivalents of the pooling point, or Internet Exchange Point (IXP). We demonstrate how our trading system will operate by presenting a use case where a network topology is constructed using resources from multiple providers. The use case also shows how a dynamic reconfiguration can be effected by the customer through the use of simple access control policies, without involving the provider.
For Professor Prevelakis' CV, please visit this link
“Implications of Selfish Neighbor Selection in Overlay Networks and Other Problems in the Areas of Content Distribution and Overlay Networks” – Abstract:
The first part of my talk will be on the implications of selfish neighbor selection in overlay networks. Following is a brief description of this work: "In an overlay network for routing or content sharing, each node must select a fixed number of immediate overlay neighbors for routing traffic or requests for content. A selfish node entering such a network would select neighbors so as to minimize the weighted sum of expected access costs to all destinations. Previous work on selfish neighbor selection has built intuition with simple models where edges are undirected, access costs are modelled by hop-counts, and nodes have potentially unbounded degree. In practice, however, important side constraints not captured by these models lead to richer games with substantively and fundamentally different outcomes. Our work models neighbor selection as a game involving directed links, constraints on the number of allowed neighbors, and costs reflecting both network latency and node preference. We express a node's (best response) wiring strategy as a directed version of the k-median problem and use this observation to obtain pure Nash equilibria. We study the performance of such wirings analytically, and also experimentally on synthetic topologies as well as on measurements and maps collected from PlanetLab and the AS-level Internet, respectively. Our results indicate that selfish nodes can reap substantial performance benefits when connecting to overlay networks composed of non-selfish nodes. On the other hand, in overlays that are dominated by selfish nodes, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naive wiring strategies."
The second part of the talk will provide an overview of my research in the areas of content distribution and overlay networks. I will briefly touch upon the following:
-- Distributed Selfish Replication and Caching
-- Distributed Placement of Service Facilities in Large-Scale Networks
-- Storage Capacity Allocation for Content Networks
-- Analytic Modeling of Replacement Algorithms and Application to the Interconnection of Cache Memories
-- The Cache Inference Problem and Its Application to Content and Request Routing
-- Optimization of Playout Schedulers for Packet Video Receivers
I will conclude my talk with a few pointers to on-going and future research on the aforementioned and other related areas.
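Returning to the first part of the talk: to make "weighted sum of expected access costs" concrete, here is a toy evaluation routine (all sizes and names invented). Finding the k-subset of neighbors minimizing this objective is exactly the directed k-median formulation of the best response mentioned above.

    #define N 16   /* nodes already in the overlay         */
    #define K 4    /* neighbor slots the newcomer may fill */

    /* cost[j][t]: cost of reaching destination t when the first hop is
     * neighbor j; weight[t]: how often the newcomer needs t.          */
    double cost[N][N];
    double weight[N];

    /* Weighted access cost of a candidate wiring of k >= 1 neighbors:
     * each destination is reached via the cheapest chosen first hop.  */
    double wiring_cost(const int chosen[], int k) {
        double total = 0.0;
        for (int t = 0; t < N; t++) {
            double best = cost[chosen[0]][t];
            for (int c = 1; c < k; c++)
                if (cost[chosen[c]][t] < best)
                    best = cost[chosen[c]][t];
            total += weight[t] * best;
        }
        return total;
    }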
Nikos Laoutaris is a Marie Curie Outgoing International Post-doctoral Fellow at Boston University and the University of Athens. He received the Ph.D. degree in Computer Science from the Department of Informatics and Telecommunications of the University of Athens, Greece, in 2004, for his work in the area of Content Networking. He also holds an M.Sc. degree in Telecommunications and Computer Networks (2001) and a B.Sc. degree in Computer Science (1998), both from the same department. His main research interests are in the analysis of algorithms and the performance evaluation of Internet content distribution systems (CDN, P2P, web caching), overlay networks, and multimedia streaming applications. A detailed CV and publications are available at http://cs-people.bu.edu/nlaout/ and he can be contacted at nlaout@cs.bu.edu.
“An Overview of the State of the Art in Anonymous Communications” – Abstract:
Anonymous communications is a 25-year-old field that has received a great deal of attention lately, due to the ubiquitous deployment of information networks and the privacy concerns associated with their use. In this talk I shall explain the foundations of anonymous communication systems, their use cases, and the threat models against which they attempt to protect users. Real-world deployed systems, such as Tor, Mixminion, JAP and Anonymizer, will be used throughout to illustrate the presented theory, as well as to study the extent to which these concrete implementations fulfill the security promises of the theoretical constructions. Finally, I will present an outline of traffic analysis techniques that attempt to trace communication patterns through anonymous communication systems, and outline their strengths and limitations.
George Danezis, B.A., M.A. (Cantab), Ph.D. George Danezis is a post-doctoral visiting fellow at the COSIC group, K.U.Leuven, in Flanders, Belgium. He has been researching anonymous communications, privacy enhancing technologies, and traffic analysis for the last 6 years, at K.U.Leuven and the University of Cambridge, where he completed his doctoral dissertation. His theoretical contributions to the PET field include the established information theoretic metric for anonymity and the study of statistical attacks against mix systems. On the practical side he is one of the lead designers of Mixminion, the next generation remailer, and has worked on the traffic analysis of deployed protocols such as SSL and Tor. He was the co-chair of the Privacy Enhancing Technologies Workshop in 2005 and 2006, he serves on the PET workshop board and has participated in multiple conference and workshop program committees in the privacy and security field.
“Pushback for Overlay Networks: Detecting and Protecting against Malicious Insiders” – Abstract:
We present a novel mechanism for detecting and protecting structured overlay networks against non-conforming (abnormal) behavior of other participating nodes. We use a lightweight distributed detection mechanism that exploits inherent structural invariants of DHTs to ferret out anomalous flow behavior. To prevent identity spoofing leading to Sybil attacks, neighbor identities are established with pair-wise keys, which do not require an authentication infrastructure. Upon detection, a Pushback-like protocol is invoked to notify the predecessor whence the offending traffic is arriving. Recursive applications of the protocol can identify and isolate the offending node. We evaluate our mechanism's ability to detect attackers via simulation within a DHT network. The results show that our system can detect a simple attacker whose attack traffic deviates by as little as 5% from average traffic. We also demonstrate the resiliency of our mechanism against coordinated distributed flooding attacks that involve up to 15% of overlay nodes. We measure the effectiveness with which our approach identifies the offending node(s) and squelches the attacks. The detection and containment mechanisms presented show that overlays can protect themselves from insider DoS attacks, eliminating an important roadblock to their deployment.
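As a toy illustration of the 5% detection threshold mentioned above (the actual mechanism checks DHT structural invariants rather than raw rates, so this Python sketch is an assumption-laden simplification), a monitor can flag neighbors whose flow rate deviates from the average and push back towards their predecessor:

def detect_offenders(flows, threshold=0.05):
    # flows: neighbor id -> observed flow rate (hypothetical measurements)
    avg = sum(flows.values()) / len(flows)
    return [n for n, rate in flows.items() if abs(rate - avg) / avg > threshold]

flows = {"n1": 100.0, "n2": 101.0, "n3": 99.5, "n4": 112.0}
for offender in detect_offenders(flows):
    print("pushback notification towards predecessor of", offender)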
“Application Communities: A Collaborative Approach To Software Security” – Abstract:
Software monocultures are usually considered dangerous because their size and uniformity represent the potential for costly and widespread damage. The emerging concept of collaborative security provides the opportunity to re-examine the utility of software monoculture by exploiting the homogeneity and scale that typically define large software monocultures. Monoculture can be leveraged to improve an application's overall security and reliability. We introduce and explore the concept of Application Communities: collections of large numbers of independent instances of the same application. Members of an application community share the burden of monitoring for flaws and attacks, and notify the rest of the community when they are detected. Appropriate mitigation mechanisms are then deployed against the newly discovered fault. In this talk, I will describe the concept of Application Communities, some of their basic operational parameters, and our preliminary work in demonstrating their feasibility.
Angelos Keromytis is an Associate Professor with the Department of Computer Science at Columbia University, and director of the Network Security Laboratory. He received his B.Sc. in Computer Science from the University of Crete, Greece, and his M.Sc. and Ph.D. from the Computer and Information Science (CIS) Department, University of Pennsylvania. He is the author and co-author of more than 100 papers in refereed conference proceedings and journals. He recently co-authored a book on using graphics cards for security, and is a founder of Revive Systems Inc. His current research interests revolve around systems and network security, and cryptography. His recent work has been on self-healing software. Previous research interests include active networks, trust management systems, and systems issues involving hardware cryptographic acceleration. For a full CV, see http://www.cs.columbia.edu/~angelos/cv.html
“Error Virtualization: Building a Reactive Immune System for Software Services” – Abstract:
We propose a reactive approach for handling a wide variety of software failures, ranging from remotely exploitable vulnerabilities to more mundane bugs that cause abnormal program termination (e.g., illegal memory dereference) or other recognizable bad behavior (e.g., computational denial of service). Our emphasis is on creating "self-healing" software that can protect itself against a recurring fault until a more comprehensive fix is applied. Briefly, our system monitors an application during its execution using a variety of external software probes, trying to localize (in terms of code regions) observed faults. In future runs of the application, the "faulty" region of code will be executed by an instruction-level emulator. The emulator will check for recurrences of previously seen faults before each instruction is executed. When a fault is detected, we recover program execution to a safe control flow. Using the emulator for small pieces of code, as directed by the observed failure, allows us to minimize the performance impact on the immunized application. We discuss the overall system architecture and a prototype implementation for the x86 platform. We show the effectiveness of our approach against a range of attacks and other software failures in real applications such as Apache, sshd, and Bind. In this talk, I will also present our on-going work on a new technique for retrofitting legacy applications with exception handling techniques, which we call Autonomic Software Self-Healing Using Error Virtualization Rescue Points (ASSURE). ASSURE is a general software fault-recovery mechanism that uses operating system virtualization techniques to provide "rescue points" that an application can use to recover execution in the presence of faults. When a fault occurs at an arbitrary location in the program, we restore program execution to a "rescue point" and imitate its observed behavior to propagate errors and recover execution.
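A minimal sketch of the error-virtualization idea in Python (illustrative only; the real system works at the instruction level on x86 binaries, not via language exceptions): once a code region is known to fault, its faults are caught and mapped to an error value the application already knows how to handle.

import functools

def rescue_point(fallback):
    # Map any fault in the wrapped region to a return value the caller
    # already checks for (the "error virtualization" step, simplified).
    def wrap(fn):
        @functools.wraps(fn)
        def guarded(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception:
                return fallback   # recover execution to a safe control flow
        return guarded
    return wrap

@rescue_point(fallback=-1)        # -1: an error code the application checks
def parse_request(buf):           # hypothetical "faulty" region
    return int(buf)               # raises on malformed input

print(parse_request("42"), parse_request("bad input"))   # prints: 42 -1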
Stelios Sidiroglou is a PhD student in Computer Science at Columbia University and a member of the Network Security Lab. He received his Master's and Bachelor's degrees in Electrical Engineering from Columbia University and WPI, respectively. His research interests include software survivability, large-scale system security, and privacy and anonymity.
Dr. Chryssis Georgiou, University of Cyprus
“Distributed Cooperation in Dynamically Changing Groups” – Abstract:
The problem of cooperatively performing a set of t tasks in a decentralized computing environment subject to failures is one of the fundamental problems in distributed computing. The setting with partitionable networks is especially challenging, as algorithmic solutions must accommodate the possibility that groups of communicating processors become disconnected (and, perhaps, reconnected) during the computation. The efficiency of task-performing algorithms is often assessed in terms of work: the total number of tasks, counting multiplicities, performed by all of the processors during the computation. In general, the scenario where the processors are partitioned into g disconnected components causes any task-performing algorithm to have work Ω(tg) even if each group of processors performs no more than the optimal number of Θ(t) tasks. Given that such pessimistic lower bounds apply to any scheduling algorithm, we pursue a competitive analysis. Specifically, in the talk I will present a simple randomized scheduling algorithm for p asynchronous processors, and compare its performance to that of an omniscient off-line algorithm with full knowledge of the future changes in the communication medium. I will describe the notion of computation width, which associates a natural number with a history of changes in the communication medium, and I will show both upper and lower bounds on work-competitiveness in terms of this quantity. Specifically, we will see that the simple randomized algorithm obtains the competitive ratio (1 + cw/e), where cw is the computation width and e is the base of the natural logarithm (e = 2.7182...); this competitive ratio is tight. Finally, I will explain how the algorithm could be implemented using a Group Communication Service implementation while maintaining its work-competitiveness. Joint work with Alex A. Shvartsman and Alexander Russell.
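A toy simulation of the lower-bound intuition above, assuming (for illustration only, this is not the paper's algorithm) that each disconnected group shares knowledge perfectly inside the group while groups cannot communicate, and that processors pick random unfinished tasks; with g groups the total work is t*g, matching the Ω(tg) bound:

import random

def simulate(num_tasks, group_sizes, seed=0):
    # Each disconnected group must complete all tasks on its own; work counts
    # every task execution, multiplicities across groups included.
    random.seed(seed)
    work = 0
    for size in group_sizes:
        done = set()
        while len(done) < num_tasks:
            for _ in range(size):   # one step per processor in the group
                remaining = [t for t in range(num_tasks) if t not in done]
                if remaining:
                    done.add(random.choice(remaining))
                    work += 1
    return work

print(simulate(num_tasks=50, group_sizes=[4, 4]))   # 100 = t * g for g=2 groups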
Undergraduate studies at the University of Cyprus, Cyprus (B.Sc. in Mathematics, 1998) and graduate studies at the University of Connecticut, U.S.A. (M.Sc. and Ph.D. in Computer Science and Engineering, 2002 and 2003 respectively). He has worked as a Teaching and Research Assistant at the University of Connecticut (1998-2003) and at the University of Cyprus (Visiting Lecturer 2004, Lecturer 2005-present). His research interests span the Theory and Practice of Distributed and Parallel Computing: in particular, Fault-tolerance and Dependability, Algorithms and Complexity, Communication Networks and Paradigms, Group Communication Systems, Distributed Atomic Objects, Cooperative and Non-cooperative Computations, and Dynamic Computing Environments. He has published in prestigious journals and conference proceedings in his areas of study.
Dr. Demetrios Zeinalipour-Yazti, University of Cyprus
“Top-K Query Processing Techniques for Distributed Environments” – Abstract:
Emerging applications in Sensor and Peer-to-Peer networks make the concept of data integration without centralization nowadays more meaningful than ever. In these environments, data is generated continuously and potentially automatically across geographically diverse locations. Organizing data in centralized repositories is becoming prohibitively expensive and on many occasions impractical. Storing data in-situ, however, complicates query processing because data relations are fragmented over a number of remote sites. Furthermore, accessing these fragmented relations is only feasible by traversing a network of other nodes. This makes the execution of a query an even more complex task. We claim that on many occasions it might be more beneficial to find the K highest ranked (or Top-K) answers, for some user defined parameter K, if this can minimize the query execution cost. In this talk, I will present techniques to efficiently answer Top-K queries in a distributed environment. A Top-K query returns the K highest ranked answers to a user defined similarity function. At the same time it also minimizes some cost metric, such as the utilization of the communication medium, which is associated with the retrieval of the desired answer set. I will provide an overview of state-of-the-art algorithms that solve the Top-K problem in a centralized setting and show why these are not applicable to the distributed case. I will then focus on the Threshold Join Algorithm (TJA), which is a novel solution for executing Top-K queries in a distributed environment. I will also present results from our performance study with a real middleware testbed deployed over a network of 75 workstations.
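For background, here is a compact Python sketch of the classic centralized Threshold Algorithm (TA) that the talk contrasts with distributed solutions such as TJA; the data and the summation scoring function are illustrative assumptions:

import heapq

def threshold_algorithm(lists, k):
    # lists: one (object, score) list per source, sorted by descending score;
    # an object's aggregate score is the sum of its per-source scores.
    lookup = [dict(lst) for lst in lists]      # random access per source
    seen, best = {}, []
    for depth in range(max(len(l) for l in lists)):
        threshold = 0.0
        for src, lst in enumerate(lists):
            if depth >= len(lst):
                continue
            obj, score = lst[depth]
            threshold += score                 # sum of scores at current depth
            if obj not in seen:                # fetch the object's other scores
                seen[obj] = sum(d.get(obj, 0.0) for d in lookup)
        best = heapq.nlargest(k, seen.items(), key=lambda kv: kv[1])
        if len(best) == k and best[-1][1] >= threshold:
            break                              # no unseen object can enter the top-K
    return best

lists = [[("a", 0.9), ("b", 0.8), ("c", 0.1)],
         [("b", 0.7), ("a", 0.6), ("c", 0.2)]]
print(threshold_algorithm(lists, k=2))         # -> [('a', 1.5), ('b', 1.5)]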
Demetrios Zeinalipour-Yazti is currently a Visiting Lecturer in Computer Science at the University of Cyprus. He holds a Ph.D. (2005) and M.Sc. (2003) in Computer Science and Engineering from the University of California - Riverside and a B.Sc. (2000) in Computer Science from the University of Cyprus. He has been a visiting researcher at the network intelligence lab of Akamai Technologies in 2004. His research interests include Database Management Systems, Network Data Management, Distributed Query Processing, Storage and Retrieval Methods for Peer-to-Peer and Sensor Networks.
Prof. Andreas Stathopoulos, College of William and Mary
Abstract:
We present the essential runtime support mechanisms, techniques, programming interface, and performance of a user-level, runtime library, MMlib, that embeds memory malleability in scientific applications. MMlib automatically controls the DRAM allocations to specified objects in an application, to enable it to execute optimally under variable memory availability. In our earlier work we introduced a basic framework for MMlib, in which DRAM is treated as a dynamic size cache for large memory objects residing on local disk. Optimal use of the DRAM cache is achieved through an algorithm that accurately ascertains memory shortage and availability. In this talk we review this work and we describe three major enhancements to MMlib. First, we present a general framework that enables fully customizable, memory malleability in a wide variety of scientific applications. Second, we present solutions to some problems posed by the extended functionality of our framework, and some enhancements to its environment sensing capabilities. Third, we introduce a remote memory capability, based on MPI communication of cached memory blocks between "compute nodes" and designated memory servers. The increasing speed of interconnection networks makes a remote memory approach attractive, especially at the large granularity present in large scientific applications. We present experimental results from three important scientific applications that we link with MMlib. The results show that the memory-adaptive implementations of the applications perform nearly optimally under constant memory pressure and execute harmoniously with other applications competing for memory, without thrashing the memory system. We observe execution time improvements of factors between three and five over relying solely on the virtual memory system. These factors are further improved when remote memory is employed.
Andreas Stathopoulos received a BS in Mathematics in 1989 from the University of Athens, Greece, and his MS and Ph.D. degrees in Computer Science from Vanderbilt University in 1991 and 1995 respectively. From 1995 to 1997 he was an NSF CISE postdoctoral associate at the University of Minnesota. Currently, he is an associate professor in the department of Computer Science and affiliated with the Computational Sciences Cluster at the College of William and Mary. His research interests include numerical linear algebra, high performance computing, and scientific applications. He is a member of SIAM, IEEE and the IEEE Computer Society.
Prof. Evgenia Smirni, College of William and Mary
“Performance Impacts of Autocorrelation on Multi-tiered Systems” – Abstract:
Correlated arrival processes in Internet servers are routinely observed via measurements. In this talk, we focus on multi-tiered systems with correlation in their arrival and/or service processes and on the impact of autocorrelation on system performance. We consider (a) systems with finite buffers (e.g., systems with admission control that effectively operate as closed systems) and (b) systems with infinite buffers (i.e., systems that operate as open systems). We present experimental measurements and analytic models that show how autocorrelation in the arrival/service process propagates into the system and affects end-to-end performance. For the case of finite buffer systems, we use measurements from a 3-tier e-commerce server under the TPC-W workload and show the presence and propagation of autocorrelated flows in all tiers of the system, despite the fact that the stochastic processes used to generate this session-based workload are independent. We attribute this effect to the existence of autocorrelation in the service process of one of the tiers. In contrast to systems with independent flows, autocorrelation in the service process may result in very high user response times despite the fact that bottleneck resources are not highly utilized, and measured throughput and device utilization levels are modest. This falsely indicates that the system can sustain higher capacities. We present a small queuing network that helps us understand the above counter-intuitive behavior. For the system with infinite buffer size, which performs as an open system, we present an analytic model that approximates the departure process of a BMAP/MAP/1 queue that admits batch correlated flows, and whose service time process may also be autocorrelated. A BMAP/MAP/1 queue can be considered as a basic building block of an analytic model of a multi-tiered system. This analytic model can be used to model each tier in isolation and to understand how autocorrelation can affect performance in multi-tiered systems with infinite buffers. We present results on the effectiveness of this approximation and conclude by comparing the performance effects of autocorrelation in multi-tiered systems with finite and infinite buffers.
Evgenia Smirni is the William and Martha Claiborne Stephens Associate Professor at the College of William and Mary, Department of Computer Science, Williamsburg, Virginia 23187-8795 (esmirni@cs.wm.edu). She received her Diploma in Computer Engineering and Informatics from the University of Patras, Greece, in 1987, and her M.S. and Ph.D. in Computer Science from Vanderbilt University in 1993 and 1995, respectively. From August 1995 to June 1997 she had a postdoctoral research associate position at the University of Illinois at Urbana-Champaign. Her research interests include analytic modeling, stochastic models, Markov chains, matrix analytic methods, resource allocation policies, Internet systems, workload characterization, and modeling of distributed systems and applications. She has served as program co-chair of QEST'05 and of ACM SIGMETRICS/Performance'06. She is a member of ACM and IEEE.
“One-Way Monitoring without Clock Synchronization” – Abstract:
Network paths usually exhibit asymmetric performance, and monitoring tools that are unable to discriminate their measurements in each direction fail to provide an important piece of information. According to common sense, one-way measurements require hardware clock synchronization, which is expensive and troublesome. We show how relevant one-way measurements can be effectively performed without hardware clock synchronization, with a surprising application of a computer graphics algorithm: the Graham Scan.
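A hedged sketch of how such a computational-geometry trick can work (this illustrates the general convex-hull idea for skew removal, not necessarily the authors' exact algorithm): measured one-way delays drift linearly with relative clock skew, so the slope of the lower convex hull of (send time, measured delay) points, built Graham-scan style, estimates the skew to subtract out.

def lower_hull(points):
    # Graham-scan style construction of the lower convex hull
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()       # drop points above the lower envelope
            else:
                break
        hull.append(p)
    return hull

def skew_estimate(samples):
    # least-squares slope through the hull vertices = relative skew estimate
    hull = lower_hull(samples)
    n = len(hull)
    mx = sum(x for x, _ in hull) / n
    my = sum(y for _, y in hull) / n
    num = sum((x - mx) * (y - my) for x, y in hull)
    den = sum((x - mx) ** 2 for x, _ in hull)
    return num / den

# synthetic one-way delays: base 10, injected skew 0.01, non-negative noise
noise = [0, 1, 4, 0, 2, 5, 1, 0, 3, 2, 4, 1, 0, 2, 3, 1, 2, 0, 1, 0]
samples = [(t, 10 + 0.01 * t + n) for t, n in zip(range(0, 100, 5), noise)]
print(round(skew_estimate(samples), 4))   # recovers the injected 0.01 skew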
“A modular approach to network monitoring in a Grid environment” – Abstract:
The communication infrastructure is a cornerstone of a Grid, and its monitoring is vital in order to use storage and computing facilities efficiently. We introduce GlueDomains, a network monitoring architecture designed for Grid environments. It is based on a modular view of the network, supports centralized management of the monitoring activity, and offers a plugin interface for the publication of measurements.
Konstantin Popov is a senior researcher in the Distributed Systems Laboratory (DSL) at the Swedish Institute of Computer Science (SICS). He has been involved in the development, implementation and maintenance of the programming language Oz and the programming system Mozart since their very early days more than 10 years ago. Recently he collaborated closely in the iCities project, where he contributed the large-scale, parallel version of the simulation system. Currently he is actively involved in Grid-related research, and manages SICS's participation in the European NoE CoreGrid. His scientific interests also include the implementation of programming languages, in particular high-performance parallel and distributed programming systems, as well as the foundations of programming languages such as formal semantics and type systems.
Dr. Vladimir Vlassov received his Ph.D. in Computer Engineering from the Electrotechnical University of St. Petersburg, Russia (1984), where he was an Assistant Professor and an Associate Professor during 1985-1993. In 1993, he joined the Department of Microelectronics and Information Technology, Royal Institute of Technology (KTH), Stockholm, Sweden, where he is currently an Associate Professor in Computer Systems. In 1998, Dr. Vlassov worked as a visiting scientist in the Laboratory for Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, USA. In 2004, he worked as a researcher at the Electrical and Computer Engineering department of the University of Massachusetts (UMASS) Amherst, USA. His current research interests include parallel and distributed computing, peer-to-peer and Grid computing, web services and agents.
“Towards Semantics-Based Resource Discovery for the Grid” – Abstract:
We present our practical experience in implementing agent-based systems for provision and selection of Grid services. The agents form a marketplace where services are offered and searched. Agents communicate semantic information about services using the Web service ontology (OWL-S). We describe our implementations that are built using the Globus Toolkit, utilize the JADE agent framework and the off-the-shelf OWL-S toolkit. This combination of technologies can also be used for (semi-)automatic composition of services. Our evaluation captures the relative costs of different stages during service provision and selection, allowing us to detect potential bottlenecks. The preliminary evaluation already suggests that the representation of semantic information, and in particular existing solutions for reasoning on semantic information, need major improvements.
Mr. Mika Pennanen, VTT
Short Bio:
Mika Pennanen received his M.Sc. degree in computer science (specialisation area: Intelligent Systems) from the University of Helsinki; the title of his thesis was "Agents In Virtual Home Environment". He has been a researcher at VTT since 2000, and his research interests focus on mobile agents and peer-to-peer technology. Since 2002, he has been a project manager in several projects focusing on grid and p2p technologies, and he has cooperated with other Finnish universities in these research areas. He has experience in national and international projects relating to the Mobile Internet, distributed systems (including grid and p2p) and agent technology, and has actively participated in several international projects and project proposals.
“Service and resource discovery using p2p” – Abstract:
In the near future, the way we use the Internet will change. The Internet and its services will be available to millions of small mobile terminal devices. Additionally, different network technologies make things a bit more complicated. The convergence of networks, terminal devices, and their user interfaces raises new demands on services. Many of these challenges can be solved by deploying an efficient service and resource discovery mechanism.
Prof. Paolo Trunfio, University of Calabria
“P2P models and services for resource discovery on Grids” – Abstract:
Resource discovery is a key issue in Grid environments, since applications are usually constructed by composing hardware and software resources that need to be found and selected. Classical approaches to Grid resource discovery, based on centralized or hierarchical models, do not guarantee scalability in large-scale, dynamic Grid environments. On the other hand, the Peer-to-Peer (P2P) paradigm is emerging as a convenient model to achieve scalability in distributed systems and applications. This presentation will describe a protocol and an architecture that adopt a purely decentralized P2P approach to support resource discovery in OGSA-compliant Grids. In particular, the P2P protocol uses appropriate message buffering and merging techniques to make Grid Services effective as a way to exchange discovery messages in a P2P fashion. Current approaches to resource discovery in Grid environments and possible research directions in this field will also be discussed.
Paolo Trunfio received his PhD in systems and computer engineering at the University of Calabria, Italy, in 2005. He is currently an assistant professor of computer engineering at the same university. His research interests include distributed systems, grid computing, web services, and peer-to-peer systems.
“ "SIZEUXIS" Project - Analysis and Implementation” – Abstract:
SIZEUXIS is one of the most important projects of the Information Society Programme in the context of the 3rd Community Support Framework (CSF). The project involves large-scale telecommunication and telematics services and covers the whole set of Public Organizations (Ministries, General Secretariats, Municipalities, Organizations of Local Authority, Prefectures, Citizen Service Centers, Hospitals, Medical Centers, Military Offices, CSF Managing Authorities), covering the complex needs for combining advanced services for speech, data and images. The main goal of this project is to enhance the communication among public services, as well as their communication with the citizens, with the creation of the SIZEUXIS network as the main vehicle. This network will constitute the foundation on which the development of advanced telecommunications and telematics services will be based, as an aggregated service for citizens, with automated and user-friendly information systems for transactions with public bodies. A project with these specifications requires the partners involved to implement and combine a set of innovative technological solutions in various areas (infrastructure, network architecture, network management), which will be the focus of this presentation.
Kostas Strakadounas was born in 1969 in Athens. He is a graduate of the Computer Science Department, University of Crete and holds an MSc in Computer Networks and Digital Communications from the same University. He also holds an M.B.A. from A.L.B.A. He worked at the Foundation for Research and Technology-Hellas from 1991 to 1994, in the area of network design and operation. Since 1997 he has been working at FORTHnet, where he is responsible for the design and development of the company's network.
“Towards a Spatially Explicit Model of Mobile Malware Dynamics” – Abstract:
Understanding malware propagation dynamics is important for assessing the threat of different types of malware and for developing suitable countermeasures. Although significant progress has been made towards understanding Internet worms, very little has been done on the problem of mobile worms. In contrast to Internet worms, which given the global connectivity of the Internet can spread on an almost fully connected graph, the propagation of mobile worms is spatially constrained as a node can only infect other nearby nodes, such as nodes that are concurrently on the same wireless LAN. Because of such spatial constraints, Internet worm analyses and models do not apply to mobile worms. In this project we borrow spatially explicit modelling tools that are well-established in ecological modelling and discuss how they relate to mobile malware. In particular, we rely on cellular automata (CA), which have been widely used for the analysis of the spatial movement of fish schools, the spatial response of individual fishing boats, and fire dynamics. Observing the similarity between fish movement, fire propagation and malware spreading, we propose a spatially explicit model that applies the knowledge derived from these ecological modelling examples to worm spreading dynamics. We present some preliminary observations on how the motion of the worms can be analyzed based on assumptions about time-dependent gradients in the relative attractiveness of nearby grid cells reflecting mobility and malware propagation properties.
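As a flavor of such spatially explicit models, here is a minimal susceptible/infected cellular-automaton sketch in Python; the parameters, neighborhood and uniform infection probability are illustrative assumptions, not the project's calibrated model.

import random

def step(grid, beta=0.3):
    # one CA step: each infected cell infects each of its four neighbors
    # with probability beta (no attractiveness gradient in this toy version)
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n and random.random() < beta:
                        new[ni][nj] = 1
    return new

random.seed(1)
grid = [[0] * 21 for _ in range(21)]
grid[10][10] = 1                    # single initially infected cell
for _ in range(15):
    grid = step(grid)
print(sum(map(sum, grid)), "of", 21 * 21, "cells infected after 15 steps")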
Aristidis Moustakas is a member of the scientific staff and a doctoral candidate at the Institute of Ecology of the Friedrich Schiller University in Germany. He holds a diploma in industrial engineering from the Technical University of Crete and a M.Sc. in environmental engineering from the same university. His main areas of interest are complex systems, ecosystem analysis and long-term climate and vegetation change.
Prof. Mema Roussopoulos, Harvard University
“P2P: More than Just (Illegal) File-Sharing Systems” – Abstract:
Peer-to-peer systems have become synonymous with file-sharing systems. Much of the focus of research in this area has been on providing algorithms to improve the efficiency, robustness, and security of routing in peer-to-peer systems, or designing services such as indexing and search for use by file-sharing applications running on these systems. There has been less focus on discovering new applications or enumerating the characteristics of applications for which peer-to-peer systems provide a viable, if not the only, solution. The purpose of this talk is to show that peer-to-peer is more than just (illegal) file-sharing. I will review the definition of peer-to-peer and describe a set of characteristics of applications for which peer-to-peer systems are a necessary solution. I will then describe two example applications: LOCKSS, a peer-to-peer digital preservation system, and Blossom, a decentralized peer-to-peer approach to overcoming systemic Internet fragmentation.
Mema Roussopoulos is currently an Assistant Professor of Computer Science on the Gordon McKay Endowment at Harvard University. Before joining Harvard, she was a Postdoctoral Fellow in the MosquitoNet Group at Stanford University. She received her PhD and Master's degrees in Computer Science from Stanford, and her Bachelor's degree in Computer Science from the University of Maryland at College Park. Her interests are in the areas of distributed systems, networking, mobile computing, and digital preservation.
Dr. Leonidas Kontothanassis, Cambridge Research Lab
“Content Delivery for Streaming Media” – Abstract:
In this talk I will cover the inner workings of a commercial content delivery network for streaming media. I will start by showing how the network is organized and will then show how end users are mapped to network nodes, how load balancing works and how content migrates from an origin node to the network nodes. I will also examine the transport mechanisms that ensure a high quality end-user experience while keeping transmission costs low. Finally I will cover other issues in operating such a network, including collecting usage information, allowing access only to authenticated users, and integrating the network with a storage subsystem.
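One standard building block for mapping users to nodes is consistent hashing; the Python sketch below is illustrative only (node names and parameters are hypothetical, and this is not claimed to be the commercial network's actual mechanism). Nodes and users are hashed onto a ring, so load spreads evenly and adding or removing a node only remaps the keys adjacent to it.

import bisect
import hashlib

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=50):
        # each node gets vnodes points on the ring to smooth the load
        self.points = sorted((h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))
        self.keys = [p for p, _ in self.points]

    def node_for(self, user_key):
        # first ring point clockwise from the user's hash
        i = bisect.bisect(self.keys, h(user_key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["edge-eu", "edge-us", "edge-asia"])   # hypothetical node names
for user in ["1.2.3.4", "5.6.7.8", "9.10.11.12"]:
    print(user, "->", ring.node_for(user))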
Leonidas Kontothanassis received his bachelor's degree in Computer Engineering from the University of Patras and his M.S. and Ph.D. degrees in Computer Science from the University of Rochester. He has since been a member of the Cambridge Research Lab, with a four-year hiatus as the principal streaming architect for Akamai Technologies. His research work and interests are in the areas of computer architecture, distributed systems, streaming services, and most recently medical applications. For a detailed list of publications, patents and research interests please visit http://www.hpl.hp.com/personal/Leonidas_Kontothanassis/
“Scheduling on the Grid: Research at the University of Manchester” – Abstract:
This talk will present research on Grid scheduling that takes place at the University of Manchester. The talk will argue for the need to understand better the mechanics of scheduling on the Grid and heterogeneous systems in general. The argument will be illustrated using essentially three different problems where better understanding and appropriate heuristics for scheduling at the middleware level can improve application performance significantly. The three problems are related to: (i) mapping workflows represented as a Directed Acyclic Graph; (ii) mapping parallel query plans on the Grid; and (iii) scheduling jobs onto compute resources using brokers and service level agreements. Solutions to aspects of these problems, along with experimental results, will also be presented.
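For problem (i), a common baseline is list scheduling in the spirit of HEFT-like heuristics. The Python sketch below is illustrative, not the Manchester group's actual algorithm, and omits communication costs for brevity: it maps a small DAG onto heterogeneous machines by choosing, for each task in topological order, the machine giving the earliest finish time.

def list_schedule(dag, exec_time, n_machines):
    # dag: task -> list of predecessors; exec_time: task -> per-machine cost
    order, placed, remaining = [], set(), set(dag)
    while remaining:                      # simple topological ordering
        ready = [t for t in remaining if all(p in placed for p in dag[t])]
        for t in sorted(ready):
            order.append(t); placed.add(t); remaining.discard(t)
    machine_free = [0.0] * n_machines
    sched = {}
    for t in order:                       # place each task for earliest finish
        ready_at = max((sched[p][2] for p in dag[t]), default=0.0)
        m = min(range(n_machines),
                key=lambda m: max(machine_free[m], ready_at) + exec_time[t][m])
        start = max(machine_free[m], ready_at)
        sched[t] = (m, start, start + exec_time[t][m])
        machine_free[m] = sched[t][2]
    return sched                          # task -> (machine, start, finish)

dag = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
exec_time = {"a": [2, 3], "b": [3, 1], "c": [2, 2], "d": [4, 2]}
print(list_schedule(dag, exec_time, n_machines=2))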
When he is not in Greece, Rizos Sakellariou teaches and carries out research at The University of Manchester, School of Computer Science. His main research interests relate to software for parallel and distributed systems. He earned his PhD from The University of Manchester in 1997 and has held appointments with Rice University, Universitat Politecnica de Catalunya and the University of Cyprus. More details are available from http://www.cs.man.ac.uk/~rizos
Prof. Constantinos Dovrolis, Georgia Institute of Technology
“Buffer Sizing for Congested Internet Links” – Abstract:
Packet buffers in router/switch interfaces constitute a central element of packet networks. The appropriate sizing of these buffers is an important and open research problem. Much of the previous work on buffer sizing modeled the traffic as an exogenous process, i.e., independent of the network state, ignoring the fact that the offered load from TCP flows depends on delays and losses in the network. In TCP-aware work, the objective has often been to maximize the utilization of the link, without considering the resulting loss rate. Also, previous TCP-aware buffer sizing schemes did not distinguish between flows that are bottlenecked at the given link and flows that are bottlenecked elsewhere, or that are limited by their size or advertised window. In this work, we derive the minimum buffer requirement for a Drop-Tail link, given constraints on the minimum utilization, maximum loss rate, and maximum queueing delay, when it is feasible to achieve all three constraints. Our results are applicable when most of the traffic (80-90%) at the given link is generated by large TCP flows that are bottlenecked at that link. For heterogeneous flows, we show that the buffer requirement depends on the harmonic mean of their round-trip times, and on the degree of loss synchronization. To limit the maximum loss rate, the buffer should be proportional to the number of flows that are bottlenecked at that link, when that number exceeds a certain threshold. The maximum queueing delay constraint, on the other hand, provides a simple upper bound on the buffer requirement. We also describe how to estimate the parameters of our buffer sizing formula from packet and loss traces, evaluate the proposed model with simulations, and compare it with two other buffer provisioning schemes.
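A back-of-the-envelope sketch of the shape of such a sizing rule, in Python; the exact expression is the paper's contribution, so the terms, constants and parameter names below are illustrative assumptions only. It combines the three constraints named above: a utilization term driven by the harmonic mean RTT, a loss-rate term proportional to the number of bottlenecked flows, and a queueing-delay cap.

def buffer_size(capacity_pps, rtts, sync_factor=1.0,
                per_flow_pkts=6, max_delay=0.2):
    # capacity in packets/s; rtts of the flows bottlenecked here, in seconds
    n = len(rtts)
    harmonic_rtt = n / sum(1.0 / r for r in rtts)         # harmonic mean RTT
    bdp_term = sync_factor * capacity_pps * harmonic_rtt  # utilization constraint
    loss_term = per_flow_pkts * n                         # loss-rate constraint
    delay_cap = capacity_pps * max_delay                  # queueing-delay bound
    return min(max(bdp_term, loss_term), delay_cap)

rtts = [0.02, 0.05, 0.1, 0.2]                  # heterogeneous RTTs, seconds
print(int(buffer_size(capacity_pps=12500, rtts=rtts)), "packets")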
Constantinos Dovrolis is an Assistant Professor at the College of Computing of the Georgia Institute of Technology. He received the Computer Engineering degree from the Technical University of Crete (Greece) in 1995, the M.S. degree from the University of Rochester in 1996, and the Ph.D. degree from the University of Wisconsin-Madison in 2000. His research interests include methodologies and applications of network measurements, overlay networks, router architectures, and routing security.
“On the Predictability of Large Transfer TCP Throughput” – Abstract:
With the advent of overlay and peer-to-peer networks, Grid computing, and CDNs, network performance prediction becomes an essential task. Predicting the throughput of large TCP transfers, in particular, has attracted much attention. In this work, we focus on the design, empirical evaluation, and analysis of TCP throughput predictors for a broad class of applications. We first classify TCP throughput prediction techniques into two categories: Formula-Based (FB) and History-Based (HB). Within each class, we develop representative prediction algorithms, which we then evaluate empirically over the RON testbed. FB prediction relies on mathematical models that express the TCP throughput as a function of the characteristics of the network path (e.g., RTT, loss rate, available bandwidth). FB prediction does not rely on previous TCP transfers in the given path, and it can be performed with non-intrusive network measurements. We show, however, that the FB method is accurate only if the TCP transfer is window-limited to the point that it does not saturate the underlying path, and explain the main causes of the prediction errors. HB techniques predict the throughput of TCP flows from a time series of previous TCP throughput measurements on the same path, when such a history is available. We show that even simple HB predictors, such as Moving Average and Holt-Winters, using a history of limited and sporadic samples, can be quite accurate. On the negative side, HB predictors are highly path-dependent. Using simple queueing models, we explain the cause of such path dependencies based on two key factors: the load on the path, and the degree of statistical multiplexing.
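The two simple HB predictors named above are easy to state; here is a Python sketch with illustrative parameters (the Holt-Winters variant shown is the non-seasonal one, i.e., Holt's double exponential smoothing, and the history values are made up):

def moving_average(history, window=5):
    recent = history[-window:]
    return sum(recent) / len(recent)

def holt(history, alpha=0.5, beta=0.3):
    # double exponential smoothing: tracks a level and a trend
    level, trend = history[0], 0.0
    for x in history[1:]:
        last = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last) + (1 - beta) * trend
    return level + trend                  # one-step-ahead prediction

history = [42, 40, 45, 43, 47, 50, 48, 52]   # past TCP throughputs, Mbps
print(moving_average(history), round(holt(history), 1))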
“End-to-end Estimation of the Available Bandwidth Variation Range” – Abstract:
The available bandwidth (avail-bw) of a network path is an important performance metric and its end-to-end estimation has recently received significant attention. Previous work focused on the estimation of the average avail-bw, ignoring the significant variability of this metric in different time scales. In this paper, we show how to estimate a given percentile of the avail-bw distribution at a user-specified time scale. If two estimated percentiles cover the bulk of the distribution (say 10% to 90%), the user can obtain a practical estimate for the avail-bw variation range. We present two estimation techniques. The first is iterative and non-parametric, meaning that it is more appropriate for very short time scales (typically less than 100ms), or in bottlenecks with limited flow multiplexing (where the avail-bw distribution may be non-Gaussian). The second technique is parametric, because it assumes that the avail-bw follows the Gaussian distribution, and it can produce an estimate faster because it is not iterative. The two techniques have been implemented in a measurement tool called Pathvar. Pathvar can track the avail-bw variation range within 10-20%, even under non-stationary conditions. Finally, we identify four factors that play a crucial role in the variation range of the avail-bw: traffic load, number of competing flows, rate of competing flows, and of course the measurement time scale.
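The parametric technique's core step is simple to illustrate: assuming the avail-bw at the chosen time scale is roughly Gaussian with measured mean and standard deviation, two inverse-CDF evaluations give the variation range. The Python sketch below uses made-up samples; Pathvar's actual estimator is more involved.

import statistics
from statistics import NormalDist

def variation_range(samples, lo=0.10, hi=0.90):
    # percentiles of a fitted Gaussian: mu + z_p * sigma via the inverse CDF
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    nd = NormalDist(mu, sigma)
    return nd.inv_cdf(lo), nd.inv_cdf(hi)

samples = [92, 105, 88, 110, 97, 101, 95, 99, 103, 93]   # avail-bw, Mbps
lo, hi = variation_range(samples)
print(f"estimated 10th-90th percentile range: {lo:.1f} - {hi:.1f} Mbps")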
Dr. Fotis Liotopoulos, Aristotle University of Thessaloniki
“Security and Performance Issues for Computing and Networking systems to support Large-Scale Electronic Elections” – Abstract:
During the last few years, there has been significant research activity on issues related to electronic voting (e-voting) and electronic elections (e-lections). In its majority, this work deals mostly with security issues and less with performance analysis of the problem. In this talk, we examine both security and performance issues that are relevant to conducting "large-scale e-lections", that is, e-lections that are carried out at a national level, with the participation of millions of voters. In such elections, security and performance issues are inter-related.
For an integrated approach to the problem, in this talk we propose:
a) A communication security protocol for large-scale e-lections based on combined symmetric and asymmetric cryptography.
b) A performance analysis model for a multi-tier, client-server system, based on a closed queuing network, which is solved using the "exact mean value analysis" method.
c) A procedure for ensuring the integrity and validity of the e-lection process, with respect to the management of the central information systems.
d) A method that ensures verifiability of the final election results, based on public encrypted files.
Dr. Fotis K. Liotopoulos received his diploma in Computer Engineering from the Univ. of Patras, Greece, in 1988 and his M.Sc. and Ph.D. degrees from the Computer Sciences and Electrical & Computer Engineering dept. of the Univ. of Wisconsin-Madison, USA, in 1991 and 1996, respectively. He worked with the Research Academic Computer Technology Institute (ra-CTI) as a senior research engineer, research-unit manager and consultant for the public and private sector. He worked as a Lecturer and Senior Lecturer for CITY College, an affiliated institution of the Univ. of Sheffield, where he taught graduate (M.Sc.) and undergraduate courses in Computer Networks and Computer Architecture. He has taught Computer Architecture at the ECE dept. of the Aristotle University of Thessaloniki as a PD407 adjunct lecturer and he has been a part-time teaching associate with the Hellenic Open University for the last 4 years, teaching graduate and undergraduate courses in Computer Architecture, Digital Systems and Computer Networks. Dr. Liotopoulos is the author or co-author of more than 30 research papers and has served as a reviewer and/or committee member for several international journals, conferences and books. His areas of interest include computer and network architectures and protocols, digital communications, broadband switching and routing, parallel & distributed architectures, multiprocessor / multicomputer systems, parallel processing and performance analysis & optimization. He has participated in several R&D projects within the scope of his expertise. Dr. Liotopoulos is a senior member of IEEE, member of ACM, the British Computing Society, American Mathematical Society and SIGMA-XI, member of the IEEE Technical Committees TCCSR, TCCA, TCSA and member of the Editorial Board of the "Wireless Communications & Mobile Computing" Journal (Wiley & Sons).
Prof. Alexandros Labrinidis, University of Pittsburgh
“Data Management for Sensor Networks: Challenges and Success Stories” – Abstract:
Sensor networks have recently received significant attention from many different research communities. In this talk, we present some of the data management challenges in deploying efficient sensor networks. We then briefly describe two related projects of the Advanced Data Management Technologies Lab at the University of Pittsburgh: TiNA, which deals with efficient in-network data aggregation, and GaNC, which addresses semantic-aware network configuration in sensor networks.
Alexandros Labrinidis received B.Sc. and M.Sc. degrees in Computer Science from the University of Crete, Greece (1993 & 1995), and M.Sc. and Ph.D. degrees in Computer Science from the University of Maryland, College Park (1997 & 2002). He is currently an assistant professor at the University of Pittsburgh and an adjunct assistant professor at Carnegie Mellon University. He is also the information director for ACM SIGMOD and an associate editor for SIGMOD Record. His research interests include web-aware data management, mobile data management, data warehousing, p2p data management and data management in sensor networks. In 2005, he served as the program co-chair of MobiDE 2005 (the 4th International ACM Workshop on Data Engineering for Wireless and Mobile Access, colocated with SIGMOD) and DMSN (the 2nd International VLDB Workshop on Data Management for Sensor Networks, colocated with VLDB).
Dr. Evangelos Kotsovinos, University of Cambridge
“Global Public Computing: Deploying global-scale services for fun and profit” – Abstract:
High-bandwidth networking and cheap computing hardware are leading to a world in which the resources of one machine are available to groups of users beyond their immediate owner. Grid computing and similar schemes target particular usage scenarios, where simplifying assumptions (centralised ownership of resources, cooperative users, and trusted applications) can be made. Members of the public who are not involved in Grid communities or wish to deploy out-of-the-box distributed services, such as game servers, have no means to acquire resources on large numbers of machines around the world to launch their tasks. In this talk, I present a new distributed computing paradigm, termed global public computing, which allows any user to run any code anywhere, by pricing computing resources, and ultimately charging users for resources consumed. I describe the design and implementation of the XenoServer Open Platform, putting this vision into practice, and present results of experimental evaluation, showing that the platform is efficient and scalable; it allows the global-scale deployment of complex services in less than 45 seconds, and could scale to millions of concurrent sessions without presenting performance bottlenecks. I will present effective service deployment models for launching distributed services on large numbers of machines around the world quickly and efficiently. Also, I will outline issues related to trust management for global public computing systems, and discuss potential mechanisms for facilitating trust in such environments.
Evangelos Kotsovinos is a Research Associate at the Systems Research Group, Computer Laboratory, University of Cambridge, where he recently completed his PhD. His work is on the XenoServer Open Platform, an infrastructure that allows the low-cost deployment of untrusted global-scale services. His research interests are in the areas of large-scale distributed systems, resource management, trust management, and ubiquitous computing. Previously, Evangelos obtained his first degree in Computer Science from the University of Crete in Greece, and conducted research at ICS-FORTH on distributed multimedia servers.
“Bandwidth Estimation in Packet Networks: Measurement Techniques and Applications” – Abstract:
In a packet network, the terms "bandwidth" or "throughput" often characterize the amount of data that a network path can transfer per unit of time. Bandwidth estimation is a relatively recent research area that aims to infer the capacity and available bandwidth of an end-to-end network path, based on non-intrusive measurements at the path end-hosts. Bandwidth estimation can be used in adaptive streaming applications, TCP throughput optimization, overlay network routing, peer-to-peer file distribution, and traffic engineering. In this talk, I will summarize the main contributions of our Bandwidth Estimation project. Specifically, I will present the principles of packet pair and packet train dispersion, and the properties of self-loading periodic streams. These techniques have been used in the development of two active measurement tools, called Pathrate and Pathload. Pathrate measures the end-to-end capacity (bottleneck bandwidth) and Pathload measures the end-to-end available bandwidth of a network path. Finally, I will explain how to use bandwidth estimation in automatic TCP socket buffer sizing.
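The packet-pair principle is easy to sketch: two back-to-back packets of size L exit the narrow link spaced by its transmission time, so capacity is roughly L divided by the received dispersion. The toy Python code below is illustrative only (Pathrate's real filtering is far more sophisticated); it takes the mode over many pair samples to suppress cross-traffic distortion.

from collections import Counter

def capacity_from_pairs(dispersions_s, packet_bytes=1500, bin_mbps=5):
    # per-pair estimate C = L / dispersion, then take the mode over bins
    estimates = [packet_bytes * 8 / d / 1e6 for d in dispersions_s]   # Mbps
    binned = [round(e / bin_mbps) * bin_mbps for e in estimates]
    return Counter(binned).most_common(1)[0][0]

# most pairs reflect a ~100 Mbps narrow link; a few are distorted by cross traffic
pairs = [0.00012, 0.000121, 0.00012, 0.00030, 0.000119, 0.00012, 0.00025]
print(capacity_from_pairs(pairs), "Mbps")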
Prof. Marios Dikaiakos, University of Cyprus
“Test-driving the grid” – Abstract:
An important factor that needs to be taken into account by end-users and middleware components when mapping applications to the Grid is the performance capacity of hardware resources attached to the Grid and made available through its Virtual Organizations (VOs). In this talk, we examine the problem of characterizing the performance capacity of Grid resources using benchmarking. We discuss the conditions under which such characterization experiments can be implemented in a Grid setting and present the challenges that arise in this context. We specify a small number of performance metrics and propose a suite of micro-benchmarks to estimate these metrics for clusters that belong to large Virtual Organizations. We describe GridBench, a tool developed to administer benchmarking experiments, publish their results, and produce graphical representations of their metrics. We describe benchmarking experiments conducted with, and published through, GridBench, and show how they can help end-users assess the performance capacity of resources that belong to a target Virtual Organization. Finally, we examine the advantages of this approach over solutions implemented currently in existing Grid infrastructures. We conclude that it is essential to provide benchmarking services in the Grid infrastructure, in order to enable the attachment of performance-related metadata to resources belonging to Virtual Organizations and the retrieval of such metadata by end-users and other Grid systems.
Undergraduate studies at the National Technical University of Athens, Greece (Dipl.-Ing. in Electrical Engineering, summa cum laude, 1988); graduate studies at Princeton University, USA (M.A. and Ph.D. in Computer Science, 1991 and 1994). He has worked as a Research Associate and taught at the University of Washington in Seattle (1994-1995) and at the University of Cyprus (Visiting Assistant Professor, 1996). His research interests include Parallel and Distributed Systems with an emphasis on Performance Evaluation and the Internet.
Dr. Panos Trimintzios, University of Surrey
“Traffic Engineering for Quality of Service Provisioning of IP Differentiated Services Networks” – Abstract:
Next Generation IP-based Networks will offer Quality of Service (QoS) guarantees by deploying technologies such as Differentiated Services (DiffServ) and Multi-Protocol Label Switching (MPLS) for traffic engineering and network-wide resource management. Despite the progress already made in the standardisation of DiffServ and MPLS, a number of issues, such as edge-to-edge intra-domain and inter-domain QoS provisioning and management, were left unspecified. In this talk we will focus on an extended Bandwidth Broker architecture for the management and control of such networks, including the following aspects: emerging Service Level Agreements (SLAs) and Service Level Specifications (SLSs) for the subscription to QoS-based services; algorithms for off-line traffic engineering and provisioning; and a policy-based overall framework for traffic engineering and service management.
Dr. Panos Trimintzios received his PhD from the University of Surrey, UK, in 2004; he also holds a BSc and an MSc in Computer Science, both from the University of Crete, Greece, received in 1996 and 1998 respectively. Since 1998 he has been a Research Fellow at the Centre for Communication Systems Research (CCSR) within the Electronic Engineering Department of the University of Surrey. His main research interests include QoS provisioning, traffic engineering, network performance control, policy-based networking, network monitoring, and network and service management.
“Congestion Control for Sensor Networks” – Abstract:
Wireless sensor networks operate under light load and then suddenly become active in response to detected or monitored events. This results in potentially large, sudden, correlated impulses of data being generated by the sensor field that must be delivered to a small set of sinks without significantly disrupting the fidelity of sensing applications. We observe that it is during these impulse periods that congestion is most likely, and that the information being transported is then of most importance - and therefore most likely to be lost. We believe that without solving this congestion problem the wide-scale adoption of self-organizing sensor network technology could be jeopardized. In this talk, we will discuss this problem and detail one solution for alleviating it called CODA (COngestion Detection and Avoidance).
Andrew T. Campbell is an Associate Professor of Electrical Engineering at Columbia University and a member of the COMET Group. Andrew is working on emerging architectures and programmability for wireless networks. He received his PhD in Computer Science in 1996, and the NSF CAREER Award for his research in programmable mobile networking in 1999. Currently, he is on sabbatical as a UK EPSRC Visiting Fellow at the Computer Lab, Cambridge University.
Mr. Nikos Hardavellas, Carnegie Mellon
Abstract:
Coherence misses in shared-memory multiprocessors account for a substantial fraction of execution time in many important scientific and commercial workloads. This talk presents the novel observation that the order in which shared data are consumed by one processor is similar to the order in which they were produced by another. We investigate this phenomenon, called temporal correlation, and demonstrate that it can be exploited to send Store-ORDered Streams (SORDS) of shared data from producers to consumers, thereby eliminating coherent read misses. Based on this observation, we present a design that uses a set of cooperating hardware predictors to extract temporal correlation from shared data, and mechanisms for timely forwarding of these data. Analysis of our SORDS design shows that it can eliminate between 36% and 100% of all coherent read misses in scientific workloads and between 23% and 48% in OLTP workloads.
Nikos Hardavellas received a BSc in Computer Science from the University of Crete, Heraklion, in 1995, an MSc in Computer Science from the University of Rochester, NY, in 1997, and is currently pursuing a PhD in Computer Science at Carnegie Mellon. From 1997 to 2002 he held senior software engineer positions at Digital, Compaq, and Hewlett-Packard, where he contributed to the design of the Alpha EV6 (21264) through EV9 (21564) generations of microprocessors and the GS320, ES45, and GS1280 multiprocessor systems. In 2001 he joined the design team of a new high-end Itanium-based HP server. While at the University of Crete he contributed to the TelePACS project as a member of the Center for Medical Informatics and Health Telematics Applications, ICS, FORTH.
“Secure Overlay Services (SOS): How to combat DDoS attacks without (a lot of) network support” – Abstract:
In this talk, I will discuss the Secure Overlay Services (SOS) architecture, which provides service to hosts targeted by DDoS attacks by using a combination of overlay networking, distributed firewalling, and simple packet filtering. I will describe the original architecture as well as two extensions, WebSOS and SpreadSpectrum, that address different deployment and use strategies of the basic approach. Our prototype imposes a two-fold increase on end-to-end latency when subjected to a DDoS attack, making it a viable approach to providing uninterrupted service in the face of attacks.
Angelos Keromytis has been an assistant professor with the Department of Computer Science at Columbia University since 2001, and director of the Network Security Laboratory. He received his B.Sc. in Computer Science from the University of Crete, Greece, and his M.Sc. and Ph.D. from the Computer and Information Science (CIS) Department, University of Pennsylvania. His current research interests revolve around systems and network security, and cryptography. Previous research interests include active networks, trust management systems, and systems issues involving hardware cryptographic acceleration. His recent work has been on survivable system and network architectures.
“Protecting the Internet Infrastructure” – Abstract:
Most users think of the Internet as the collection of services that are available: mostly the Web, email, and data sharing. All Internet services, however, depend on routers to forward traffic, and on links between routers over which this traffic flows. These, the Internet infrastructure, are increasingly the target of attacks, and protecting them is a much harder problem than protecting end-services. This talk gives an overview of the Internet infrastructure, presents the current Distributed Denial-of-Service attacks and routing attacks that are plaguing it, and discusses current and future research in this area.
John Ioannidis is a Senior Research Scientist at Columbia University. The underlying theme of his research has been protecting large-scale infrastructures. In recent years, he has worked on ways of improving the state of interdomain routing, with emphasis on scalable and incrementally-deployable protocols. Ioannidis has also worked on methods to defend against distributed denial of service attacks, in particular the Pushback mechanism, a self-regulating network-based approach to alleviating congestion caused by DDoS attacks. He has also done extensive work on Trust Management and its applications. Older work includes the original Mobile-IP work, swIPe (the precursor to IPsec), and the original implementations of IPsec for BSD and FreeS/WAN.
“An End-Point Solution to Zero-day Worms” – Abstract:
I will present a reactive mechanism that protects software services against network worms and other similar malware for which no known fix is available at the time of infection. The system works by automatically patching the vulnerable software. Our preliminary results against worms like Slammer and Blaster indicate an 80% success rate in automatically identifying and fixing the flaw in the source code. I will discuss the design, implementation, experimental evaluation, and limitations of the system, as well as our plans for overcoming these. The system is part of SABER, a survivable services architecture developed at the Network Security Lab at Columbia.
Prof. Calton Pu, Georgia Tech
“Infosphere: A Midterm Update on Infopipes” – Abstract:
In the pervasive/ubiquitous computing environment of the near future, we will have many computers per person and many sensors linking these computers to the real world. In the Infosphere project, we have been building systems software tools to support information flow-driven applications. We have proposed the Infopipe abstraction to link information producers to consumers. In addition to their basic function of transporting information, Infopipes manage and manipulate the concrete delivery properties of the information flowing through them, such as freshness. Infopipe creation and composition involve the specification of these properties and system resource management mechanisms to maintain them. The property specifications are translated automatically by the system into an actual implementation with the desired behavior. We have designed and implemented an Infopipe specification language, an XML-level intermediate language, and translators from these specifications to executable code on a number of platforms, including C/sockets, CORBA AVStreams, Java RMI, and the Echo publish/subscribe middleware. Experimental measurements show a small additional overhead for automatically generated code compared to manually written code, but significant gains in portability, maintainability, and evolutionary development of both the software tools and the code they generate.
Calton Pu was born in Taiwan, but grew up in Brazil. He received his PhD from the University of Washington in 1986 and served on the faculty of Columbia University and the Oregon Graduate Institute. He currently holds the position of Professor and John P. Imlay, Jr. Chair in Software at the College of Computing, Georgia Institute of Technology. He leads the Infosphere project, building software tools to support information flow-driven applications such as digital libraries and electronic commerce. Infosphere builds on his previous and ongoing research interests. First, he has been working on next-generation operating system kernels to achieve high performance, adaptiveness, security, and modularity, using program specialization, software feedback, and domain-specific languages. This area has included projects such as Synthetix, Immunix, Microlanguages, and Microfeedback, applied to distributed multimedia and system survivability. Second, he has been working on new data and transaction management by extending database technology. This area has included projects such as Epsilon Serializability, the Reflective Transaction Framework, and Continual Queries over the Internet. His collaborations include applications of these techniques to scientific research on macromolecular structure data, weather data, and environmental data, as well as in industrial settings. He has published more than 40 journal papers and book chapters and 130 conference and refereed workshop papers, and has served on more than 70 program committees, including as co-PC chair of SRDS'95, co-general chair of ICDE'97, co-PC chair of ICDE'99, general chair of CIKM'01, co-PC chair of COOPIS'02, and co-PC chair of SRDS'03. He is currently an associate editor of DAPD, IJODL, and WWWJ. His research is currently funded by NSF, DARPA, Intel, HP, and other sources.
“Consistent Security Policy Enforcement in Decentralized Systems” – Abstract:
Security requirements for a system may be represented symbolically as a policy specification. This enables mechanical translation of the policy into a set of enforcement actions, eliminating many steps at which human error can creep in. As the “semantic gap” between high-level (and global) policies and low-level (and highly localized) enforcement actions seems particularly large, we believe that a good choice of abstraction, coupled with a set of translation tools, can have significant operational impact on system security. This impact is particularly strong for complex systems such as those constructed from decentralized components. Our approach is a policy enforcement programming language (PEPL) allowing general-purpose programming of policies at a high level of abstraction or, perhaps more interestingly, the use of abstract PEPL objects to erect domain-specific policy specification infrastructures. In this talk I will give an overview of our proposed security architecture. I will describe the policy language, which enables high-level security policy specification, the adaptation layer supporting the policy-to-enforcement mapping, and the runtime support for policy coordination in a decentralized computing environment.
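As an illustration of the policy-to-enforcement mapping (a hypothetical sketch; PEPL's actual syntax and runtime are not shown in the abstract), a high-level rule such as “only administrators may write /etc/config” might be lowered into a localized, default-deny check like the following C code:

    /* Hypothetical compiled policy: each rule is a (role, operation,
     * object) triple; the enforcement hook allows an action only if
     * some rule permits it (default deny). */
    #include <stdio.h>
    #include <string.h>

    typedef enum { ROLE_USER, ROLE_ADMIN } role_t;
    typedef enum { OP_READ, OP_WRITE } op_t;

    typedef struct {
        role_t      role;
        op_t        op;
        const char *object;
    } rule_t;

    static const rule_t rules[] = {
        { ROLE_ADMIN, OP_WRITE, "/etc/config" },
        { ROLE_USER,  OP_READ,  "/etc/config" },
    };

    /* Localized enforcement action generated from the global policy. */
    static int policy_allows(role_t role, op_t op, const char *object)
    {
        for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
            if (rules[i].role == role && rules[i].op == op &&
                strcmp(rules[i].object, object) == 0)
                return 1;
        return 0;  /* default deny */
    }

    int main(void)
    {
        printf("user write /etc/config: %s\n",
               policy_allows(ROLE_USER, OP_WRITE, "/etc/config") ? "allow" : "deny");
        printf("admin write /etc/config: %s\n",
               policy_allows(ROLE_ADMIN, OP_WRITE, "/etc/config") ? "allow" : "deny");
        return 0;
    }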
Sotiris Ioannidis is a Ph.D. candidate at the CIS Department, University of Pennsylvania, USA. He received his B.S. degree in Mathematics and his M.S. degree in Computer Science from the University of Crete, Greece, in 1994 and 1996 respectively. He also holds an M.S. degree in Computer Science from the University of Rochester, USA. His main research areas are operating system and network security, specifically security policy languages, intrusion detection and prevention, and cryptography.
“From Simulink to Lustre to TTA: a layered approach for distributed embedded applications” – Abstract:
We present a layered end-to-end approach for the design and implementation of embedded software on a distributed platform. The approach comprises a high-level modeling and simulation layer (Mathworks’ Simulink), a middle-level programming and validation layer (the synchronous language Lustre) and a low-level execution layer (the Time-Triggered Architecture, or TTA). We describe algorithms and tools to pass from one layer to the next: a translator from Simulink to Lustre, a set of real-time and code-distribution extensions to Lustre, and implementation tools for decomposing a Lustre program into tasks, scheduling the tasks as a multi-period multi-processor scheduling problem, and distributing the tasks on the execution platform along with the necessary “glue” code.
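As background for the lower layers, Lustre programs are conventionally compiled into C “step” functions invoked once per cycle; the following minimal sketch (illustrative only; the node and all names are hypothetical, not output of the actual toolchain) shows this compilation scheme for a simple counter node:

    /* Hypothetical Lustre-to-C compilation: a node becomes a state
     * struct plus a step function called once per (time-triggered)
     * cycle.
     *
     * Lustre source (illustrative):
     *     node counter (reset : bool) returns (n : int);
     *     let
     *       n = if reset then 0 else (0 -> pre n) + 1;
     *     tel
     */
    #include <stdio.h>
    #include <stdbool.h>

    typedef struct {
        int  pre_n;  /* value of n at the previous cycle (pre n) */
        bool init;   /* true on the first cycle (the "->" operator) */
    } counter_state;

    static void counter_init(counter_state *s)
    {
        s->pre_n = 0;
        s->init = true;
    }

    /* One synchronous step: read inputs, produce outputs, update state. */
    static int counter_step(counter_state *s, bool reset)
    {
        int n = reset ? 0 : (s->init ? 0 : s->pre_n) + 1;
        s->pre_n = n;
        s->init = false;
        return n;
    }

    int main(void)
    {
        counter_state s;
        counter_init(&s);
        for (int cycle = 0; cycle < 5; cycle++)
            printf("cycle %d: n = %d\n", cycle, counter_step(&s, cycle == 3));
        return 0;
    }

On a time-triggered platform, such step functions would be packaged into tasks and invoked by the static schedule, which is precisely the decomposition and scheduling problem the implementation tools address.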