My general area of research is networking or, more broadly, the set of issues that arise in systems that allow a multiplicity of individuals and devices to interact. Those issues lie both in understanding how to build the best possible infrastructure to support those interactions, and in exploring the new functionality this infrastructure enables and how that functionality may, in turn, affect the development of the infrastructure itself.
The first set of issues spans “traditional” networking topics such as routing, traffic engineering, network optimization, and scheduling, while the second deals with broader issues that reflect the opportunities and challenges of exploiting a ubiquitous communication infrastructure. One example is a recent project on “network economics” (see “Current Projects” for details) that investigates how various economic factors influence the use and adoption of new network technologies.
A common theme across many of the projects I am involved in is the search for mechanisms or solutions that err on the side of simplicity rather than optimality. This admittedly debatable bias is to some extent rooted in lessons learned from many years of investigating network QoS, which led me to conclude that in many cases the cost of implementing optimal solutions makes them either infeasible or more expensive than the resources they manage to save. Don’t get me wrong: optimal solutions are critical as benchmarks that let us gauge how good a job we are doing, and they often provide the fundamental insight needed to devise a good, practical solution. However, it is important to realize that affecting real systems often requires going beyond them, and to remain aware that in many cases we are simply optimizing for the wrong metrics (the ones that map onto problems we know how to solve…).
One manifestation of my interest in “simple” solutions is a set of activities under the broad umbrella of “Robust Networking.” One example involves developing approaches that leverage the diversity inherent in large-scale networks such as the Internet, using it to improve resiliency to the many unavoidable impairments that continuously plague such large, distributed systems (see the presentation “Size Does Matter! From the Age of Closed-Loop to the Age of Open-Loop,” given at NeXtworking’07 – 2nd COST-NSF Workshop on Future Internet, April 2007, Berlin, Germany, together with the accompanying one-page abstract, for additional details).