Dr. David Bakken’s Bio Blurb & Current Seminar Abstracts
Fall 2025
Note: hosts should feel free to condense any verbiage below to meet requirements for length/space. I’m not super picky, so you don’t need to send it back to me to approve your edits.
Contents
1. GridStat (60 or 90 minutes)
2. Grid Decentralization (60, 90, or 120 minutes)
3. Distributed Computing Overview for Engineers (60 or 90 minutes)
4. Distributed Coordination (60 or 90 minutes)
5. Quality Objects (QuO)
Dr. David Bakken is Professor Emeritus of Computer Science at Washington State University. His research interests include wide-area middleware and dependable computing. Since 1999, he has worked closely with WSU’s electric power research program, ranked among the top three in the US, on developing better data delivery for the bulk (wide-area) power grid, and he is considered the leading expert on this topic. Before 1999 he was a research scientist at BBN, the research lab that built the first internet (the ARPANET) in 1969 and researched the first middleware starting in 1979. The QuO middleware framework for WAN QoS and adaptation has flown in Boeing experimental aircraft, been cited close to 2000 times, and been used in multiple DARPA programs. Dr. Bakken chaired a panel on cloud computing for the grid at IEEE Innovative Smart Grid Technologies (ISGT) in 2014 and one on edge computing in 2018. He organized a new workshop, Trustworthiness of Smart Grids (ToSG), at DSN 2014, the premier international conference on dependable computing; the next iteration was at ISGT 2015. For more info see tosg-workshop.org.
Note: the seminar times below include time for questions. If more time is available, I can certainly give more details, ask more thought-provoking questions, lead a discussion, etc.
Also, these seminars are targeted at a mixed power engineering and computer science audience. I can also give separate versions tailored to power engineers and to computer scientists; in fact, I did exactly this (both versions on the same day) at both the Georgia Institute of Technology and Arizona State University in the past.
Note: there is a little overlap between some of the seminars below.
Title: Towards More Effective and Resilient Power Apps Exploiting Modern Communications and Computational Resources
Abstract: Power grids are unique in that supply and demand must be balanced in real time over a few thousand kilometers or more. These grids are becoming increasingly stressed and are relying more heavily on power application programs with extreme requirements on the communication system. This seminar first describes the questions that power researchers need to ask themselves in order to exploit the much better communications and computation (including the cloud) that have been available for the last decade or two. It then overviews these applications, their requirements, and the appropriate implementation tradeoffs, which are quite different from those of any other domain and of the internet at large. Next, the GridStat publish-subscribe system is described and its grid-appropriate security mechanisms explained.
Title: Grid Decentralization: Trends, Challenges, and Solutions
Abstract: It has long been well known that power grids are becoming more stressed due to many factors, including inadequate transmission line growth, renewables that do not provide rotational inertia, and larger sets of system variables. The decentralization of the last decade or more is adding further stress due to factors that include prosumers and semi-independent microgrids. Indeed, the speed at which disturbances travel has gone from 100 miles/hour in 2000 to 1000 miles/second in 2025, a speedup of more than four orders of magnitude. In this seminar, I will overview these issues and the additional challenges that decentralization imposes on grids, including local-only control. I will then discuss the distributed computing concept of cooperating groups and how such groups can be (re)formed using physical, cyber, or cyber-physical criteria. I will then overview three case studies: decentralized linear state estimation, decentralized remedial action schemes (RAS), and decentralized voltage stability. Finally, I will overview distributed consensus/agreement, which distributed computing researchers have studied for almost 50 years, and motivate its need in decentralized power grids (the details of which are an entire other seminar).
Title: A Glimpse of Distributed Computing
Abstract: Computer networking gets bytes of data from Point A to Point B with some statistical properties (delay, bandwidth, drop rate, …). Distributed computing (DC) evolved in the late 1970s to answer the question: how do we best use networks to program distributed applications? That is, how do we replicate, synchronize, coordinate, send data structures between computers, and so on? From this arose middleware, consensus/agreement protocols, and many other distributed algorithms.
In this seminar I will give attendees a good sense of what distributed computing is (I note that I am an applied DC researcher, not a theory-only one, which is what 80% or more of DC researchers are). First, I will quickly overview the history of distributed computing, to give it context. I will then introduce basic DC concepts and explain the difference between a local call (to a procedure in the same address space) and a remote one. We will also touch on the heterogeneity (diversity of resource types: CPUs, network technologies, programming languages, operating systems) that is inherent to DC. I will then explain middleware, software that handles many of the difficulties of programming a distributed system and has been considered best practice in virtually every industry but the power sector since the 1990s or earlier. I will then explain how middleware is often used not for a greenfield opportunity -- creating services and apps from scratch -- but rather to integrate legacy systems. I will then compare programming with middleware to the only other option for DC: programming with the network socket interface. I will conclude with discussion and questions.
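To make the socket-versus-middleware contrast concrete, here is a small runnable sketch (my own illustration, not material from the seminar; names such as AdderStub are hypothetical). The raw-socket server must marshal and unmarshal bytes by hand, while the middleware-style stub hides that plumbing so the call site reads like a local procedure call:

```python
import json
import socket
import threading

def socket_server(port, ready):
    """Raw-socket service: all marshalling is done by hand."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()                                   # signal that we are accepting
    conn, _ = srv.accept()
    req = json.loads(conn.recv(4096).decode())    # manual unmarshalling
    result = sum(req["args"])                     # the actual "business logic"
    conn.sendall(json.dumps({"result": result}).encode())
    conn.close()
    srv.close()

class AdderStub:
    """Middleware-style proxy: a remote call that *looks* local."""
    def __init__(self, host, port):
        self.host, self.port = host, port

    def add(self, *args):
        # Marshalling, connection handling, and unmarshalling are hidden
        # inside the stub, just as generated middleware stubs hide them.
        s = socket.create_connection((self.host, self.port))
        s.sendall(json.dumps({"args": list(args)}).encode())
        reply = json.loads(s.recv(4096).decode())
        s.close()
        return reply["result"]

ready = threading.Event()
t = threading.Thread(target=socket_server, args=(5099, ready))
t.start()
ready.wait()
stub = AdderStub("127.0.0.1", 5099)
print(stub.add(1, 2, 3))   # call site reads like a local call; prints 6
t.join()
```

Note that the stub only *looks* local: the call can still fail partway through, and its latency is orders of magnitude higher than a same-address-space call, which is exactly the kind of local-versus-remote difference discussed above.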
Title: Distributed Coordination (If Secure and Smart) Enables the IoT
Abstract: The Internet of Things (IoT) movement posits a pervasive ensemble of formerly dumb and unconnected devices enabled not only with communications but also with (presumed) intelligence. Our society is rapidly moving in this direction, for better or worse. In many IoT application domains, and also in electric power, it is not just communication per se but also coordination that is fundamental. Unfortunately, domain specialists in almost every domain have zero background in, or even knowledge of the existence of, distributed computing, let alone distributed coordination. Worse, most distributed coordination papers (going back to 1979) have seemingly never been implemented: they were written by theory-only professors to convince other theory-only reviewers of the properties of their algorithms, without giving pseudocode. They are thus utterly inaccessible to IoT practitioners. In this presentation, we will overview coordination challenges inherent in the IoT vision. We will overview such challenges in depth for the electric power grid, and more briefly for other domains, including UAVs and connected vehicles. We then describe how platform support for such coordination can be an enabling technology for IoT; we call this IoT-Coord. We then discuss how such platform support must inherently support AI/ML plugins, domain/application-specific security, and a managed runtime system.
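As a taste of what "distributed coordination" means in practice, here is a deliberately tiny sketch (my own illustration with hypothetical names, not from the seminar; real coordination protocols must additionally handle asynchrony, message loss, and Byzantine faults): one round of majority voting among simulated nodes, where every live node applies the same deterministic rule and therefore reaches the same decision:

```python
from collections import Counter

def one_round_agreement(proposals, crashed=frozenset()):
    """One simplified voting round: each live node broadcasts its proposal,
    then every live node picks the majority value among what it received.
    Crashed nodes send nothing and decide nothing."""
    received = [value for node, value in proposals.items()
                if node not in crashed]
    winner, _ = Counter(received).most_common(1)[0]
    # Every live node applies the same deterministic rule to the same
    # multiset of messages, so all live nodes decide the same value.
    return {node: winner for node in proposals if node not in crashed}

# Hypothetical power-grid flavored example: three controllers vote on
# whether to open a breaker; one controller has crashed.
decisions = one_round_agreement(
    {"n1": "open_breaker", "n2": "open_breaker", "n3": "hold"},
    crashed={"n3"},
)
print(decisions)   # prints {'n1': 'open_breaker', 'n2': 'open_breaker'}
```

The hard part that real agreement protocols solve, and that this sketch deliberately omits, is guaranteeing that all live nodes actually *see* the same set of messages despite delays, drops, and faulty senders.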
Title: Quality Objects: Middleware-Level Multi-Dimensional QoS for Adaptive WAN Apps
Abstract: Quality of Service (QoS) deals with the issues of performance, availability, and security that lie outside a program or service’s “business logic”. This seminar overviews the Quality Objects (QuO) middleware, developed at BBN Technologies from 1995 until the mid-2000s. QuO handles QoS issues at the middleware layer and supports “reserving” network bandwidth and replicas to enhance performance and availability, respectively. It does so in a way that allows applications to adapt to changing resource availability, cyber attacks, etc.
QuO had approximately 60-70 person-years of BBN labor invested in it, funded mainly by DARPA, and about three times that amount invested in research collaborators who used it, including Georgia Tech, the University of Illinois, Cornell University, Washington University in St. Louis, Columbia University, Trusted Information Systems, Honeywell Labs, Boeing Phantom Works, and others. QuO has flown in Boeing experimental aircraft, was developed and used in seven DARPA ITO and ISO contracts, was evaluated for use with UAVs (drones), etc. QuO was used in a demo for the US Navy (SPAWAR) to integrate seven QoS-related technologies from 6-7 organizations; this scale was unheard of, but it is precisely what QuO was designed to do.
Indeed, QuO helps system builders create adaptive and resilient distributed applications and services. To do this, it integrates many QoS-related mechanisms and metadata into a coherent, extensible framework, including QoS mechanisms (bandwidth reservation, replica management), adaptive application behaviors, and much more. This helps system builders master the inherent complexity of such systems. QuO also extracts simplicity out of this complexity by supporting mini-languages (which generate middleware code) to specify QoS contracts, delegates (QuO proxies/stubs), runtime initialization, adaptive behavior, and other facets. These languages are in the spirit of Aspect-Oriented Programming.
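For flavor, here is a minimal sketch of the *idea* behind a QoS contract with operating regions and adaptive behaviors. This is my own hypothetical Python illustration, not QuO's actual contract mini-language (which generates middleware code rather than being hand-written like this): the application supplies regions with predicates over measured conditions, and a callback fires whenever the system transitions between regions.

```python
class QoSContract:
    """Toy QoS contract: named operating regions checked in order, with an
    adaptive callback invoked on each region transition."""
    def __init__(self, regions):
        # regions: list of (name, predicate, behavior) tuples
        self.regions = regions
        self.current = None

    def evaluate(self, measurements):
        for name, predicate, behavior in self.regions:
            if predicate(measurements):
                if name != self.current:      # region transition detected
                    self.current = name
                    behavior(measurements)    # trigger adaptive behavior
                return name
        return self.current

# Hypothetical adaptive video application: degrade gracefully as the
# measured bandwidth drops.
log = []
contract = QoSContract([
    ("normal",   lambda m: m["bandwidth_kbps"] >= 500,
                 lambda m: log.append("send full-rate video")),
    ("degraded", lambda m: m["bandwidth_kbps"] >= 100,
                 lambda m: log.append("drop to keyframes only")),
    ("crisis",   lambda m: True,
                 lambda m: log.append("send text status only")),
])

print(contract.evaluate({"bandwidth_kbps": 800}))   # prints normal
print(contract.evaluate({"bandwidth_kbps": 150}))   # prints degraded
```

The design point this sketch tries to capture is separation of concerns: the adaptation policy lives in the contract, outside the application’s business logic, which is the spirit of QuO’s aspect-oriented mini-languages.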
Note: This is very advanced distributed computing: when teaching an advanced graduate class on this subject, I ALWAYS start with it, because it identifies so many issues in the context of a well-thought-out framework. Indeed, a leading QoS researcher at Illinois told me that when she teaches a graduate class on QoS, she always starts with QuO. However, unless attendees have taken a class in distributed computing, or at least attended my one-day distributed systems bootcamp, they are likely not to get much out of the specifics of this lecture, though they will gain a good sense of the scope of the complexities in wide-area distributed computing and of complex infrastructures that help monitor and manage them.