publication
Authored by Douglas Schmidt
publication
The problem of dispatching emergency responders to traffic accidents, fires, distress calls, and crimes plagues urban areas across the globe. While such problems have been studied extensively, most approaches are offline. These methodologies fail to capture the dynamically changing environments under which critical emergency response occurs and therefore fail to be implemented in practice. Any holistic approach to creating a pipeline for effective emergency response must also address the challenges it subsumes - predicting when and where incidents happen and understanding the changing environmental dynamics. We describe a system that collectively deals with all of these problems in an online manner, meaning that the models are updated with streaming data sources. We highlight why such an approach is crucial to the effectiveness of emergency response, and present an algorithmic framework that can compute promising actions for a given decision-theoretic model for responder dispatch. We argue that carefully crafted heuristic measures can balance the trade-off between computational time and solution quality, and highlight why such an approach is more scalable and tractable than traditional approaches. We also present an online mechanism for incident prediction, as well as an approach based on recurrent neural networks for learning and predicting environmental features that affect responder dispatch. We compare our methodology with the prior state of the art and with existing dispatch strategies in the field; the comparison shows that our approach reduces response time while drastically reducing computational time.
Authored by Ayan Mukhopadhyay, Geoffrey Pettet, Chinmaya Samal, Abhishek Dubey, and Yevgeniy Vorobeychik
publication
Authored by Bradley Potteiger, Zhenkai Zhang, and Xenofon Koutsoukos
publication
The adoption of blockchain-based distributed ledgers is growing fast due to their ability to provide reliability, integrity, and auditability without trusted entities. One of the key capabilities of these emerging platforms is the ability to create self-enforcing smart contracts. However, the development of smart contracts has proven to be error-prone in practice, and as a result, contracts deployed on public platforms are often riddled with security vulnerabilities. This issue is exacerbated by the design of these platforms, which forbids updating contract code and rolling back malicious transactions. In light of this, it is crucial to ensure that a smart contract is secure before deploying it and trusting it with significant amounts of cryptocurrency. To this end, we introduce the VeriSolid framework for the formal verification of contracts that are specified using a transition-system based model with rigorous operational semantics. Our model-based approach allows developers to reason about and verify contract behavior at a high level of abstraction. VeriSolid allows the generation of Solidity code from the verified models, which enables the correct-by-design development of smart contracts.
Authored by Anastasia Mavridou, Aron Laszka, Emmanouela Stachtiari, and Abhishek Dubey
publication
Distributed, co-existing applications found in the military and space domains, which operate over managed but shared computing resources at the edge, require strong isolation from each other. The state of the art for computation sharing at the edge is traditionally based on Docker and similar pseudo-virtualization features. Our team has been working on an end-to-end architecture that provides strong spatial and temporal isolation similar to what has become standard in avionics communities. In this paper we describe an open-source extension to Linux that we have designed and implemented for our distributed real-time embedded managed systems (DREMS) architecture. The key concepts are the partitioning scheduler, a strong security design, and a health management interface.
Authored by Abhishek Dubey, William Emfinger, Aniruddha Gokhale, Pranav Kumar, Dan McDermet, Ted Bapty, and Gabor Karsai
publication
Traffic networks are one of the most critical infrastructures for any community. The increasing integration of smart and connected sensors in traffic networks provides researchers with unique opportunities to study the dynamics of this critical community infrastructure. Our focus in this paper is on the failure dynamics of traffic networks. By failure, we mean in this domain the hindrance of the normal operation of a traffic network due to cyber anomalies or physical incidents that cause cascaded congestion throughout the network. We are specifically interested in analyzing the cascade effects of traffic congestion caused by physical incidents, focusing on developing mechanisms to isolate and identify the source of congestion. To analyze failure propagation, it is crucial to develop (a) monitors that can identify an anomaly and (b) a model to capture the dynamics of anomaly propagation. In this paper, we use real traffic data from Nashville, TN to demonstrate a novel anomaly detector and a Timed Failure Propagation Graph based diagnostics mechanism. Our novelty lies in the ability to capture the spatial information and the interconnections of the traffic network, as well as the use of recurrent neural network architectures to learn and predict the operation of a graph edge as a function of its immediate peers, including both incoming and outgoing branches. Our results show that our LSTM-based traffic-speed predictors attain an average mean squared error of $6.55\times10^{-4}$ when predicting normalized traffic speed, while Gaussian Process Regression based predictors attain a much higher average mean squared error of $1.78\times10^{-2}$. We are also able to detect anomalies with high precision and recall, resulting in an AUC (Area Under Curve) of 0.8507 for the precision-recall curve. To study physical traffic incidents, we augment the real data with simulated data generated using SUMO, a traffic simulator.
Finally, we analyzed the cascading effect of congestion propagation by formulating the problem as a Timed Failure Propagation Graph, which enabled us to accurately identify the source of a failure/congestion.
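The Timed Failure Propagation Graph diagnosis described above can be illustrated with a minimal sketch. The three-segment road network, delay windows, and alarm times below are hypothetical stand-ins, not the paper's Nashville data, and the sketch assumes an acyclic propagation graph: a candidate source is consistent if every other alarm falls inside the cumulative (min, max) propagation window reachable from it.

```python
# Minimal TFPG-style diagnosis sketch over hypothetical road segments A -> B -> C.
# Congestion on a segment can spread to its successors within a (min, max)
# delay window in minutes. Assumes the propagation graph is acyclic.

def reach_windows(prop_graph, delays, src):
    """Cumulative (earliest, latest) propagation delay from src to each node."""
    win = {src: (0.0, 0.0)}
    stack = [src]
    while stack:
        u = stack.pop()
        lo_u, hi_u = win[u]
        for v in prop_graph.get(u, []):
            lo_e, hi_e = delays[(u, v)]
            cand = (lo_u + lo_e, hi_u + hi_e)
            old = win.get(v)
            merged = cand if old is None else (min(old[0], cand[0]),
                                               max(old[1], cand[1]))
            if merged != old:
                win[v] = merged
                stack.append(v)
    return win

def diagnose_source(prop_graph, delays, alarms):
    """Return alarmed segments whose propagation windows explain every alarm."""
    consistent = []
    for s, t_s in alarms.items():
        win = reach_windows(prop_graph, delays, s)
        if all(v in win and win[v][0] <= t_v - t_s <= win[v][1]
               for v, t_v in alarms.items() if v != s):
            consistent.append(s)
    return consistent

graph = {"A": ["B"], "B": ["C"]}
delays = {("A", "B"): (2, 5), ("B", "C"): (3, 6)}
alarms = {"A": 0, "B": 3, "C": 8}  # anomaly-detector firing times (minutes)
print(diagnose_source(graph, delays, alarms))  # -> ['A']
```

Only segment A explains the other alarms: B fires 3 minutes later (within [2, 5]) and C fires 8 minutes later (within the cumulative [5, 11]), whereas neither B nor C can reach A at all.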
Authored by Sanchita Basak, Afiya Aman, Aron Laszka, Abhishek Dubey, and Bruno Leao
publication
Authored by Charles Hartsell, Nagabhushan Mahadevan, Shreyas Ramakrishna, Abhishek Dubey, Theodore Bapty, Taylor Johnson, Xenofon Koutsoukos, Janos Sztipanovits, and Gabor Karsai
publication
Bus transit systems are the backbone of public transportation in the United States. An important indicator of the quality of service in such infrastructures is on-time performance at stops, with published transit schedules playing an integral role in governing the level of success of the service. However, there are relatively few optimization architectures leveraging stochastic search that focus on optimizing bus timetables with the objective of maximizing the probability of bus arrivals at timepoints with delays within desired on-time ranges. In addition, there is a lack of substantial research considering monthly and seasonal variations of delay patterns integrated with such optimization strategies. To address these gaps, this paper makes the following contributions to the corpus of studies on transit on-time performance optimization: (a) an unsupervised clustering mechanism is presented that groups months with similar seasonal delay patterns, (b) the problem is formulated as a single-objective optimization task, and a greedy algorithm, a genetic algorithm (GA), and a particle swarm optimization (PSO) algorithm are employed to solve it, (c) a detailed discussion of empirical results comparing the algorithms is provided, and a sensitivity analysis on the hyper-parameters of the heuristics is presented along with execution times, which will help practitioners tackling similar problems. The analyses conducted are insightful in the local context of improving public transit scheduling in the Nashville metro region, and informative from a global perspective as an elaborate case study that builds upon the growing corpus of empirical studies applying nature-inspired approaches to transit schedule optimization.
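The single-objective formulation above - shift scheduled times at timepoints to maximize the fraction of historical arrivals falling in the on-time window - can be sketched with a toy genetic algorithm. The delay samples, on-time band, and all hyper-parameters below are invented for illustration and are not the paper's Nashville data or tuned settings.

```python
import random

# Hypothetical historical delays (minutes) at each of 4 timepoints.
HIST = [[3, 4, 6, 5], [0, 1, 2, 1], [7, 8, 6, 9], [2, 3, 2, 4]]
ON_TIME = (-1, 5)  # 1 min early to 5 min late counts as on time

def fitness(offsets):
    """Fraction of historical arrivals that are on time after shifting each
    timepoint's published schedule by its offset (minutes)."""
    hits = total = 0
    for samples, off in zip(HIST, offsets):
        for d in samples:
            hits += ON_TIME[0] <= d - off <= ON_TIME[1]
            total += 1
    return hits / total

def ga(pop_size=30, gens=40, seed=1):
    """Tiny elitist GA: keep the top half, refill with crossover + mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 10) for _ in HIST] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(HIST))      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # point mutation
                child[rng.randrange(len(child))] = rng.randint(0, 10)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(best, round(fitness(best), 3))
```

Because elitism never discards the best candidate found so far, the returned timetable is at least as good as leaving the schedule unchanged; a PSO variant would replace the crossover/mutation loop with velocity updates over the same fitness function.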
Authored by Sanchita Basak, Fangzhou Sun, Saptarshi Sengupta, and Abhishek Dubey
publication
Services hosted in multi-tenant cloud platforms often encounter performance interference due to contention for non-partitionable resources, which in turn causes unpredictable behavior and degradation in application performance. To grapple with these problems and to define effective resource management solutions for their services, providers often must expend significant efforts and incur prohibitive costs in developing performance models of their services under a variety of interference scenarios on different hardware. This is a hard problem due to the wide range of possible co-located services and their workloads, the growing heterogeneity in runtime platforms including the use of fog and edge-based resources, and the accidental complexities of conducting such application profiling under a variety of scenarios. To address these challenges, we present FECBench (Fog/Edge/Cloud Benchmarking), an open source framework comprising a set of 106 applications covering a wide range of application classes that guides providers in building performance interference prediction models for their services without incurring undue costs and efforts, via the following contributions. First, we define a technique to build resource stressors that can stress multiple system resources all at once in a controlled manner, which helps gain insights into the impact of interference on the application's performance. Second, to overcome the need for exhaustive application profiling, FECBench intelligently uses the design of experiments (DoE) approach to enable users to build surrogate performance models of their services. Third, FECBench maintains an extensible knowledge base of application combinations that create resource stress across the multi-dimensional resource design space.
Empirical results using real-world scenarios to validate the efficacy of FECBench show that the predicted application performance using the surrogate models incurs a median error of only 7.6 percent across all tests, with 5.4 percent in the best case and 13.5 percent in the worst case.
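The design-of-experiments idea described above - cover the stress space with a space-filling sample instead of exhaustive profiling, then answer untried configurations from a surrogate - can be sketched generically. The Latin hypercube sampler and nearest-neighbor surrogate below are common textbook choices used purely for illustration; they are not FECBench's actual sampling plan or model family, and the latency function is a stand-in for real profiling measurements.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Space-filling design: one sample per stratum in every dimension,
    with strata randomly paired across dimensions."""
    rng = random.Random(seed)
    dims = []
    for lo, hi in bounds:
        strata = [lo + (hi - lo) * (i + rng.random()) / n_samples
                  for i in range(n_samples)]
        rng.shuffle(strata)
        dims.append(strata)
    return list(zip(*dims))

def nearest_neighbor_surrogate(design, observations):
    """Predict performance at an untried point from the closest profiled one."""
    def predict(x):
        best = min(range(len(design)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(design[i], x)))
        return observations[best]
    return predict

# Profile a pretend service at 10 stress points instead of a full grid.
design = latin_hypercube(10, [(0.0, 1.0), (0.0, 1.0)])   # (cpu_stress, io_stress)
latency = [5.0 + 20.0 * c + 8.0 * i for c, i in design]  # stand-in measurements
model = nearest_neighbor_surrogate(design, latency)
print(model((0.5, 0.5)))
```

The payoff is the sample count: 10 profiled points cover both stress dimensions, where an exhaustive grid at the same per-axis resolution would need 100 runs.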
Authored by Yogesh Barve, Shashank Shekhar, Ajay Chhokra, Shweta Khare, Zhuangwei Kang, Anirban Bhattacharjee, Hongyang Sun, and Aniruddha Gokhale
publication
Authored by Anirban Bhattacharjee, Yogesh Barve, Shweta Khare, Shunxing Bao, and Aniruddha Gokhale
publication
An effective real-time estimation of travel time for vehicles, using AVL (Automatic Vehicle Locator) data, has added a new dimension to smart city planning. In this paper, we use data collected over several months from a transit agency and show how this data can potentially be used to learn patterns of travel time during specially planned events like NFL (National Football League) games and music award ceremonies. The impact of NFL games, along with the consideration of other factors like weather, traffic conditions, and distance, is discussed with their relative importance to the prediction of travel time. Statistical learning models are used to predict travel time and subsequently assess the cascading effects of delay. The model performance is determined based on its predictive accuracy according to the out-of-sample error. In addition, the models help identify the most significant variables that influence the delay in the transit system. In order to compare the actual and predicted travel time for days having special events, heat maps are generated showing the delay impacts in different time windows between two timepoint-segments in comparison to a non-game day. This work focuses on the prediction and visualization of the delay in the public transit system and the analysis of its cascading effects on the entire transportation network. According to the study results, we are able to explain more than 80\% of the variance in the bus travel time at each segment and can make future travel predictions during planned events with an out-of-sample error of 2.0 minutes using information on the bus schedule, traffic, weather, and scheduled events. According to the variable importance analysis, traffic information is the most significant in predicting the delay in the transit system.
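The variable-importance analysis mentioned above can be approximated generically with permutation importance: fit a predictor, then measure how much the out-of-sample error grows when one feature's values are shuffled. Everything below - the synthetic features, their coefficients, and the SGD-fitted linear model - is invented for illustration and is not the paper's actual model or Nashville data.

```python
import random

# Tiny synthetic dataset: [traffic_index, precipitation, is_game_day]
# -> segment travel time (minutes). Traffic dominates by construction.
def make_data(n=200, seed=3):
    rng = random.Random(seed)
    rows, y = [], []
    for _ in range(n):
        x = [rng.uniform(0, 1), rng.uniform(0, 1), rng.choice([0.0, 1.0])]
        rows.append(x)
        y.append(10 + 8 * x[0] + 1 * x[1] + 2 * x[2] + rng.gauss(0, 0.3))
    return rows, y

def fit_linear(X, y, lr=0.1, epochs=500):
    """Plain per-sample SGD on a linear model."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = b + sum(wj * xj for wj, xj in zip(w, xi)) - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w, b

def mse(w, b, X, y):
    return sum((b + sum(wj * xj for wj, xj in zip(w, xi)) - yi) ** 2
               for xi, yi in zip(X, y)) / len(y)

def permutation_importance(w, b, X, y, seed=0):
    """Error increase when each feature column is shuffled in turn."""
    rng = random.Random(seed)
    base = mse(w, b, X, y)
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)
        Xp = [row[:j] + [c] + row[j + 1:] for row, c in zip(X, col)]
        scores.append(mse(w, b, Xp, y) - base)
    return scores

X, y = make_data()
w, b = fit_linear(X, y)
print(permutation_importance(w, b, X, y))
```

Because the traffic feature carries the largest coefficient, shuffling it degrades the fit far more than shuffling precipitation or the game-day flag, mirroring the study's finding that traffic information matters most.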
Authored by Aparna Oruganti, Sanchita Basak, Fangzhou Sun, Hiba Baroud, and Abhishek Dubey
publication
The rise of deep learning models in recent years has led to various innovative solutions for intelligent transportation technologies. While some prediction models focus on predicting the state of the network efficiently and accurately, such as estimating traffic congestion, transit delays, and so on, other models use those predicted states to find a set of sequential decisions that commuters need to make to travel from their origin to destination. The performance of these models is often evaluated using prediction accuracy. There is a growing need to understand the overall impact of such models at the societal scale. In this paper, we leverage MATSim, an agent-based simulation framework, to incorporate various decision-making models and provide a standardized environment to evaluate the efficacy of these models in terms of their system impact. For example, we describe the integration of a model that captures the altruistic behavior of an agent in addition to the disutility of a user proportional to travel time and cost. This model can then be used to evaluate the sensitivity of an agent to the system disutility and the monetary incentives given by the transportation authority of the city. We show the effectiveness of the approach and provide the analysis using a case study from the Metropolitan Nashville area.
Authored by Chinmaya Samal, Abhishek Dubey, and Lillian Ratliff
publication
Authored by Zhenkai Zhang, Zihao Zhan, Daniel Balasubramanian, Bo Li, Peter Volgyesi, and Xenofon Koutsoukos
publication
Technological advancements in today's electrical grids give rise to new vulnerabilities and increase the potential attack surface for cyber-attacks that can severely affect the resilience of the grid. Cyber-attacks are increasing both in number and in sophistication, and these attacks can be strategically organized in chronological order (dynamic attacks), where they can be instantiated at different time instants. The chronological order of attacks enables us to uncover those attack combinations that can cause severe system damage, but this concept has remained unexplored due to the lack of dynamic attack models. Motivated by this idea, we consider a game-theoretic approach to design a new attacker-defender model for power systems. Here, the attacker can strategically identify the chronological order in which the critical substations and their protection assemblies can be attacked in order to maximize the overall system damage. However, the defender can intelligently identify the critical substations to protect such that the system damage can be minimized. We apply the developed algorithms to the IEEE-39 and 57 bus systems with finite attacker/defender budgets. Our results show the effectiveness of these models in improving the system resilience under dynamic attacks.
Authored by Saqib Hasan, Abhishek Dubey, Gabor Karsai, and Xenofon Koutsoukos
publication
Transactive energy systems (TES) are emerging as a transformative solution for the problems that distribution system operators face due to an increase in the use of distributed energy resources and rapid growth in the scale of managing active distribution systems (ADS). On the one hand, these changes pose a decentralized power system controls problem, requiring strategic control to maintain reliability and resiliency for the community and for the utility. On the other hand, they require robust financial markets while allowing participation from diverse prosumers. To support the computing requirements of TES with the required flexibility while preserving privacy and security, a distributed software platform is required. In this paper, we enable the study and analysis of security concerns by developing the Transactive Energy Security Simulation Testbed (TESST), a TES testbed for simulating various cyber attacks. In this work, the testbed is used for TES simulation with a centralized clearing market, highlighting weaknesses in a centralized system. Additionally, we present a blockchain enabled decentralized market solution supported by distributed computing for TES, which on one hand can alleviate some of the problems we identify, but on the other hand may introduce new issues. Future study of these differing paradigms is necessary and will continue as we develop our security simulation testbed.
Authored by Yue Zhang, Scott Eisele, Abhishek Dubey, Aron Laszka, and Anurag Srivastava
publication
Authored by Sanchita Basak, Abhishek Dubey, and Bruno Leao
publication
The Internet of Things (IoT) requires distributed, large scale data collection via geographically distributed devices. While IoT devices typically send data to the cloud for processing, this is problematic for bandwidth constrained applications. Fog and edge computing (processing data near where it is gathered, and sending only results to the cloud) has become more popular, as it lowers network overhead and latency. Edge computing often uses devices with low computational capacity, therefore service frameworks and middleware are needed to efficiently compose services. While many frameworks use a top-down perspective, quality of service is an emergent property of the entire system and often requires a bottom-up approach. We define services as multi-modal, allowing resource and performance tradeoffs. Different modes can be composed to meet an application's high-level goal, which is modeled as a function. We examine a case study for counting vehicle traffic through intersections in Nashville. We apply object detection and tracking to video of the intersection, which must be performed at the edge due to privacy and bandwidth constraints. We explore the hardware and software architectures, and identify the various modes. This paper lays the foundation for formulating the online optimization problem presented by the system, which makes tradeoffs between the quantity of services and their quality constrained by available resources.
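The quantity/quality tradeoff described above - pick one mode per multi-modal service so total utility is maximized within the edge device's resources - can be sketched as a small combinatorial search. The services, modes, utilities, and CPU costs below are hypothetical placeholders, not the paper's actual Nashville deployment, and exhaustive enumeration is only reasonable for a handful of services.

```python
from itertools import product

# Hypothetical multi-modal services: each mode is (utility, cpu_cost).
# E.g., the vehicle detector can trade frame rate/resolution for CPU.
SERVICES = {
    "detector": [(0.9, 60), (0.6, 35), (0.3, 15)],
    "tracker":  [(0.8, 40), (0.5, 20)],
}

def best_modes(services, cpu_budget):
    """Pick one mode per service, maximizing total utility under the budget.

    Exhaustive search over the mode cross-product; an online version would
    swap this for an incremental heuristic."""
    names = list(services)
    best = (None, -1.0)
    for choice in product(*(services[n] for n in names)):
        cpu = sum(cost for _, cost in choice)
        util = sum(u for u, _ in choice)
        if cpu <= cpu_budget and util > best[1]:
            best = (dict(zip(names, choice)), util)
    return best

print(best_modes(SERVICES, cpu_budget=80))
```

Shrinking the budget forces graceful degradation: at 80 CPU units both services can run in strong modes, while at 50 units only the cheapest mode pair fits, which is exactly the bottom-up quality-of-service behavior the paper motivates.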
Authored by Geoffrey Pettet, Saroj Sahoo, and Abhishek Dubey
publication
Simulation-based analysis is essential in the model-based design process of Cyber-Physical Systems (CPS). Since heterogeneity is inherent to CPS, virtual prototyping of CPS designs and the simulation of their behavior in various environments typically involve a number of physical and computation/communication domains interacting with each other. Affordability of the model-based design process makes the use of existing domain-specific modeling and simulation tools all but mandatory. However, this pressure establishes the requirement for integrating the domain-specific models and simulators into a semantically consistent and efficient system-of-system simulation. The focus of the paper is the interoperability of popular integration platforms supporting heterogeneous multi-model simulations. We examine the relationship among three existing platforms: the High-Level Architecture (HLA)-based CPS Wind Tunnel (CPSWT), mosaik, and the Functional Mockup Unit (FMU). We discuss approaches to establish interoperability and present results of ongoing work in the context of an example.
Authored by Himanshu Neema, Janos Sztipanovits, Cornelius Steinbrink, Thomas Raub, Bastian Cornelsen, and Sebastian Lehnhoff
publication
Authored by Bradley Potteiger, Hamzah Abdel-Aziz, Himanshu Neema, and Xenofon Koutsoukos
publication
Authored by Gabor Karsai, Xenofon Koutsoukos, Himanshu Neema, Peter Volgyesi, and Janos Sztipanovits
publication
Authored by Janos Sztipanovits, Theodore Bapty, Ethan Jackson, Xenofon Koutsoukos, Zsolt Lattmann, and Sandeep Neema
publication
The International Space Station (ISS) plans to launch 100+ small-sat missions for different science experiments in the coming years. At present these missions are limited to a couple of months, but in the future they will last longer, and it becomes crucial to monitor and predict the future health of these systems as they age in order to prolong their usage time. This paper describes a hierarchical architecture that combines data-driven anomaly detection methods with a fine-grained model-based diagnosis and prognostics architecture. At the core of the architecture is a distributed stack of deep neural networks that detects and classifies the data traces from nearby satellites based on prior observations. Any identified anomaly is transmitted to the ground, which then uses model-based diagnosis and prognosis methods. In parallel, the data traces from the satellites are periodically transported to the ground and analyzed using model-based techniques. This data is then used to train the neural networks, which are run from ground systems and periodically updated. This collaborative architecture enables quick data-driven inference on the satellite and more intensive analysis on the ground, where time and power consumption are often not key concerns. We demonstrate this architecture through an initial battery data set. In the future we propose to apply this framework to other electric and electronic components onboard the small satellites.
Authored by Fangzhou Sun, Abhishek Dubey, Chetan Kulkarni, and A Guarneros
publication
NIST, in collaboration with Vanderbilt University, has assembled an open-source tool set for designing and implementing federated, collaborative and interactive experiments with cyber-physical systems (CPS). These capabilities are used in our research on CPS at scale for Smart Grid, Smart Transportation, IoT and Smart Cities. This tool set, "Universal CPS Environment for Federation (UCEF)," includes a virtual machine (VM) to house the development environment, a graphical experiment designer, a model repository, and an initial set of integrated tools including the ability to compose Java, C++, MATLAB™, OMNeT++, GridLAB-D, and LabVIEW™ based federates into consolidated experiments. The experiments themselves are orchestrated using a 'federation manager federate,' and progressed using courses of action (COA) experiment descriptions. UCEF utilizes a method of uniformly wrapping federates into a federation. The UCEF VM is an integrated toolset for creating and running these experiments and uses High Level Architecture (HLA) Evolved to facilitate the underlying messaging and experiment orchestration. Our paper introduces the requirements and implementation of the UCEF technology and indicates how we intend to use it in CPS Measurement Science.
Authored by Martin Burns, Thomas Roth, Edward Griffor, Paul Boynton, Janos Sztipanovits, and Himanshu Neema
publication
Systems-of-Systems (SoS) are composed of several interacting and interdependent systems that necessitate the integration of complex, heterogeneous models that represent the ensemble from different points of view, such as organizational workflows, cyber infrastructure, and various engineering or physical domains. These models are complex and require different dynamic simulators to compute their behavior over time. Thus, evaluation of SoS as-a-whole necessitates integration of these heterogeneous simulators. This is highly challenging because it requires integrating both the heterogeneous system models with different semantics and concepts from different system domains (physical, computational, or human), and the heterogeneous system simulators that use different time-stepping and event handling methods. Further, real-world SoS simulation and experimentation requires a comprehensive framework for integration modeling, efficient model and system composition, parametric experiments, run-time deployment, simulation control, scenario-based experimentation, and system analysis. This dissertation presents a model-based integration approach for integrating large-scale heterogeneous simulations. The approach is illustrated by developing a generic simulation integration and experimentation framework called the Command and Control Wind Tunnel (C2WT). It allows modeling systems with their interdependencies as well as connecting and relating the corresponding heterogeneous simulators in a logically and temporally coherent manner. Its generalizable methods and tools enable rapid synthesis of industry standards based integrated simulations. 
For real-world integrated simulation experiments, several novel techniques are presented such as mapping methods for integrating legacy components that cannot directly interface with SoS-level data models, a generic cyber communication network simulation component that can be reused for different SoSs, a reusable cyber-attack library for evaluating SoS’ security and resilience against cyber threats, and modeling and orchestration of alternative what-if scenarios for SoS evaluations. Further, for efficient simulation of complex dynamical models that exhibit different rate dynamics in different parts, a partitioning method is developed to split them into different sampling rate groups. In addition, a novel approach is presented for ontology based model composition. In-depth case studies are also provided to demonstrate the effectiveness of the overall integration approach. https://etd.library.vanderbilt.edu/available/etd-01172018-232437/unrestricted/Neema.pdf
Authored by Himanshu Neema
publication
Reliable operation of power systems is a primary challenge for system operators. With the advancement in technology and grid automation, power systems are becoming more vulnerable to cyber-attacks. The main goal of adversaries is to take advantage of these vulnerabilities and destabilize the system. This paper describes a game-theoretic approach to attacker/defender modeling in power systems. In our models, the attacker can strategically identify the subset of substations that maximizes damage when compromised. However, the defender can identify the critical subset of substations to protect in order to minimize the damage when an attacker launches a cyber-attack. The algorithms for these models are applied to the standard IEEE-14, 39, and 57 bus examples to identify the critical set of substations given an attacker and a defender budget.
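The attacker/defender interaction described above can be sketched as a minimax computation: the defender commits a budget-limited protection set, anticipating the attacker's best response on whatever remains exposed. The per-substation damage values below are hypothetical and additive purely for illustration; the paper's models derive damage from the IEEE bus-system power flows rather than fixed per-substation numbers.

```python
from itertools import combinations

# Hypothetical damage if a substation is compromised while unprotected.
DAMAGE = {"S1": 9.0, "S2": 7.0, "S3": 4.0, "S4": 2.0}

def best_attack(protected, atk_budget):
    """Attacker's best response: hit the most damaging unprotected substations."""
    targets = sorted((s for s in DAMAGE if s not in protected),
                     key=DAMAGE.get, reverse=True)[:atk_budget]
    return targets, sum(DAMAGE[s] for s in targets)

def best_defense(def_budget, atk_budget):
    """Minimax: choose the protection set minimizing worst-case damage."""
    best = None
    for protected in combinations(DAMAGE, def_budget):
        _, dmg = best_attack(set(protected), atk_budget)
        if best is None or dmg < best[1]:
            best = (set(protected), dmg)
    return best

# With defender budget 2 and attacker budget 1, shielding the two
# highest-value substations caps the damage at DAMAGE["S3"].
print(best_defense(def_budget=2, atk_budget=1))
```

With additive damage the attacker's best response is greedy, so only the defender's side needs enumeration; with interdependent (power-flow-based) damage, as in the paper, both sides require search.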
Authored by Saqib Hasan, Amin Ghafouri, Abhishek Dubey, Gabor Karsai, and Xenofon Koutsoukos