Atul Lal

Operator placement on edge systems in stream processing

flink-placement

Developing a cost model and modifying the Apache Flink scheduler to efficiently offload tasks to edge systems, ultimately improving latency problems over the WAN.

Language: Java
Topics: apache-flink, flink, flink-stream-processing
Check this project out on GitHub

This article discusses a project that aims to enhance stream processing systems by offloading processing tasks to edge systems, specifically Raspberry Pi devices, to reduce the data traffic and latency overhead caused by the limited bandwidth of Wide Area Networks (WANs). The proposed solution involves developing a cost model and modifying the Apache Flink scheduler to offload tasks to edge systems efficiently, ultimately mitigating latency problems over the WAN.

Introduction

Stream processing systems are crucial for handling real-time data, but they often face challenges related to limited resources and strict latency requirements. To address these issues, this project proposes a heterogeneity-aware operator placement algorithm for stream processing systems that offloads tasks to edge systems, specifically Raspberry Pi devices, to optimize resource utilization and minimize latency overhead.

Problem Statement

Traditional stream processing systems like Flink are designed for homogeneous data center servers, so they cannot automatically offload tasks to edge systems. This limitation results in high latency and inefficient resource utilization when streaming applications ingest data from sources spread across different locations over a bandwidth-constrained, high-latency WAN.

Proposed Solution

The proposed solution preprocesses data streams at the edge to reduce data traffic and latency overhead over the WAN. This is achieved by identifying tasks that can be offloaded to edge systems using performance metrics, and by implementing a dynamic mechanism that automatically predicts data stream flow, computes those metrics, and decides which tasks to offload for greater efficiency. The project will extract performance metrics like backPressureTimeMsPerSecond, idleTimeMsPerSecond, busyTimeMsPerSecond, and numRecordsOutPerSecond to estimate the maximum output rate each task instance can sustain. Using this information, the Flink scheduler will be modified to offload suitable tasks to edge systems.
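
As a rough illustration of how these metrics could feed the cost model, the sketch below scales a task instance's observed output rate up to full utilization to approximate the maximum rate it could sustain. This is a minimal sketch written for this article: the TaskMetrics record and estimateMaxOutputRate method are illustrative names, not part of Flink's API, and the metric values are assumed to have already been read from Flink's REST metrics endpoint.

// Illustrative sketch (not part of Flink): estimate the output rate a task
// instance could sustain if it were busy 100% of the time.
public final class CapacityEstimator {

    /** One metric sample for a task instance, with values as reported by Flink. */
    public record TaskMetrics(double busyTimeMsPerSecond,
                              double idleTimeMsPerSecond,
                              double backPressureTimeMsPerSecond,
                              double numRecordsOutPerSecond) {}

    /**
     * If the instance is busy for busyTimeMsPerSecond out of every 1000 ms and
     * emits numRecordsOutPerSecond records in that time, scaling the busy time
     * up to a full second approximates its maximum sustainable output rate.
     */
    public static double estimateMaxOutputRate(TaskMetrics m) {
        if (m.busyTimeMsPerSecond() <= 0) {
            // An idle instance gives no useful capacity signal; treat it as unconstrained.
            return Double.POSITIVE_INFINITY;
        }
        return m.numRecordsOutPerSecond() * (1000.0 / m.busyTimeMsPerSecond());
    }
}

Instances running close to their estimated maximum rate are poor candidates for a Raspberry Pi, while lightweight operators with plenty of headroom are the natural offloading targets.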

Expectations

The project aims to implement a prototype that can offload tasks to edge systems, reducing data traffic and latency overhead caused by limited WAN bandwidth. The system is expected to efficiently utilize edge resources, minimize server-side resource usage, and improve system efficiency without sacrificing performance.

Experimental Plan

The experimental setup for evaluating the solution will include a cloud server/local server, Raspberry Pi 4B, Apache Flink, Python, and Java. The scheduling algorithm will begin by running all tasks on the server side (except the source) and collecting performance metrics from Flink. Lightweight operators will then be placed on available edge slots. The initial prototype will be built using Flink and Raspberry Pi, with tests conducted on open-source datasets to identify suitable tasks for offloading to edge systems.
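
To make that placement step concrete, here is a minimal sketch of the kind of decision the modified scheduler could make after the initial server-only run: leave sources where the data is produced, then move the lightest operators that fit into the available Raspberry Pi slots. Operator, estimatedLoad, and edgeSlotCapacity are assumed names introduced for this example, not identifiers from the project's code.

import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the offloading decision, not the actual Flink scheduler change.
public final class EdgePlacementSketch {

    /** A task with a load estimate derived from the collected Flink metrics. */
    public record Operator(String name, boolean isSource, double estimatedLoad) {}

    /** Picks the lightest non-source operators that fit into the available edge slots. */
    public static List<Operator> chooseEdgeOperators(List<Operator> operators,
                                                     int availableEdgeSlots,
                                                     double edgeSlotCapacity) {
        return operators.stream()
                .filter(op -> !op.isSource())                          // sources stay with the data
                .filter(op -> op.estimatedLoad() <= edgeSlotCapacity)  // must fit a Raspberry Pi slot
                .sorted(Comparator.comparingDouble(Operator::estimatedLoad))
                .limit(availableEdgeSlots)
                .toList();
    }
}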

Success Indicators

The project will be considered successful if it can demonstrate improved performance compared to conventional approaches. Major milestones include making Flink compatible with heterogeneous resources, implementing a cost model to determine which operators can be offloaded to edge systems, and changing the placement configuration based on the cost model output.

Conclusion

The proposed heterogeneity-aware operator placement algorithm aims to improve the performance of stream processing systems by offloading tasks to edge systems, specifically Raspberry Pi devices. By doing so, it seeks to minimize latency overhead and server-side resource usage, providing a more efficient way to process real-time data streams.

About Atul Lal

I am a software engineer with a passion for creating innovative and impactful applications that solve real-world problems. Over more than two years at Commvault Systems, I optimized APIs, developed distributed systems, and automated cloud environments.