A new paradigm for inter-app communications


Event-driven architectures (EDAs) have become a cornerstone in the design and development of highly scalable, decoupled, and extensible systems. From small-scale applications to massive enterprise ecosystems, EDAs are the default du jour for asynchronous events and data flow. But what comes next?

In this article, we’ll argue that while Event-Driven systems are capable of meeting the challenges of Internet scale, much of the work that goes into implementing and maintaining them is wasted on code dedicated to understanding, reconciling, and distributing messages, rather than on user-facing features that add value.

Historical Context

The Monolithic Era

Before the proliferation of microservices and event-driven systems, monolithic architectures were the go-to design for software applications. Monolithic systems are easy to develop, test, and deploy, but they have inherent limitations. As applications grow, the complexity of managing state, data, and scalability becomes increasingly challenging.

The Need for Scalability and Decoupling

With the advent of the Internet and the exponential increase in users, data, and processes, it became crucial to build systems that could scale easily and adapt to changes. Enter the concept of decoupling, a design principle aimed at separating the functionalities of a system into distinct, independent components.

Event-Driven Paradigm as a Solution

Event-driven architectures emerged as an answer to the dual challenge of decoupling (and multiplying) the parts of an application while keeping those parts (“Microservices”) in sync with one another. The solution offered by Event-Driven architectures is for each microservice to publish messages to a central queue, and for other microservices to pick up messages of relevance, interpret them, and incorporate their contents into the microservice’s local state.

Although Event-Driven architectures provide a way to decouple various parts of an application such that those parts can communicate asynchronously, each microservice or application still needs logic for understanding events, so that it can independently replay the event stream and hope to arrive at the correct state of the system.
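To make that concrete, here is a minimal sketch of the replay logic every consumer ends up carrying. The event types and fields are illustrative (not from any particular framework): each service folds the full event stream into its own local state, one branch per event type it cares about.

```python
# Minimal sketch of per-service event replay (illustrative event types):
# the consumer folds the event stream into its own local state.
def replay(events):
    state = {}
    for event in events:
        if event["type"] == "OrderCreated":
            state[event["order_id"]] = {"status": "created"}
        elif event["type"] == "OrderShipped":
            # Depends on the OrderCreated event having been seen first
            state[event["order_id"]]["status"] = "shipped"
        # ...one branch per event type this service cares about
    return state

events = [
    {"type": "OrderCreated", "order_id": 1},
    {"type": "OrderShipped", "order_id": 1},
]
print(replay(events))  # {1: {'status': 'shipped'}}
```

Note that correctness here depends entirely on ordering: if `OrderShipped` arrives before `OrderCreated`, this replay fails.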

Watershed Moments

Publish-Subscribe Pattern

The Pub-Sub pattern can be considered one of the most significant milestones in the adoption of event-driven architectures. With Pub-Sub, systems could publish messages (events) to particular channels or topics, and any number of subscribers could receive these events asynchronously. Technologies like MQTT and RabbitMQ popularized this approach.
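The pattern itself is simple enough to sketch in a few lines. This in-memory version is illustrative only (a real system would use an MQTT or AMQP client and a broker process), but it shows the core contract: publishers write to a topic, and every subscriber to that topic receives the event.

```python
from collections import defaultdict

# In-memory sketch of the Pub-Sub pattern (illustrative, not a real broker API):
# publishers write to a topic; every handler subscribed to that topic is invoked.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    for handler in subscribers[topic]:
        handler(event)

received = []
subscribe("sensor/temperature", received.append)
subscribe("sensor/temperature", lambda e: print(f"alert: {e}"))

publish("sensor/temperature", {"celsius": 21.5})
# received == [{'celsius': 21.5}], and the alert handler also fired
```

The decoupling is visible here: the publisher never knows who (if anyone) is listening.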

Cloud Adoption and Serverless Architectures

The rise of cloud computing and serverless architectures fueled the adoption of EDAs. Cloud providers like AWS, Azure, and Google Cloud offer native services that support event-driven paradigms (AWS Lambda, Azure Functions, Google Cloud Functions).

Real-time Applications and IoT

The need for real-time analytics and Internet of Things (IoT) devices also spurred the widespread adoption of event-driven architectures. They allowed for immediate, asynchronous handling of events, making them ideal for these applications.

tl;dr - Benefits
  1. Scalability: One of the most prominent advantages of EDAs is their scalability. Decoupled systems can be scaled independently, allowing for more effective resource utilization. (Though as of this writing, it remains to be seen if the promise of independent scalability of services ever arrives for the majority of Enterprise use cases.)
  2. Extensibility: The decoupled nature of EDAs makes it theoretically possible to add services without affecting the entire system.
  3. Async Processing: EDAs are designed for asynchronous communication - services simply dump their messages to a central queue and downstream apps are responsible for handling that message (event).
  4. Fault Tolerance: Failure in one component doesn't necessarily bring down the entire system.

tl;dr - Drawbacks
  1. Complexity: Event-driven architectures can be hard to design, implement, and debug due to their asynchronous and distributed nature.
  2. Event Consistency: Ensuring that all components have a consistent view of all events can be challenging, especially in distributed systems.
  3. Ordering and Timing: The timing and order of events can be critical, and managing this in a large-scale system can be complicated.
  4. Monitoring and Tracing: Due to the dispersed and asynchronous nature of components, keeping track of system behavior and debugging can be difficult.

Code Complexity and Event Understanding

One of the overlooked drawbacks of event-driven architectures is the code complexity required for each application or service to understand a stream of event messages. Unlike request-response patterns where the expectations are explicit and often tightly-coupled, EDAs necessitate that each participating component comprehends the semantics, structure, and implications of incoming events. This means applications need to have built-in logic to handle various types of events, which often includes:

  • Event Parsing: Code for deserializing incoming event messages to a usable format, often JSON or XML.
  • Event Filtering: Logic to identify which events are relevant to the application.
  • Event Sequencing: In cases where the order of events matters, additional logic is required to sort or sequence events.
  • Event Handling: The actual business logic that will be executed in response to an event.
  • Error Handling: Complex logic to handle incomplete, invalid, or conflicting events.

This results in each application having a significant amount of code just dedicated to understanding and reacting to events, which can complicate development and debugging efforts.
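Even before any business logic runs, a consumer typically carries boilerplate like the following just to turn raw bytes into events it can act on. This is an illustrative sketch (the event types are made up) covering the first two items above plus a slice of error handling:

```python
import json

# Illustrative set of event types this service cares about
RELEVANT = {"OrderCreated", "OrderShipped"}

# Sketch of the parse/filter/validate boilerplate every consumer carries
# before any business logic runs.
def decode(raw_bytes):
    try:
        event = json.loads(raw_bytes.decode("utf-8"))   # Event Parsing
    except (UnicodeDecodeError, json.JSONDecodeError):
        return None                                     # Error Handling: drop garbage
    if event.get("type") not in RELEVANT:               # Event Filtering
        return None
    return event

print(decode(b'{"type": "OrderShipped", "order_id": 7}'))  # the parsed event
print(decode(b'not json'))  # None
```

Multiply this by every service in the system, and by every event schema change, and the maintenance cost becomes clear.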

Contrasting Event-Driven Architectures with Mycelial: The Future of Event Processing

Event-driven architectures have been a significant advancement in the way applications are built and scaled. However, they come with their own set of complexities around event ordering, state management, and data consistency, as previously discussed. The task of event processing often burdens application code with additional logic to handle these complexities. Mycelial offers a fundamentally different approach that abstracts away many of the challenges associated with event-driven systems, enabling seamless data orchestration with minimal engineering effort.

Abstraction Over Protocols

While traditional event-driven architectures may require developers to have in-depth knowledge of various messaging protocols and brokers like MQTT, RabbitMQ, or Kafka, Mycelial provides an abstraction layer across diverse inter-application communication protocols. This removes the need for developers to be protocol experts, simplifying the architecture and accelerating development.

State Reconciliation

One of the most substantial pain points in event-driven architecture is managing the state across different services. Mycelial handles device messaging and state reconciliation regardless of the protocol or network conditions. For instance, in the US Navy case study, Mycelial enabled real-time data synchronization between unmanned and manned vessels, whether they were connected or not.

Efficient Resource Utilization

Traditional event-driven tools like Kafka or Apache NiFi can be resource-intensive. Mycelial, with its minimal memory overhead and single-binary deployment, is designed for SWaP (space, weight, and power)-constrained environments. This makes it particularly useful for edge computing scenarios where resource utilization is a critical factor.

Data Prioritization and Local AI/ML Inferencing

In a typical event-driven system, implementing features like data prioritization or local AI/ML inferencing would involve substantial code complexity. Mycelial simplifies this by enabling on-device processing and intelligent data distribution based on your AI's priorities, as seen in the Walmart case study for voice-powered in-home shopping.

Simplified Maintenance and Scalability

Mycelial aims to eliminate most of the cost and complexity of maintaining Cloud infrastructure for edge applications. With features like the Mycelial Router for visual data pipeline setup and Mycelial Catalog for plug-and-play AI/ML models, the need for specialized knowledge in maintaining and scaling your architecture is dramatically reduced.


Mycelial is more than just another tool in the event-driven landscape; it is a paradigm shift. By abstracting away the intricacies commonly associated with event-driven architectures, Mycelial allows developers to focus more on the business logic and less on the underlying plumbing. As data continues to grow in volume, velocity, and variety, tools like Mycelial that simplify the complexities of data orchestration and event processing are not just convenient—they're essential.

Example: Drone Fleet Data Collection and Communication with Humvees

Let’s explore the challenges of Event-Driven Architectures in the context of a fleet of aerial drones accompanied by a group of Humvees. If we’re designing an Event-Driven or Message-Driven System, we might have events that look like the list below:

  1. DroneDeployEvent: When a drone is deployed.
  2. HumveePositionEvent: When a Humvee changes its position.
  3. DataCollectionEvent: When a drone collects significant data.
  4. ThreatAlertEvent: When a potential threat is detected.
  5. BatteryStatusEvent: When a drone's battery reaches critical levels.
  6. MissionCompletionEvent: When a mission is considered complete.

Handling these events using traditional event-driven architectures could be intricate and prone to errors. For example, what happens if a BatteryStatusEvent is processed late and a drone runs out of power mid-mission? What if a ThreatAlertEvent doesn't reach all Humvees in real-time?

Pain in Application Code

Today, most folks don’t think about the impact of all the event-handling code we have to write. We’ve gotten used to it.

But let’s take a look at all the code we would need to write for each application in our integrated Drone + Humvee solution.

If we’re using Kafka as a message queue, we would need to write a function to handle each event like so:

from kafka import KafkaConsumer
import json

def process_drone_deploy(event):
    # Logic to handle drone deployment
    print(f"Drone deployed: {event}")

def process_humvee_position(event):
    # Logic to handle humvee position
    print(f"Humvee changed position to: {event}")

def process_data_collection(event):
    # Logic to handle data collection
    print(f"Data collected: {event}")

def process_threat_alert(event):
    # Logic to handle threat alert
    print(f"Threat alert: {event}")

def process_battery_status(event):
    # Logic to handle battery status
    print(f"Battery status: {event}")

def process_mission_completion(event):
    # Logic to handle mission completion
    print(f"Mission completed: {event}")

event_processors = {
    'DroneDeployEvent': process_drone_deploy,
    'HumveePositionEvent': process_humvee_position,
    'DataCollectionEvent': process_data_collection,
    'ThreatAlertEvent': process_threat_alert,
    'BatteryStatusEvent': process_battery_status,
    'MissionCompletionEvent': process_mission_completion,
}

# Initialize Kafka Consumer, subscribed to one topic per event type
consumer = KafkaConsumer(
    *event_processors.keys(),
    value_deserializer=lambda x: json.loads(x.decode('utf-8'))
)

# Process incoming messages
for message in consumer:
    topic = message.topic
    event = message.value

    # Event ordering, validation, and any other pre-processing would happen here

    if topic in event_processors:
        event_processors[topic](event)
    else:
        print(f"Unknown event type: {topic}")

Complications in Event-Driven Architecture:

The code comments above highlight the 5 central challenges of Event-Driven Architectures:

  1. Event Ordering: The consumer needs to be aware of the event sequence. This is particularly important if one event is dependent on the outcome of another.
  2. Validation: Each message needs to be validated to ensure it meets the format and data requirements before being processed.
  3. State Management: Managing state becomes complex as you would potentially have to sync with other services or databases to make decisions based on the incoming events.
  4. Scaling: If events arrive at high velocity, you'd need to think about how to scale your consumers.
  5. Error Handling: If the processing of an event fails, you would need mechanisms to retry or roll back operations, which can get complex quickly.

Possibly the thorniest challenge is Event Ordering. Look below at the complexity of logic needed IN EACH APPLICATION to make sure that Events are valid and processed in the correct order.

from collections import deque
from datetime import datetime

# Initialize a dictionary to keep track of the last event timestamp for each drone
drone_last_event_timestamps = {}

# Initialize a queue to buffer events that are out of order
event_buffer = deque()

# Process incoming messages
for message in consumer:
    topic = message.topic
    event = message.value

    # Event Validation: Check if the event has the expected keys
    if 'drone_id' not in event or 'timestamp' not in event:
        print(f"Invalid event received in topic {topic}: {event}")
        continue

    # Event Ordering: Check the timestamp of the event to ensure events are processed in order
    drone_id = event['drone_id']
    event_timestamp = datetime.fromisoformat(event['timestamp'])

    if drone_id in drone_last_event_timestamps:
        last_timestamp = drone_last_event_timestamps[drone_id]
        if event_timestamp < last_timestamp:
            # This event is out of order; add to the buffer and continue
            event_buffer.append((topic, event))
            continue
    else:
        # This is the first event for this drone; initialize the last timestamp
        drone_last_event_timestamps[drone_id] = event_timestamp

    # Update the last timestamp for this drone
    drone_last_event_timestamps[drone_id] = max(event_timestamp, drone_last_event_timestamps[drone_id])

    # Additional Pre-processing: This is where you could enrich the event, e.g., by querying a database
    # or invoking another service.
    # ...

    if topic in event_processors:
        event_processors[topic](event)
    else:
        print(f"Unknown event type: {topic}")

    # Now go through the buffer to see if we can process any previously out-of-order events
    new_buffer = deque()
    for buffered_topic, buffered_event in event_buffer:
        buffered_drone_id = buffered_event['drone_id']
        buffered_timestamp = datetime.fromisoformat(buffered_event['timestamp'])

        last_timestamp = drone_last_event_timestamps.get(buffered_drone_id, datetime.min)

        if buffered_timestamp >= last_timestamp:
            # Now we can process this event
            if buffered_topic in event_processors:
                event_processors[buffered_topic](buffered_event)
            else:
                print(f"Unknown event type: {buffered_topic}")

            # Update the last timestamp for this drone
            drone_last_event_timestamps[buffered_drone_id] = max(buffered_timestamp, last_timestamp)
        else:
            # Still can't process this event; keep it in the buffer
            new_buffer.append((buffered_topic, buffered_event))

    event_buffer = new_buffer

This example illustrates the significant complexities involved in handling events using a traditional event-driven architecture. Contrast this with using a tool like Mycelial, which abstracts away much of this complexity, automating data orchestration, state management, and event synchronization across a distributed system.

Simplifying Distributed Data Handling with Mycelial

Mycelial aims to ease the complexities that inherently come with event-driven architectures, particularly those related to message reconciliation, ordering, and state management. One of Mycelial’s strongest advantages lies in the abstraction layer it provides across diverse inter-application communications protocols. Mycelial handles device messaging and state reconciliation regardless of the protocol or network conditions, which means applications don't have to worry about the details of message synchronization.

Let's return to the example above where multiple aerial drones are collecting data and communicating back and forth with a collection of Humvees. Both the drones and the Humvees are equipped with local SQLite databases. Mycelial is configured to keep these databases in sync across the network.

Receiving Latest State of Peers

Instead of complex event ordering and validation code, with Mycelial all you need to do is query the local SQLite database for the latest state. Mycelial takes care of keeping the data up to date.

SQL Query for Receiving Latest Drone Data:

SELECT * FROM drones ORDER BY timestamp DESC LIMIT 1;

SQL Query for Receiving Latest Humvee Data:

SELECT * FROM humvees ORDER BY timestamp DESC LIMIT 1;

Because Mycelial synchronizes all databases, running this query on any drone's or Humvee's local database will yield the most current data from all peers. Mycelial's state reconciliation works in the background and ensures that you're querying a database that represents the latest state of the entire network, without having to explicitly handle message ordering, duplication, or loss.

Publishing Local State Changes

Similarly, updating the local state and propagating it across the network is as straightforward as executing a SQL INSERT statement.

SQL Statement to Update Drone's Local State:

INSERT INTO drones (drone_id, latitude, longitude, altitude, timestamp) VALUES (1, 40.7128, -74.0060, 500, '2023-09-11 12:34:56');

SQL Statement to Update Humvee's Local State:

INSERT INTO humvees (humvee_id, latitude, longitude, speed, timestamp) VALUES (1, 40.7128, -74.0060, 60, '2023-09-11 12:34:56');

Mycelial would automatically propagate this new state to every node in the network, ensuring that when any device queries its local database, it gets this newly inserted data as well. The complexities associated with event-driven architectures—like message ordering, network partition handling, and state reconciliation—are abstracted away by Mycelial.
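Here is what the application side of this model reduces to, as a self-contained sketch. The `drones` schema is inferred from the INSERT statement above, an in-memory SQLite database stands in for the device's local database, and Mycelial's synchronization is assumed to happen outside this code:

```python
import sqlite3

# Sketch of the application code under this model. The `drones` schema is
# inferred from the INSERT above; an in-memory DB stands in for the device's
# local database, and Mycelial's sync happens outside this code.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE drones (
    drone_id INTEGER, latitude REAL, longitude REAL,
    altitude REAL, timestamp TEXT)""")

# Publish local state: a plain INSERT; Mycelial propagates it to peers.
db.execute(
    "INSERT INTO drones (drone_id, latitude, longitude, altitude, timestamp) "
    "VALUES (?, ?, ?, ?, ?)",
    (1, 40.7128, -74.0060, 500, "2023-09-11 12:34:56"),
)

# Read latest peer state: a plain SELECT against the (synchronized) local DB.
row = db.execute(
    "SELECT drone_id, altitude FROM drones ORDER BY timestamp DESC LIMIT 1"
).fetchone()
print(row)  # (1, 500.0)
```

Compare this with the buffering and timestamp-tracking loop in the Kafka example: the ordering, validation, and reconciliation concerns have moved out of the application entirely.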


We believe Mycelial provides not only an OSS alternative to popular messaging frameworks like MQTT or RabbitMQ, but also an architectural alternative that offers the possibility of huge efficiency gains for developers and data engineers.