Life on the Edge: Solving the Optimization Problem

Dr. Nikunj Mehta, Founder & CEO, Falkonry, explains how the edge is not a place but an optimization problem. Learn about some key factors in deciding what level of edge computing is right for you and how to get there.

The edge is not a place – it is an optimization problem. Edge computing is about doing the right things in the right places. As with all optimization problems, getting to the “right” answer requires considering several tradeoffs specific to your situation and then applying the right technology to maximize the benefits for the cost you are willing to pay.

The Edge: Terms and Technology

Part of what makes the edge confusing is that definitions of “the edge” tend to focus on technologies rather than on use cases. Since use cases span an extensive range of requirements, and the boundaries between those use cases don’t map directly to technologies, the basics may be getting lost in translation. That said, here is a framework that can be helpful.

A Framework for Edge Computing
Source: Falkonry

The basic idea behind edge computing is that the closer the compute hardware is to the equipment generating data, the closer to the edge it is. Computing domains can be divided into edge and cloud – local to your network or outside your network. Within each domain, options can be further subdivided:

1. Cloud – Outside your local area network

    • Cloud data centers – Large facilities that host computing, storage and networking resources in a central location. In general, this puts a considerable distance between data generators and compute resources.
    • Edge cloud – Smaller versions of data centers collocated with wireless transmission towers. This minimizes the distance between cellular WAN (Wide Area Network) or LPWAN (Low Power WAN) receivers and compute resources. This category also includes distributed cloud data centers strategically located to minimize the distance from customer access points.

2. Edge – Inside your local area network

    • Edge servers – Smaller-footprint but generally high-performance servers that are on-premises and may or may not be close to the actual equipment. There is a range of placement options (from the local data center to the plant floor), necessitating a corresponding range of ruggedization options.
    • Intelligent gateways – Lower performance, minimal footprint, general-purpose computers usually placed close to the connected equipment. These are frequently on the plant floor or in the field and need to be ruggedized.
    • Smart sensors – Special purpose, low-power, often microcontroller-based compute dedicated to data collection or processing for a specific sensor or device.

Plants will generally have a mix of compute capabilities that serve different purposes, so expect to see some or all of these deployment options in any facility that you work with. The important point is that there is a range of capabilities and that those can be grouped, roughly, into categories. There is nothing inherently superior about any of the options. Instead, choosing which one(s) to use depends on what you want to achieve.


Edge vs. Cloud: Optimizing for Constraints 

Edge computing allows you to perform tasks that cloud computing cannot. Likewise, cloud computing is good at things that edge computing is not. Therefore it is essential to consider the problem you’re trying to solve to select the right approach. The table below outlines some key areas to consider.

    • Latency – The amount of time it takes for data to reach the compute resource and for a result to be returned. Edge: lower latency (a few ms) due to proximity to equipment. Cloud: higher latency (100s of ms) due to distance from equipment.
    • Intermittency – The fraction of the time that the data-generating equipment cannot transmit data to remote compute resources. Edge: supports highly sporadic cases; compute can be collocated with equipment. Cloud: supports only low-intermittency cases; requires an always-connected network configuration.
    • Data transfer costs – The price per bit of sending data from the generating equipment to compute. Edge: lower cost, as data travels a shorter distance, typically over a LAN (Local Area Network). Cloud: higher cost, as data travels over (expensive) cellular or other WANs.
    • Computational power – The number, speed and bit depth of CPU cores, GPUs and memory that can be applied to computation. Edge: generally less powerful hardware the closer it is to the equipment, due to smaller form factors and electrical power limitations. Cloud: generally more powerful hardware, due to larger form factors and higher electrical power limits.
    • Storage costs – The price per MB of stored data. Edge: higher price per MB due to smaller form factors and ruggedization. Cloud: lower price per MB due to larger form factors, controlled environments and storage tiering.
    • Scalability – The ease with which compute resources can be increased. Edge: less flexible scaling due to the capex required for edge hardware and possible redesign and requalification of equipment. Cloud: more flexible scaling due to opex-based paths for increasing resources.
    • Sharing/Collaboration – The ease with which data and application output can be shared among widely dispersed (e.g., global) groups. Edge: lower level of sharing due to localized infrastructure. Cloud: high level of sharing due to centralized infrastructure.
    • Data retention/reporting – The ability to serve as a system of record from which compliance reports and audits can be performed. Edge: harder to find data and create reports due to distributed infrastructure. Cloud: easier to find data and create reports due to centralized infrastructure with large storage capacity.

Table 1: Considerations for Selecting Cloud vs. Edge Computing

Long story short, edge is a good choice for applications that need results very quickly (low tens of milliseconds), that generate significant amounts of data in locations with intermittent, low-bandwidth or high-cost connections, and that don’t require highly complex calculations. Cloud-based infrastructure is better for almost everything else – if not for performance, then for procurement cost and operational complexity.
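These tradeoffs can be framed as exactly the kind of optimization exercise described above. The sketch below is a deliberately simplified illustration, not a real methodology: the factor names follow the table above, but every weight and score is an invented placeholder you would replace with values derived from your own requirements (latency budgets, connectivity, data volumes, compliance obligations).

```python
# Illustrative sketch: scoring edge vs. cloud against the factors in Table 1.
# All weights and scores below are invented placeholders for illustration.

FACTORS = ["latency", "intermittency", "transfer_cost", "compute",
           "storage_cost", "scalability", "sharing", "retention"]

# How well each deployment handles each factor (0 = poor, 1 = excellent).
EDGE_SCORES  = {"latency": 1.0, "intermittency": 1.0, "transfer_cost": 0.9,
                "compute": 0.3, "storage_cost": 0.3, "scalability": 0.3,
                "sharing": 0.2, "retention": 0.3}
CLOUD_SCORES = {"latency": 0.2, "intermittency": 0.2, "transfer_cost": 0.3,
                "compute": 1.0, "storage_cost": 0.9, "scalability": 1.0,
                "sharing": 1.0, "retention": 1.0}

def recommend(weights):
    """Return ('edge' or 'cloud', edge_total, cloud_total) for the given
    per-factor importance weights (0 = irrelevant, 1 = critical)."""
    edge = sum(weights[f] * EDGE_SCORES[f] for f in FACTORS)
    cloud = sum(weights[f] * CLOUD_SCORES[f] for f in FACTORS)
    return ("edge" if edge > cloud else "cloud"), edge, cloud

# A poorly connected, intermittency-sensitive application leans edge:
rig_weights = {"latency": 0.2, "intermittency": 1.0, "transfer_cost": 1.0,
               "compute": 0.2, "storage_cost": 0.4, "scalability": 0.2,
               "sharing": 0.2, "retention": 0.8}
choice, edge_total, cloud_total = recommend(rig_weights)
print(choice, round(edge_total, 2), round(cloud_total, 2))
```

The point of the exercise is not the arithmetic but the discipline: writing weights down forces you to state which constraints actually matter before choosing hardware.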


Where Is Your Edge?

An example can help highlight how different equipment operations might be better served by different combinations of edge and cloud technologies.

A mining operator has a fleet of mobile drilling rigs deployed to various locations in a mine. They are concerned about three things: 

    1. Emissions compliance for the rig’s exhaust
    2. Signs of very near term equipment failure during operation
    3. Condition-based maintenance (CBM) planning

The nature of the deployment means that the drill rigs are frequently outside the WiFi range of the rig storage facility. Cellular coverage is non-existent at this location. Constructing and deploying private cell towers is cost-prohibitive, and wireless LPWAN technologies lack the bandwidth for these applications. The table below shows how one might consider which computing technologies best address these three needs.


Latency
    • Emissions monitoring: No requirement. Responses to emissions issues are manual, so a few seconds of delay is unimportant.
    • Anticipating failure: No requirement. There are no automated responses to warnings, so a little delay is unimportant.
    • CBM planning: No requirement. Maintenance planning is done daily, so a minor delay is acceptable.

Data delay due to intermittency
    • Emissions monitoring: Not tolerable. Violations must be dealt with in the field, away from network connections.
    • Anticipating failure: Not tolerable. Near-term failures must be assessed and avoided in the field, away from network connections.
    • CBM planning: Tolerable. Important precursors to maintenance are visible weeks in advance, well within the normal schedule for drills to come within range of WiFi stations.

Data transfer costs
    • Emissions monitoring: High. Sending data in a way that meets intermittency constraints would require satellite links.
    • Anticipating failure: High. Sending data in a way that meets intermittency constraints would require satellite links.
    • CBM planning: Low. Existing WiFi networks can move data in a way that meets intermittency constraints.

Computational power
    • Emissions monitoring: Low. A smart sensor solution exists to calculate exhaust concentrations and report violations.
    • Anticipating failure: Low. Assessment models are inference-only and run on lightweight compute hardware.
    • CBM planning: High. Schedule optimization algorithms are computationally intense for the whole fleet.

Storage needs (storage costs)
    • Emissions monitoring: Low. Store detailed data only on violation, with a limited sampling of normal operations.
    • Anticipating failure: Moderate. Store multiple channels of signal data and anticipated events for several weeks.
    • CBM planning: High. Store all signal data and maintenance records from all drills in the fleet for several years.

Scalability
    • Emissions monitoring: Low. No need to expand the emissions monitoring approach other than when adding new drills.
    • Anticipating failure: Moderate. More complex techniques and additional failure models may require additional compute resources.
    • CBM planning: Moderate. More complex optimization techniques and additional CBM factors may require additional compute resources.

Sharing / Collaboration
    • Emissions monitoring: Low. Recovery actions are taken by the drill operator independently.
    • Anticipating failure: Low. Recovery actions are taken by the drill operator independently.
    • CBM planning: Moderate. Maintenance scheduling is performed by a local team, and visibility into factors and scenarios across that team is essential; there is limited need for global teams to see the details underlying maintenance scheduling.

Data retention / reporting
    • Emissions monitoring: High. Emissions compliance requires long-term, complete records.
    • Anticipating failure: High. Model improvement requires long-term storage of signal data across the fleet for best outcomes.
    • CBM planning: High. Model improvement requires long-term storage of signal data, actual maintenance reports and predicted maintenance instances across the fleet; maintenance cost KPIs are derived from this data.

Technology approach
    • Emissions monitoring: The requirement for consistent reports in a disconnected environment with high data transfer costs demands an on-vehicle (edge-based) approach. An intelligent sensor with the required data collection and processing capabilities meets those needs. The data retention requirement for compliance reporting is not easily met by the intelligent sensor alone, so an on-vehicle storage system is needed to hold emissions data in the short term and transfer it periodically to long-term storage in a compliance database.
    • Anticipating failure: The requirement for consistent reports in a disconnected environment with high data transfer costs likewise demands an on-vehicle (edge-based) approach. Because of the need for higher computational flexibility and larger storage capacity, a ruggedized intelligent gateway or small industrial PC (IPC) is needed. This configuration supports periodically off-loading sensor and prediction data to a central storage and learning system for long-term analysis and model updates.
    • CBM planning: The requirements for high levels of compute power and storage, the longer time frame allowable for processing data, and the need to share ongoing work easily and generate consolidated reports about fleet behavior suggest that a data center-based (cloud) approach to infrastructure is best.

Table 2: Choosing between edge and cloud for three aspects of operating mining equipment

Making the Edge Work for You

Because this is an optimization problem whose solution depends on the goals of the organization and the specifics of the questions being asked of the data, flexible deployment options are essential when evaluating analytic solutions for your company. Most requirements can be met by choosing a platform that accommodates a range of capabilities for both cloud and edge, coupled with plug-and-play deployment.

Flexible deployment supports both the high computational and storage loads required for model learning and the lower loads of inference in a well-connected environment. In situations where security concerns are critical, it helps if your analytics vendor can provide lightweight, containerized instances of the inference engine that can run on smaller-footprint compute hardware, such as gateways or IPCs, and function entirely disconnected from the cloud. Air-gapped versions of analytics packages that bring the software’s full capabilities into a self-contained “cloud in a box” can also be deployed entirely behind firewalls or in environments where internet connectivity is unavailable.
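To make the containerized, disconnected deployment described above concrete, here is a minimal Docker Compose sketch. Every name in it (the service, image, and volume paths) is a hypothetical placeholder, not any particular vendor’s product or configuration.

```yaml
# Hypothetical sketch of an air-gapped, on-gateway inference deployment.
# Service name, image, and paths are invented placeholders.
services:
  inference:
    image: example.registry.local/inference-engine:1.0
    restart: unless-stopped      # survive gateway reboots unattended
    network_mode: "none"         # fully disconnected: no cloud access
    volumes:
      - /data/models:/models:ro  # models trained centrally, loaded locally
      - /data/output:/output     # predictions staged for periodic off-loading
```

The design choice worth noting is `network_mode: "none"`: the container has no network stack at all, so inference results must leave the gateway through the shared volume, which matches the periodic off-loading pattern described in the mining example.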

Key Takeaways:

    • The edge is an optimization problem – it is not a place and not a single technology.
    • There are several factors to consider when deciding whether and what kind of distributed computing technology to use in pursuit of solving a business problem.
    • The ability to support a spectrum of edge and cloud deployment options is valuable in choosing the right vendor.

