The industrial sector is redefining the “edge”. This year, many industry players led conversations about what exactly the edge is and where it resides. Organizations have struggled to pin down a precise location for the edge when, in reality, the location is highly dynamic and varies by industry and use case.
Additionally, some solutions have adopted edge terminology without exhibiting its defining characteristics, adding more confusion to the market. These solutions are not ‘true edge’ because they rely on the cloud for data processing rather than processing data at the edge. For example, weak edge solutions cannot run analytics and machine learning models efficiently on live streaming data in a constrained compute environment, a crucial requirement for deriving actionable insights in real time.
Lastly, there is confusion about the edge-cloud relationship. Edge is certainly complementary to cloud in most use cases; in the industrial sector, the edge greatly enhances cloud adoption and value. Over the next year, edge computing leaders will continue to refine answers to questions such as: Where is the edge located? What is edge computing? And why is the edge important?
The shift from cloud-only to cloud-edge hybrid strategies
Analyzing high-fidelity, high-resolution, raw machine data in the cloud is often expensive and rarely happens in real time due to transport and ecosystem constraints. Organizations often depend on down-sampled or time-deferred data to avoid significant costs, and as a result they miss critical insights because they are only looking at incomplete datasets.
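To make the cost of down-sampling concrete, here is a minimal illustrative sketch (with invented numbers, not real machine data) showing how a brief transient fault signature that is clearly visible in the raw stream vanishes entirely once the stream is thinned out for cloud upload:

```python
# Illustrative sketch: a short vibration spike present in raw machine data
# disappears when the stream is down-sampled before being sent to the cloud.
raw = [0.1] * 1000
raw[501:504] = [5.0, 5.2, 5.1]  # 3-sample transient fault signature

downsampled = raw[::100]        # keep only 1 of every 100 samples

print(max(raw))          # spike is present in the full-fidelity data
print(max(downsampled))  # spike is gone from the down-sampled view
```

Any analytics pipeline that only ever sees `downsampled` has no chance of flagging the fault, which is exactly the incomplete-dataset problem described above.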
Instead, by implementing edge-first solutions, organizations can synthesize data locally, run machine learning inference on raw data sets, and deliver enhanced predictive capabilities (versus cloud-heavy, expensive, retroactive insights). By running ‘edgified’ versions of ML models in real time, organizations can respond faster to live events and act, react, and pre-empt events of interest at the source. This ensures a harmonious interplay of edge and cloud, leveraging the strengths of each ecosystem.
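The edge-first pattern can be sketched as a small loop that scores each reading locally and forwards only events of interest, rather than shipping the whole stream to the cloud. This is a hypothetical illustration: the rolling-mean heuristic, the `THRESHOLD` value, and the sample readings are all stand-ins for whatever ‘edgified’ model a real deployment would run.

```python
from collections import deque

WINDOW = 5
THRESHOLD = 3.0  # assumed anomaly threshold; a real deployment would use
                 # a trained, 'edgified' ML model instead of this heuristic

def edge_filter(stream):
    """Score each reading against a rolling baseline; keep only anomalies."""
    window = deque(maxlen=WINDOW)
    events = []
    for t, value in enumerate(stream):
        baseline = sum(window) / len(window) if window else value
        if abs(value - baseline) > THRESHOLD:
            events.append((t, value))  # act at the source, in real time
        window.append(value)
    return events  # only this small summary ever leaves the edge

readings = [1.0, 1.1, 0.9, 1.0, 6.5, 1.0, 1.1]
print(edge_filter(readings))  # [(4, 6.5)]
```

The cloud still receives the events it needs for long-term analytics and model training, but the bandwidth-heavy raw stream stays local, which is the complementary edge-cloud split described above.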
Indeed, in the next few years, more than 40% of organizations' cloud deployments will include edge computing to address bandwidth bottlenecks, reduce latency, and process data for mission-critical decision support in real time. These edge-powered IIoT projects will extract a realistic view of daily machine operations and work towards a new level of predictability that will dramatically alter the industry landscape as we know it. In short, in 2020, cloud-dominated solutions will adopt a more edge-first, or cloud-edge hybrid, approach to drive significant business value.
Looking beyond edge computing to edge AI solutions to deliver optimal ROI
When organizations build ML models, they assume the model will remain accurate for a certain period because it has been trained on a particular set of data. If new data patterns emerge, or if the model has not been trained on all relevant data sets or workflows, the model may become biased and stop producing accurate results. By employing edge AI, models can be continuously retrained as new, meaningful data arrives, keeping the learning sets up to date.
For example, in a factory, a model can be deployed to detect defects on a part-inspection assembly line or to proactively identify patterns that may lead to defects over time. Often, after a few months, the model's accuracy diminishes as new data patterns emerge. The results can be misleading, and the opportunity cost can be significant if the software relies exclusively on traditional analytics.
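One simple way the drift described above can be caught at the edge is by tracking the model's rolling agreement with ground-truth labels (e.g., inspector verdicts) and flagging when it falls below an acceptable floor. This is a hypothetical sketch; the `DRIFT_THRESHOLD`, window size, and outcome data are invented for illustration:

```python
DRIFT_THRESHOLD = 0.90  # assumed minimum acceptable accuracy

def rolling_accuracy(outcomes, window=100):
    """Fraction of recent predictions that matched the ground-truth label.

    outcomes: list of 1 (prediction correct) / 0 (prediction wrong).
    """
    recent = outcomes[-window:]
    return sum(recent) / len(recent)

def needs_retraining(outcomes):
    return rolling_accuracy(outcomes) < DRIFT_THRESHOLD

# Shortly after deployment: model agrees with inspectors 98% of the time.
early = [1] * 98 + [0] * 2
# Months later: new defect patterns push agreement down to 85%.
late = [1] * 85 + [0] * 15

print(needs_retraining(early))  # False
print(needs_retraining(late))   # True
```

Once `needs_retraining` fires, the edge AI loop from the previous paragraph can feed the recent, mislabeled examples back into the training set rather than letting the stale model run unchecked.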
Using the power of artificial intelligence (AI) at the edge and self-learning models, in 2020 ML models can move beyond traditional analytics and significantly improve predictive capability and overall ROI. With edge AI, software can proactively interface with live data streams and deliver intelligence at or near the source, leading to increased productivity, efficiency, and cost savings.
Improving Data Quality and Quantity to Drive Actionable Insights
While many see connectivity limitations, security risks, and data bias issues, including data quantity, as roadblocks to IoT success, data quality also plays a critical role in delivering effective IoT projects. Organizations can only make the right data-driven decisions if the data used is correct and suitable for the use case at hand. Edge computing plays an essential role in evaluating and improving data quality, as edge-enabled solutions can analyze disparate data streams in real time and forward only the most valuable data for further processing and AI training.
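A minimal sketch of edge-side quality control might validate each reading against expected ranges before anything is forwarded for analytics or AI training. The field names, valid ranges, and sample readings here are all hypothetical, chosen only to illustrate the pattern:

```python
# Assumed per-sensor validity limits for a hypothetical machine.
VALID_RANGE = {"temp_c": (-40.0, 150.0), "rpm": (0.0, 20000.0)}

def is_valid(reading):
    """Reject readings with missing fields or physically implausible values."""
    for field, (lo, hi) in VALID_RANGE.items():
        value = reading.get(field)
        if value is None or not (lo <= value <= hi):
            return False
    return True

stream = [
    {"temp_c": 72.5, "rpm": 1800.0},   # good reading
    {"temp_c": 999.0, "rpm": 1810.0},  # sensor glitch: out of range
    {"temp_c": 73.0},                  # dropped packet: rpm field missing
]
clean = [r for r in stream if is_valid(r)]
print(len(clean))  # only the valid reading survives
```

Filtering the glitch and the partial reading at the edge keeps them out of downstream AI training sets, which is precisely the data-quality role described above.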
Looking ahead, data processing and enrichment at the edge will contribute to IIoT success by identifying and correcting the inaccurate machine learning models that lead to dangerous machine failures, declining operational productivity, and significant cost overruns.