New routes to the edge - different approaches emerge for the packaged functionality market

Martin Banks, July 1, 2021
Summary:
The edge thickens as the cloud is taken in for questioning about money down the drain...


Meet Kalray, a spin-off from the French government-funded research organization CEA, and the latest player to join Intel in the packaged edge functionality market.

Kalray broke cover at a recent online Technology Live briefing as a fabless semiconductor operation that has come up with a new chip architecture, MPPA, and a sub-system packaging model to target both cloud and emerging edge computing applications.

It is an inevitable development to be sure, and it helps evolve thinking around how the cloud plays out in the long term. The question now, put at its simplest, is whether this marks the beginning of a significant change of role for both the public cloud and the major service providers.

So far the trend has been for business/digital transformation to take the position that as much as possible of the data compute and storage function should now be in the cloud, either in co-location facilities or in long-term contracted spaces of the mega-corp public cloud service providers. The developments out at the edge are starting to come thicker and faster, and the serious possibility is emerging that the move away from private, on-premises resources and operations may reverse.

It won't be 'on-premises' as it has been physically known for decades, of course. But deep down it will logically seem quite familiar, a factor that may indeed make its exploitation in new business opportunities all the more alluring, both to the technology functionaries tasked with making it happen and to the C-suite suits that will look to plunder its capabilities. This will be because 'on-premises' will move radically away from the central position of the data center as the lynchpin holding everything together. Instead it will become just the repository of the metadata of what is happening where across the business, plus the 'final results' of much of the work carried out elsewhere.

The 'elsewhere' will be widely dispersed around, indeed dissolved throughout, the corporate network where the work is being done. It will be increasingly granular in nature, to the point where a sensor or machine and a compute node will be increasingly difficult to tell apart. And with operating software such as that from Mimik Technology, the need for such specific distinctions will cease to matter: if the sensor or machine comes with sufficient logic to also process the data it is generating, it can be exploited to do just that.

On the other hand, the new modular, packaged sub-systems that become possible using the Intel and/or Kalray approach will lead to many new implementations of the 'intelligent sensor' idea, likely to be deeply embedded into the value-generation end of business operations and management.

As for the public cloud service providers, they may find their roles constrained to two main areas. One is as the ultimate back office for businesses, the repository for the metadata and final results derived by the dissolved compute functions throughout every business. The second, which will be a huge business, will be as the focal resource for all content delivery services, be they online games, streaming video services and the rest. Increasingly, however, all forms of 'production work' will be monitored, recorded, analyzed, automated, artificially managed, modified and adapted out at the edge, where the work is done.

Kalray’s new way for the edge

It can be argued that the key difference between Intel’s approach and Kalray’s is that the former is, at least at the start, looking to build up a range of packaged edge sub-systems and solutions based on its existing semiconductor technologies, and in particular the x86 architecture processor range. Kalray, on the other hand, has come up with something new.

With its MPPA (Massively Parallel Processor Array) architecture and 16nm FinFET technology, the new Kalray Coolidge processor is a scalable 80-core device organized in clusters of 16 cores. It is specifically designed as the heart of edge-based intelligent systems, and as such it is an alternative to the current trend for GPU, ASIC and FPGA data accelerators.

The cores use a VLIW (Very Long Instruction Word) instruction set architecture, specifically designed to exploit instruction-level parallelism by encoding multiple operations into a single instruction and allowing programs to specify explicitly which instructions execute in parallel. This delivers higher performance without the hardware complexity of techniques such as superscalar dispatch and out-of-order execution, so the processor cores are smaller, faster and consume less energy. The trade-off is that the compiler becomes more complex.
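To make the principle concrete, here is a minimal, hypothetical sketch in Python (not Kalray's toolchain, and no real instruction encoding) of how a VLIW-style compiler packs independent operations into one wide bundle, leaving dependency checking to software rather than to out-of-order hardware:

```python
# Hypothetical illustration of VLIW bundling, not Kalray's actual tools.
# A "bundle" is a group of independent operations issued in the same cycle.

ops = [
    ("add", "r1", "r2", "r3"),   # r1 = r2 + r3
    ("mul", "r4", "r5", "r6"),   # r4 = r5 * r6  (independent of the add)
    ("sub", "r7", "r1", "r4"),   # r7 = r1 - r4  (depends on both results above)
]

def reads(op):  return set(op[2:])   # source registers
def writes(op): return {op[1]}       # destination register

def pack_bundles(ops, width=2):
    """Greedily pack ops into bundles of up to `width` independent slots."""
    bundles, current, written = [], [], set()
    for op in ops:
        # Start a new bundle if this one is full, or if the op reads a value
        # produced earlier in the same bundle (a true dependency).
        if len(current) == width or reads(op) & written:
            bundles.append(current)
            current, written = [], set()
        current.append(op)
        written |= writes(op)
    if current:
        bundles.append(current)
    return bundles

for i, bundle in enumerate(pack_bundles(ops)):
    print(f"cycle {i}: " + " || ".join(op[0] for op in bundle))
# cycle 0: add || mul   <- two independent ops issued together
# cycle 1: sub          <- the dependent op waits for the next bundle
```

All of the scheduling intelligence sits in the packer, which is exactly the work that shifts from silicon to the compiler in a VLIW design.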

Interfaces used include NVMe (Non-Volatile Memory Express) and PCIe (Peripheral Component Interconnect Express), both of which are already widely used and should therefore allow the logic and its physical manifestation as card-based sub-systems to be readily integrated into the majority of edge environments.

For its part in this market, Intel has already moved on to promoting the use of its packaged sub-systems in a number of application areas, while claiming that among its own customer base it already has over 24,000 edge deployments in production environments. It has just produced a new whitepaper, The Edge Outlook, which not only sets out its stall in market sectors such as retailing, industrial and manufacturing applications, 5G and telecommunications services, and healthcare, but also outlines current user stories of why, how, and what has been achieved.

Edge and the over-reaching cloud dilemma

Another company at the Technology Live event was storage services specialist Scality, which referenced a recent article from venture capital firm Andreessen Horowitz, 'The Cost of Cloud, a Trillion Dollar Paradox'. This sets out the basic premise that while companies these days often start in the cloud, they don't stay in it. Users are now looking to move back to private data centers because they view them as cheaper and find business agility easier to manage there.

Some of this is certainly down to users operating with poor or non-existent cost/usage management regimes. Tools are available to help with this, such as those from Densify, which provide real-time resource and cost management. Indeed, AWS itself now offers tools for this job. But it is also down to the gross margins of around 30% that cloud service providers add to user costs.
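For a sense of the raw data such cost/usage tooling is built on, here is a minimal sketch using the AWS Cost Explorer API via boto3; the dates and the per-service grouping are purely illustrative, and commercial tools such as Densify layer analysis and recommendations on top of this kind of information:

```python
# Minimal sketch: monthly AWS spend broken down by service, via Cost Explorer.
# Requires AWS credentials with Cost Explorer access; dates are illustrative.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-06-01", "End": "2021-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Print spend per service for the period, highest first.
for result in response["ResultsByTime"]:
    groups = sorted(
        result["Groups"],
        key=lambda g: float(g["Metrics"]["UnblendedCost"]["Amount"]),
        reverse=True,
    )
    for group in groups:
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:,.2f}")
```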

Scality CEO Jerome Lecat suggested that public cloud is no longer seen as infinite, particularly by larger, more demanding users, as CSPs now have to free up resources to meet their needs. That can mean booting smaller users off resources they have requested but are not currently using.

The demand for cloud will also change as edge computing eats into data center resource demand and workloads are spread, in ever more granular form, around the edge. The flow of data will change too. Much of what heads to the center will be metadata describing the data and pointing to its location at the edge. This could be an area where object storage technologies truly come into their own.

Lecat sees COSI (Container Object Storage Interface) as important for using object storage with Kubernetes. Kubernetes itself could also be important in loading the data needed by the applications and tools it deploys, so handling object-based metadata could be a vital tool in the delivery and use of finely granular, precisely targeted applications. It is now possible to search the data and its metadata to pinpoint the required information on the basis of an identified business need, and this will include the ability for users to build their own filtering policies on the data selected.
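As a rough illustration of that kind of filtering policy, the sketch below queries an S3-compatible object store for objects whose user-defined metadata matches a stated business need. The endpoint, bucket, prefix and metadata fields are all hypothetical, and a production system would push the filtering into a metadata index rather than issuing a HEAD request per object:

```python
# Hypothetical sketch: select objects from an S3-compatible store by
# user-defined metadata, so only the relevant results leave the edge.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objects.example.internal")  # assumed endpoint

def find_objects(bucket, prefix, wanted):
    """Yield keys of objects whose user metadata matches all `wanted` pairs."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            head = s3.head_object(Bucket=bucket, Key=obj["Key"])
            meta = head.get("Metadata", {})  # user-defined x-amz-meta-* values
            if all(meta.get(k) == v for k, v in wanted.items()):
                yield obj["Key"]

# Example filtering policy: sensor readings from one site flagged as anomalous.
for key in find_objects("edge-results", "site-07/", {"status": "anomaly"}):
    print(key)
```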

My take

The rate of development out at the edge is accelerating, and when those developments start throwing up new approaches it is hard to make instant assessments. But the Kalray approach looks like it can add muscle to what edge computing brings to the mix, especially as serious questions are now being asked about the cloud and its ability to offer infinite services. Certainly, reducing the amount of data continually destined for centralized data centers is already becoming urgent, particularly as a goodly percentage of it probably has short-lived relevance at the best of times.
