Huawei Connect 2018 - the smart police are coming

By Chris Middleton October 11, 2018
Summary:
A final report from Huawei Connect in Shanghai, as the vendor pushes a vision of smart safety and security.

Smart cities were a key focus at Huawei Connect 2018 in Shanghai, with the Internet of Things (IoT) opening up new insights into how people use spaces, buildings, transport networks, and services.

Within China, Huawei’s AI + Digital Platform is doing just that in a new partnership with Tianjin BinHai New Area, where the fast-expanding business/tech zone is becoming a self-contained smart city.

The Huawei solution is built on a ‘1 + 4 + N’ model: one intelligent operations centre (IOC), four artificial intelligence (AI) platforms, and an expanding range of innovative applications (‘N’) that can run on the system.

For Huawei, the IOC is seen as the brain of any smart city. Joe So, Huawei’s CTO of industry solutions, described the strategy as “building the nervous system” of cities, via a smart national ICT infrastructure: one cloud, two networks (IoT narrow band, and mobile), and three overarching platforms: the IOC, plus big data service support and an ICT application-enabling platform, with layers of apps on top.

In Tianjin BinHai New Area, Huawei’s four AI platforms provide a range of services, including resident care and personalised resources for citizens, along with enterprise services to match resources with needs.

Earlier this year, Tianjin separately announced its own $16 billion AI investment strategy – a single Chinese city pulling together investment that is 12 times larger than the UK’s entire Sector Deal for AI.

Often when people talk about smart cities, it’s in the context of energy, efficiency, and sustainability, but for Huawei and some of its customers, the core advantage appears to be safety and security – especially via Huawei’s software-driven camera offerings, which were on conspicuous display in the exhibition hall.

At Connect 2018, Huawei also released HiSec, an intelligent security solution. Based on what it calls the ‘Identify, Protect, Detect, Respond, and Recover’ (IPDRR) architecture, HiSec “provides customers with intelligent, efficient, and future-oriented end-to-end security, offers comprehensive protection, and provides public-security capabilities for IoT, SoftCOM, private cloud, Safe City, and 5G solutions.”

The strong safety and security theme was emphasised by some of the other AI offerings being rolled out in Tianjin BinHai. ‘Resident Voices’ features voice recognition and semantic parsing technologies that enable city managers to “understand the voice of each resident to gain insight into their needs”. Sensing the City, meanwhile, uses image recognition and correlation analysis to explore the relationships between people, places, vehicles and things “for the purpose of fostering harmony for all”.

In other words, smart policing and surveillance.


Li Qiang, Chief of Technology at Shenzhen Traffic Police, said:

We're using AI to build smart policing technology and an intelligent traffic brain to improve the travel experience for all.

AI has made law enforcement easier and more efficient than ever before, he added. With the help of AI applications that detect violations such as talking on the phone while driving and not wearing a seat belt, Shenzhen Traffic Police has been able to increase its law enforcement activities by 15% in the first half of 2018.

The technology also redefines the way traffic is managed, he said. The focus has shifted away from regulating traffic with traffic lights and towards a more efficient system of letting vehicles move based on their numbers in any given location.

But not all policing is about catching bad drivers and spotting parking violations. Since 2011, Huawei has invested over $1 billion in public safety technologies, including over $200 million this year alone. Along with 24,000 patents, it has more than 6,000 staff supporting public safety, including 4,000 R&D workers and 1,000 in tech support.

Many of them are “former police officers who used to carry a gun chasing criminals”, according to one spokesman. Hong-Eng Koh is Huawei’s global public safety scientist; on LinkedIn he describes himself as ‘Father. Husband. Ex-Cop. Crime fighter with ICT. Globetrotter. Storyteller. Foodie. Cybernaut.’ In a safe cities presentation to journalists, he said:

Bad guys are getting smarter. It’s cat and mouse. The most important collaboration is in communities. It takes a network to fight a network.

In this context, more inter-agency collaboration locally and across borders is necessary, he added – together with smart, safe cities provided by Huawei’s technology, it seems, in partnership with local authorities and cities themselves.

This picture of all-pervasive AI in fast-expanding cities plays into a vision that many have in the West of China building a tightly controlled society, monitoring the use of platforms such as WeChat – the lingua franca of its digital citizens – and creating a national social ratings platform, which will be compulsory from 2020.

And according to Huawei, China is now exporting that AI-enabled control capability to countries that share that vision, including Saudi Arabia, Pakistan, Venezuela, Singapore, Thailand, Serbia, and Ghana – in all, 160 cities in 40 countries.

According to Huawei’s So, the smart city’s roots lie in e-government, with mobility being version 2.0 and the IoT bringing version 3.0 – a core system to integrate technology and urban governance, he said.

However, one of the challenges is that facial recognition systems have often proved fallible in real-world deployments. In the UK, for example, questions were asked in Parliament earlier this year about the Metropolitan Police’s two percent success rate with its real-time face recognition system: two suspects correctly identified, and over 100 false positives in tests.

In the US, Microsoft has urged the government to regulate the technology, while Amazon has faced internal and external criticism about the twin problems of racial bias and profiling via its Rekognition system. Poor training data in facial recognition is an industry-wide problem, putting ethnic minorities and women at risk.

Asked about exporting technologies that may contain biases, or that may be used by governments to clamp down on citizens, especially minorities, Huawei’s spokesmen (and, apart from their translators, everyone onstage at Huawei Connect was a middle-aged man) gave a consistent answer: we just make the technology. We don’t tell customers how to use it. And we don’t control the data.

Koh, So, and others all said the same thing, with So drawing an analogy with a bank: Huawei provides the banking technology, in effect, but it doesn’t tell customers what to do with the money.

My take

Huawei’s commitment to smart cities, safety, and security was clear from the event, and there’s no doubt that the market for its solutions is strong, especially among countries that share China’s state-centric outlook on the world.

However, despite the many similarities between China and the US, for example, when it comes to the development of technologies such as the IoT, AI, smart cities, robotics, and autonomous cars, one thing is clear in the US: many tech companies’ own employees are uncomfortable with applications that stray into defence and surveillance deployments. Witness Google’s decision to pull out of the Pentagon’s Project Maven and JEDI programmes, for example – driven by employee rebellion.

That said, Google’s employees have also protested at the internal development of a censored search facility for the Chinese market, and there’s no news yet of that project being abandoned.

The point is this: more and more technology companies are realising that an ethical stance on these issues plays well with many customers – if not with the biggest ones: governments.

As Huawei battles to retain its foothold in Western markets amid the growing trade war between the US and China, it may have to start modifying its message in some territories, or risk being seen as actively supporting repressive regimes while taking no responsibility for how its technologies are deployed.