Synesis Uses AI for Smart City Video Applications

Using Intel® Distribution of OpenVINO™ toolkit and AI, new data compression reduces video storage and bandwidth requirements.

The ability to generate sustainable revenue growth is crucial to survival in the fast-paced cloud service provider (CSP) market segment. This is why Belarusian service provider Synesis selected Intel® technology to support Kipod* – its artificial intelligence (AI)-based video platform for smart cities, public safety, and law enforcement. Intel® architecture, specifically Intel® Advanced Vector Extensions 2 (Intel® AVX2) and Intel® Advanced Vector Extensions 512 (Intel® AVX-512) together with Aleph* compression technology, is enabling Synesis to reduce the use of CPU, RAM, and video storage in its Kipod platform. This makes Kipod financially attractive to a wider pool of customers and supports Synesis’s expansion strategy into new market segments.

Challenge

  • Generate sustainable revenue growth to remain competitive in the fiercely crowded CSP market segment
  • Reduce the server cost per camera of the Kipod AI-based video platform for smart city, public safety, and law enforcement
  • Select a technology platform with sufficient performance and flexibility to support this new offering

Solution

  • Kipod is optimized to run on Intel® Xeon® Scalable processors, although it can also run on all previous-generation Intel® processor technologies
  • For inference, Synesis runs its convolutional neural networks (CNNs) with either the Intel® Inference Engine or Synet – its own in-house developed engine – choosing the best-performing engine for the task at hand
  • Using Intel AVX2, Synesis estimates that it has been able to boost the performance of its Kipod platform and reduce associated hardware costs by a factor of three. Synesis also estimates that Intel AVX-512 will allow it to reduce the total number of CPU nodes by a further 50 percent
  • Synesis is also investigating the benefits of a new data compression algorithm, Aleph, to further reduce video storage and bandwidth requirements

Results

  • Intel architecture, specifically using built-in functions of Intel AVX2 and AVX-512 together with the Aleph lossless data compression algorithm, is enabling Synesis to reduce the use of CPU, RAM, and video storage in its Kipod platform
  • Ultimately this will allow Synesis to reduce the server cost per camera, bringing the Kipod platform within the reach of a wider pool of customers and supporting its expansion into new market segments

Business Challenge: Generating Sustainable Revenue Growth

Synesis is a CSP headquartered in Minsk, Belarus. With core competences in artificial intelligence (AI), cloud, big data, and instant messaging, Synesis runs large projects for government and global consumers. These projects include smart cities and public safety, sports and event management, infrastructure, messengers and chatbots, and games and social casinos.

Like all CSPs, Synesis is challenged to generate sustainable revenue growth in a highly competitive market segment. To this end, Synesis has developed Kipod – an AI-based platform for smart cities, public safety, and law enforcement, which generates long-term revenue streams through its software-as-a-service (SaaS) model.

Kipod enables instant search across the big data of CCTV video and real-time crime detection, using advanced machine learning algorithms and video content analytics. It enables unlimited users from different organizations to collaborate and analyze unlimited amounts of data for vehicle identification, crowd control, traffic analysis, threat detection, audio analytics, and more. Kipod is General Data Protection Regulation (GDPR) compliant and built on open standards, offering users full transparency.

Synesis currently operates Kipod in Azerbaijan, Belarus, Kazakhstan, Russia, and the United Kingdom on a national scale and is now looking to break into new market segments across the Middle East, Asia, and North America. To help with this expansion strategy, Synesis is investigating how it can make Kipod attractive to an even wider pool of customers by minimizing the server cost per camera, while maintaining superior performance.

Kipod*: Underpinned by Intel® Technology

Kipod is an open source platform that runs on Intel architecture. Currently, Kipod is optimized to run on the Intel Xeon Scalable processor, although it can also run on all previous-generation Intel® processor technologies. Using Intel AVX2, Synesis estimates that it has already been able to boost the performance of its Kipod platform and reduce associated hardware costs by a factor of three.

For inference, depending on the task at hand, Synesis chooses to run its convolutional neural networks (CNNs) with either the Intel Inference Engine or Synet – its own in-house developed engine. A component of the Intel® Distribution of OpenVINO™ toolkit, the Intel Inference Engine provides a unified application program interface (API) for supported Intel® platforms that might have different low-level inference APIs. The Inference Engine reads a network’s intermediate representation (IR) files and can execute different layers on different target platforms through the same unified API, allowing developers to combine optimized inference with application logic when deploying deep learning solutions.

Synesis estimates that around half of its CNNs perform best with the Intel Inference Engine, while the other half perform best with Synet. For example, a recent proof of concept (PoC) showed that the Intel Inference Engine outperforms Synet running the arcface face recognition model, returning results in 72.934 milliseconds (ms) compared to 75.470 ms, but that Synet performs better for sphereface_v2 (26.007 ms compared to 40.954 ms) and fqa_dw (0.203 ms compared to 0.227 ms) – see Figure 1.

Figure 1. Performance of the Intel Inference Engine and Synet running different face recognition models.
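The per-model timings above can drive a simple selection rule: benchmark each CNN on both engines and route it to whichever is faster. A minimal sketch of that rule, using the Figure 1 numbers quoted in the text (the dictionary layout and engine labels are illustrative, not a real API):

```python
# Pick the faster inference engine per model, as Synesis does.
# Timings (ms) are the Figure 1 numbers quoted in the text.
BENCHMARKS = {
    "arcface":       {"inference_engine": 72.934, "synet": 75.470},
    "sphereface_v2": {"inference_engine": 40.954, "synet": 26.007},
    "fqa_dw":        {"inference_engine": 0.227,  "synet": 0.203},
}

def best_engine(model: str) -> str:
    """Return the engine with the lowest measured latency for a model."""
    timings = BENCHMARKS[model]
    return min(timings, key=timings.get)

for model in BENCHMARKS:
    print(model, "->", best_engine(model))
```

In practice such a table would be refreshed whenever a model or engine version changes, since the faster engine can flip between releases.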

Currently, the Synesis team is successfully using the Intel Inference Engine for virtual video analytics modules: face recognition, number plate reading, vehicle type and color detection, and behavioral analytics – for example, unattended luggage and crowd detection. Compared to the underlying frameworks it used to train its CNNs (darknet*/caffe*/tensorflow*/pytorch*), Synesis estimates that the Intel Inference Engine has enabled it to improve the performance of its virtual video analytics modules by 250-400 percent.

Synesis also estimates that Intel AVX-512 – a feature of Intel Xeon Scalable processors – will allow it to reduce the total number of CPU nodes by a further 50 percent, on top of the gains it has already achieved using Intel AVX2. Intel AVX-512 boosts performance and throughput for the most demanding computational tasks in applications, such as modeling and simulation, data analytics and machine learning, data compression, visualization, and digital content creation.

For the Kipod platform, Synesis also uses Intel® Threading Building Blocks (Intel® TBB) software products, the Intel® Math Kernel Library (Intel® MKL), and Intel® Hyper-Threading Technology (Intel® HT Technology):

  • Intel TBB is a C++ template library developed by Intel for parallel programming on multi-core processors. Using TBB, Synesis is able to break computations down into tasks that can run in parallel. The library manages and schedules threads to execute these tasks.
  • Intel MKL allows Synesis to increase performance and reduce development times with optimized math functions, including BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math.
  • Intel HT Technology uses processor resources more efficiently, enabling multiple threads to run on each core. It enables Synesis to increase processor throughput and improve overall performance on threaded software.
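Intel TBB itself is a C++ library, but the pattern it provides – break a computation into independent tasks and let a scheduler run them across cores – can be sketched in a few lines with Python’s standard-library thread pool (the worker count and chunk size here are illustrative choices, not TBB defaults):

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares(values, workers=4, chunk=1000):
    """Split the input into independent chunks and reduce them in
    parallel, analogous to a TBB parallel reduction."""
    chunks = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(lambda c: sum(v * v for v in c), chunks)
    return sum(partials)

print(sum_of_squares(list(range(10))))  # 285
```

The key property, as in TBB, is that the tasks share no mutable state, so the scheduler is free to run them in any order on any core.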

Business-Enabling Benefits of Intel® Technology

The Kipod platform provides operators of smart city, public safety, and law enforcement projects with a carrier-class AI-based video application pre-trained on large datasets for person and vehicle identification, behavior and traffic analysis, threat detection, and audio analytics.

Running the platform on Intel CPUs rather than on a competitor’s CPU-plus-GPU architecture enables Synesis to quickly and cost-effectively deliver its cloud-based AI platform. “Developing software for a CPU is faster than for a CPU/GPU solution as instructions do not have to be sent back and forth to a GPU subsystem,” explains Nikolai Ptitsyn, managing partner at Synesis. “It’s easier to orchestrate an architecture based purely on Intel CPUs and we no longer need to deal with multiple vendors. Customers can also benefit from greater flexibility as well as simpler and more cost-effective software support, cooling, and maintenance.”

Metro CCTV* is using the Kipod AI platform to monitor petrol stations, oil terminals, power stations, and other infrastructure across the UK. “The AI makes it possible for one agent to keep an eye on 2,000+ cameras. The Kipod AI platform detects suspicious behavior, as well as recognizes human faces, vehicle license plates, and vehicle types. All analytics run on Intel CPUs in real-time,” says John Coyle, the CEO of Metro CCTV.

Kipod was also a key component of the technology solution Synesis provided to the organizers of the 2019 European Games in Minsk, Belarus. Kipod delivered on-demand security infrastructure from the cloud to the sporting venues, enabling the security staff to collaborate efficiently and keep safe the millions of athletes, support staff, and members of the public attending the events. Security staff benefited from Kipod’s AI capabilities for face and vehicle recognition, suspicious behavior detection, street traffic analytics, and audio analytics.

Figure 2. Nikolai Ptitsyn, Kirill Sancharov, and Alexander Shatrov of Synesis Aleph.

Breakthrough Compression Performance

Synesis is also investigating the benefits of new data compression algorithm Aleph to further reduce the video storage costs of the Kipod platform.

Aleph is a new general-purpose lossless compression algorithm. Based on patent-pending technology, Aleph is expected to outperform existing codecs such as Brotli* and 7-Zip* by increasing the compression ratio of images, video, or text while consuming a similar amount of CPU cycles and memory.
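Aleph itself is not publicly available, but the comparison it invites – compression ratio at similar CPU cost – can be measured the same way for any lossless codec. A sketch using two standard-library codecs (zlib and LZMA) as stand-ins; the sample payload is invented for illustration:

```python
import lzma
import zlib

def compression_ratio(data: bytes, compress) -> float:
    """Original size divided by compressed size; higher is better."""
    return len(data) / len(compress(data))

# Illustrative payload: repetitive metadata compresses well.
sample = b"smart city video frame metadata " * 256

for name, codec in [("zlib", zlib.compress), ("lzma", lzma.compress)]:
    print(f"{name}: {compression_ratio(sample, codec):.1f}x")
```

For a lossless codec the round trip must be exact – `decompress(compress(data)) == data` – which is the property that lets Aleph sit transparently underneath video storage.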

Nikolai Ptitsyn says: “This next-generation compression will boost Kipod, our video-based AI for smart cities, and will provide us with an unparalleled competitive advantage. We are also working on integrating the Aleph codec in software-defined distributed storages such as Ceph*.”

To improve performance even further, Synesis also plans to optimize the Aleph Codec for the Intel AVX-512 instruction set.

Figure 3. Nikolai Ptitsyn, Synesis managing partner.

Conclusion

Intel architecture, specifically Intel AVX2 and Intel AVX-512 together with Aleph compression technology, is enabling Synesis to reduce the use of CPU, RAM, and video storage in its Kipod platform, while maintaining superior performance. Ultimately this will allow Synesis to reduce the server cost per camera, bringing the Kipod platform within the reach of a wider pool of customers and supporting its expansion into new market segments.

Moving forward, Synesis is interested in investigating Intel® Deep Learning Boost (Intel® DL Boost) on 2nd generation Intel Xeon Scalable processors. Based on Intel AVX-512, Intel DL Boost’s Vector Neural Network Instructions (VNNI) speed the delivery of inference results. Nikolai Ptitsyn estimates this instruction set could allow Synesis to improve the performance of its CNN inference by up to 4x compared to Intel’s existing solutions, provided it is able to adapt its algorithms to use 8-bit integers.
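The 8-bit adaptation mentioned above amounts to quantizing floating-point weights and activations to int8 so that VNNI can operate on them. A minimal symmetric-quantization sketch in pure Python (the example weights and the single shared scale are illustrative; production schemes are usually per-channel):

```python
def quantize_int8(values):
    """Map floats to int8 codes with a shared symmetric scale."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes."""
    return [v * scale for v in q]

weights = [0.92, -0.41, 0.07, -1.27]  # illustrative values
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
print(q, restored)
```

The quantization error is bounded by half the scale step, which is why inference accuracy typically survives the move to 8-bit while throughput improves.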

Looking even further ahead, Synesis is also interested in the Intel AVX-512 BF16 instruction set – a feature of future Intel Xeon Scalable processors code-named “Cooper Lake.” Nikolai Ptitsyn says this instruction set could improve the performance of Synesis’s CNN inference by up to a further 2x, provided it is able to adapt its algorithms to use 16-bit floats (bfloat16).
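bfloat16 keeps float32’s 8-bit exponent but only 8 bits of significand, so a float32 value can be reduced to bfloat16 precision simply by zeroing its low 16 bits. A sketch of that conversion and the precision it trades away (this uses truncation for brevity; hardware typically rounds to nearest):

```python
import struct

def to_bfloat16(x: float) -> float:
    """Truncate a float32 to bfloat16 precision, returned as a float."""
    bits, = struct.unpack(">I", struct.pack(">f", x))
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(to_bfloat16(1.0))      # exactly representable
print(to_bfloat16(3.14159))  # only ~3 significant decimal digits survive
```

Because the exponent range matches float32, CNN weights rarely overflow after conversion; the cost is precision, which inference usually tolerates.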

Spotlight on Synesis

Synesis Group is a private IT firm and business incubator headquartered in Minsk, Belarus. Founded in 2007 as a software design powerhouse, Synesis helped Viber*, Playtika*, Yandex*, Alfresco*, and other well-known companies to develop successful products. Today, more than a billion people worldwide are using these products daily.

As of today, Synesis has evolved into a cloud service provider (CSP) operating large projects for governments and global consumers. With a team of 1200+ world-class engineers, an extensive product portfolio, and a strong patent position, Synesis is one of the leading companies in AI.

Lessons Learned

The key lessons that cloud service providers (CSPs) can learn from Synesis’s experience are:

  • Running AI-based video applications on Intel CPUs rather than a hybrid CPU-plus-GPU architecture enables quick and cost-effective software development, easy orchestration, and greater flexibility, as well as more economical software support, cooling, and maintenance for customers
  • The Intel AVX2 and Intel AVX-512 instruction sets reduce CPU and RAM usage, lowering costs while maintaining superior performance
  • The new general-purpose lossless compression algorithm Aleph has the potential to further reduce video storage and bandwidth requirements

Technical Components of Solution

  • The Intel Inference Engine has enabled Synesis to improve the performance of its virtual video analytics modules by 250-400 percent, compared to the underlying frameworks (darknet*/caffe*/tensorflow*/pytorch*) it used to train its convolutional neural networks (CNNs)
  • Synesis estimates that Intel AVX2 has enabled it to boost the performance of its Kipod platform and reduce associated hardware costs by a factor of three. In the future Synesis estimates that Intel AVX-512 will allow it to reduce the total number of CPU nodes by a further 50 percent
  • Aleph data compression technology is expected to further reduce video storage and bandwidth requirements

Explore Related Intel® Products

Intel® Xeon® Scalable Processors

Drive actionable insight, count on hardware-based security, and deploy dynamic service delivery with Intel® Xeon® Scalable processors.

Learn more

Intel® Deep Learning Boost

Intel® Xeon® Scalable processors take embedded AI performance to the next level with Intel® Deep Learning Boost.

Learn more

OpenVINO™ Toolkit

Build end-to-end computer vision solutions quickly and consistently on Intel® architecture and our deep learning framework.

Learn more

Notices and Disclaimers

Intel® technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at https://www.intel.de.

Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit https://www.intel.de/benchmarks.

Performance results are based on testing as of the date set forth in the configurations and may not reflect all publicly available security updates. See configuration disclosure for details. No product or component can be absolutely secure.

Cost reduction scenarios described are intended as examples of how a given Intel®-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.

Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced data are accurate.

In some test cases, results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.

Product and Performance Information

1

All tests were performed on the Intel® Core™ i9-7900X X-series processor, 3.3 GHz, 10 cores, 20 threads, running the Microsoft Windows* 10 operating system (Linux subsystem, Ubuntu* 16.04 LTS). The Synet test used the Simd Library, compiled with the Intel® Advanced Vector Extensions 512F (Intel® AVX-512F) and Intel® Advanced Vector Extensions 512BW (Intel® AVX-512BW) CPU extensions. For the Intel® Inference Engine, version 201901. with TBB was used (compiled version downloaded from the official website). The tests were carried out by Synesis in May 2019.