OpenPPL is SenseTime's open-source deep learning inference platform, built on its self-developed high-performance operator libraries. It enables AI applications to run efficiently and reliably on existing computing platforms such as CPUs and GPUs, providing AI inference services for cloud scenarios.

Official website: openppl.ai

At the 2021 World Artificial Intelligence Conference (WAIC), SenseTime officially launched the OpenPPL project, open-sourcing the cloud inference capabilities of its deep learning inference and deployment engine SensePPL to the technical community, accelerating the adoption and progress of AI technology!

▎ Leave inference to OpenPPL, and keep your time for thinking

OpenPPL is built on self-developed high-performance operator libraries with extensive performance tuning. It also provides multi-backend deployment of AI models in cloud-native environments and supports efficient deployment of deep learning models such as those from OpenMMLab.

1. High performance

Microarchitecture-friendly multi-level parallelism strategies (task, data, and instruction level) are designed, and NVIDIA GPU and x86 CPU compute libraries are independently developed to meet the performance requirements of neural network inference and common image processing in deployment scenarios.

  • Supports FP16 inference on NVIDIA T4 GPUs
  • Supports FP32 inference on x86 CPUs
  • Core operators are optimized, with average performance leading the industry

2. OpenMMLab deployment

Supports cutting-edge OpenMMLab models for detection, classification, segmentation, and super-resolution, and provides the image-processing operators required for model pre- and post-processing.

  • ONNX model conversion is supported in accordance with the open ONNX standard (see the export sketch after this list)
  • Supports dynamic network features
  • Provides high-performance implementations of MMCV operators
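
OpenPPL consumes models in the ONNX format, so a typical workflow is to export the trained PyTorch model to ONNX first and then hand the resulting file to ppl.nn. Below is a minimal export sketch; the torchvision classifier stands in for an OpenMMLab model, and the file name, opset version, and dynamic-axis settings are illustrative assumptions (OpenMMLab projects also ship their own deployment tooling).

```python
import torch
import torchvision

# Any PyTorch model works here; a torchvision classifier stands in for an
# OpenMMLab model purely for illustration.
model = torchvision.models.resnet18()
model.eval()

# Dummy input defining the exported input layout (NCHW).
dummy = torch.randn(1, 3, 224, 224)

# Export to ONNX; the dynamic batch axis illustrates the "dynamic network
# features" mentioned above. Opset 11 is an assumption -- use whatever your
# ppl.nn build supports.
torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```

The exported model.onnx can then be loaded and executed by ppl.nn; see the runtime examples in openppl-public/ppl.nn for the current API.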

3. Multi-backend deployment on the cloud

Targets heterogeneous inference scenarios on the cloud, supporting multi-platform deployment.

  • Supports x86 FMA & AVX512 and the NVIDIA Turing architecture
  • Supports parallel inference across heterogeneous devices (a conceptual sketch follows this list)
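
To make "parallel inference across heterogeneous devices" concrete, here is a purely conceptual sketch in Python. The FakeRuntime class is a hypothetical placeholder, not the ppl.nn API; real code would create an x86 engine and a CUDA engine through ppl.nn and dispatch batches to them (see the examples in openppl-public/ppl.nn for the actual engine-creation calls).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical placeholder runtime; in practice these would be ppl.nn
# runtimes backed by an x86 engine and a CUDA engine respectively.
class FakeRuntime:
    def __init__(self, device):
        self.device = device

    def run(self, batch):
        # Real code would fill input tensors, run the model, and read outputs.
        return f"{self.device} processed a batch of {len(batch)} items"

runtimes = [FakeRuntime("x86-cpu"), FakeRuntime("cuda-gpu")]
batches = [[0, 1, 2], [3, 4], [5, 6, 7], [8]]

# Round-robin batches across the heterogeneous runtimes and execute them in
# parallel, so the CPU and GPU work concurrently instead of serially.
with ThreadPoolExecutor(max_workers=len(runtimes)) as pool:
    futures = [
        pool.submit(runtimes[i % len(runtimes)].run, batch)
        for i, batch in enumerate(batches)
    ]
    for future in futures:
        print(future.result())
```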

▎ Project links

Stars and issues are welcome~

  • openppl-public/ppl.nn
  • openppl-public/ppl.cv

🔗 Contact us: OpenPPL

▎ Epilogue

The evolution of machine learning is far from over, and we will keep a close eye on the industry's progress. OpenPPL will take in the needs of the industry, continuously maintain and expand the set of supported operators and models, and keep optimizing the end-to-end model inference pipeline.

Communication QQ group: 627853444