OpenPPL is SenseTime's open-source deep learning inference platform built on a self-developed high-performance operator library. It enables artificial intelligence applications to run efficiently and reliably on existing computing platforms such as CPUs and GPUs, and provides AI inference services for cloud scenarios.
Official website: openppl.ai
At the recently held World Artificial Intelligence Conference 2021 (WAIC), SenseTime officially launched the OpenPPL program, deciding to open-source the cloud inference capabilities of its deep learning inference and deployment engine SensePPL to the technology community, in order to accelerate the popularization and progress of AI technology.
▎ Leave inference to OpenPPL, and leave your time for thinking
OpenPPL is built on a fully self-developed, extremely tuned high-performance operator library. It provides multi-backend deployment of AI models in cloud-native environments and supports efficient deployment of deep learning models from frameworks such as OpenMMLab.
I. High performance
Designs microarchitecture-friendly multi-level parallelization strategies at the task, data, and instruction levels, together with self-developed NVIDIA GPU and x86 CPU computing libraries, to meet the performance requirements of deployment scenarios for neural network inference and common image processing:
- Supports FP16 inference on NVIDIA T4 GPUs
- Supports FP32 inference on x86 CPUs
- Core operators are deeply optimized, with average performance leading the industry
II. OpenMMLab Deployment
Supports OpenMMLab's cutting-edge model series for detection, classification, segmentation, and super-resolution, and provides the image processing operators required for model pre- and post-processing:
- Follows the ONNX open standard and provides ONNX conversion support (see the export sketch after this list)
- Supports dynamic network features such as dynamic input shapes
- Provides high-performance implementations of MMCV operators
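As a minimal, hypothetical illustration of the ONNX path (OpenMMLab repositories ship their own pytorch2onnx tooling, and the model and shapes below are placeholders), a PyTorch model can be exported to an ONNX file that OpenPPL then consumes:

```python
import torch
import torchvision

# Hypothetical example: export a torchvision classifier to ONNX.
# This only sketches the generic ONNX export step that produces a
# model file an ONNX-based inference engine such as OpenPPL can load.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
    # Mark the batch dimension as dynamic so the exported graph keeps
    # the dynamic-shape capability mentioned above.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```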
III. Multiple back-end deployment on the cloud
Oriented toward heterogeneous inference scenarios on the cloud, with support for multi-platform deployment:
- Supports x86 FMA & AVX-512 instruction sets and the NVIDIA Turing architecture
- Supports parallel inference across heterogeneous devices (see the sketch below)
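As a rough sketch of how a backend can be selected programmatically, the snippet below uses the ppl.nn Python bindings. The class and method names follow the project's public Python samples and may differ between versions, and "model.onnx" is a placeholder path; treat this as an illustration rather than an authoritative API reference.

```python
from pyppl import nn as pplnn

# Illustrative sketch (names follow ppl.nn's public Python samples and
# may vary by version): create an x86 engine, build a runtime from an
# ONNX model, and run it. "model.onnx" is a placeholder path.
x86_engine = pplnn.X86EngineFactory.Create(pplnn.X86EngineOptions())
# For NVIDIA GPUs, a CUDA engine would be created analogously, e.g.:
# cuda_engine = pplnn.CudaEngineFactory.Create(pplnn.CudaEngineOptions())

builder = pplnn.OnnxRuntimeBuilderFactory.CreateFromFile("model.onnx", [x86_engine])
runtime = builder.CreateRuntime()

# Input tensors would be filled via runtime.GetInputTensor(i) before Run();
# data preparation is omitted here for brevity.
runtime.Run()
output = runtime.GetOutputTensor(0).ConvertToHost()
```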
Project links
Stars and issues are welcome~
- openppl-public/ppl.nn
- openppl-public/ppl.cv
🔗 Contact us: OpenPPL
▎ Epilogue
The evolution of machine learning is far from over, and we will keep a close eye on developments in the industry. OpenPPL will absorb industry needs, continue to maintain and expand the supported operator and model types, and keep optimizing the entire model inference pipeline over the long term.
Communication QQ group: 627853444