Preface

For those new to iOS, the graphics-related frameworks can be confusing, both in name and in function. The purpose here is to summarize the various frameworks and distinguish their concepts and roles, as a guide for choosing what to study next. This article therefore does not elaborate on any specific framework; it only provides orientation, and readers can pick a direction for further study. To that end, the author has summarized the various frameworks in a single diagram.

In general, the iOS frameworks related to graphics and image processing are:

    1. UI graphics framework — UIKit
    2. Core Animation framework
    3. Core Graphics & Quartz 2D
    4. Traditional cross-platform graphics framework — OpenGL ES
    5. Apple's newest graphics framework — Metal
    6. Core Image, Apple's image filter framework
    7. GPUImage, a third-party filter solution for video
    8. Game engines — Scene Kit (3D) and Sprite Kit (2D)
    9. Computer vision on iOS — OpenCV for iOS

1. UI Graphics Framework — UIKit

UIKit

UIKit is a set of Objective-C APIs that provide an Objective-C wrapper around line graphics, Quartz images, and color manipulation, as well as 2D drawing, image processing, and user-interface-level animation.

UIKit includes classes such as UIBezierPath (drawing lines, arcs, ellipses, and other shapes), UIImage (displaying images), UIColor (color operations), and UIFont and UIScreen (font and screen information). It also provides functions for drawing into, and rendering from, bitmap graphics contexts and PDF graphics contexts, plus support for standard views and for printing.

Relationship between UIKit and Core Graphics

In UIKit, the UIView class automatically creates a graphics context when drawing (a CGContext from the Core Graphics layer) and makes it the current drawing context. While drawing, you can call the UIGraphicsGetCurrentContext function to obtain this current context, for example:

- (void)drawRect:(CGRect)rect {
    NSLog(@"%s", __func__);
    // 1. Get the current graphics context
    CGContextRef contextRef = UIGraphicsGetCurrentContext();
    // 2. Build a path: starting point and end point
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:CGPointMake(10, 10)];
    [path addLineToPoint:CGPointMake(100, 100)];
    // Set the stroke color
    [[UIColor whiteColor] setStroke];
    // 3. Add the path to the context and stroke it
    CGContextAddPath(contextRef, path.CGPath);
    CGContextStrokePath(contextRef);
}

This code, in a UIView subclass, calls the UIGraphicsGetCurrentContext function to obtain the current graphics context, adds a path to that context, and finally strokes it.

2. Core Animation framework

Core Animation

Core Animation is an Objective-C API that implements a high-performance compositing engine and provides an easy-to-use programming interface for adding smooth motion and dynamic feedback to the user interface.

Core Animation is the basis on which UIKit implements animation and transforms, and it is also responsible for view compositing. Using Core Animation, you can implement custom animations and fine-grained animation control, and create complex layered 2D views that support animation and transforms.

Core Animation is not itself a drawing system; it is the infrastructure for compositing and manipulating displayed content in hardware. At the heart of this infrastructure are layer objects, which manage and manipulate the displayed content. In iOS, each view corresponds to a Core Animation layer object and, like views, layers are organized into a layer tree. A layer captures the view's content as a bitmap that the graphics hardware can manipulate easily. In most applications layers are used as a way of managing views, but it is also possible to create standalone layers in a layer tree to display content that views do not support.

OpenGL ES content can also be integrated with Core Animation content.

To animate with Core Animation, you change layer property values, which triggers the execution of an Action object; different Action objects implement different animations.

Core Animation base class and subclasses

Core Animation provides the following set of classes that applications can use to provide support for different Animation types:

  • CAAnimation is an abstract base class. CAAnimation adopts the CAMediaTiming and CAAction protocols to provide timing (duration, speed, repeat count, etc.) and action behavior (start, stop, etc.) for animations.
  • CAPropertyAnimation is an abstract subclass of CAAnimation that supports animating a layer property specified by a key path.
  • CABasicAnimation is a concrete subclass of CAPropertyAnimation that provides simple interpolation for a layer property.
  • CAKeyframeAnimation is also a concrete subclass of CAPropertyAnimation, providing keyframe animation support.
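As a minimal sketch of how these classes fit together, a CABasicAnimation can animate a layer property identified by a key path, with the CAMediaTiming properties controlling duration and repetition (`someView` here is an assumed, pre-existing UIView in the host code):

```objc
#import <QuartzCore/QuartzCore.h>

// Sketch: fade a layer out and back using a key-path animation.
CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @1.0;            // start fully opaque
fade.toValue = @0.2;              // interpolate toward mostly transparent
fade.duration = 0.5;              // CAMediaTiming: half a second
fade.autoreverses = YES;          // CAMediaTiming: play back in reverse
fade.repeatCount = 2;             // CAMediaTiming: repeat twice
[someView.layer addAnimation:fade forKey:@"fade"];
```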

3. Core Graphics & Quartz 2D

Core Graphics

Core Graphics is a set of C-based APIs that support vector graphics and the drawing of lines, shapes, patterns, paths, gradients, bitmap images, and PDF content.

Quartz 2D

Quartz 2D is the 2D rendering engine in Core Graphics. Quartz is resource- and device-independent, offering path rendering, anti-aliased rendering, gradient fill patterns, images, transparency rendering and transparency layers, shadowing and shading, color management, coordinate transforms, font rendering, offscreen rendering, and the creation, display, and parsing of PDF documents.

Quartz 2D can be used together with all graphics and animation technologies, such as Core Animation, OpenGL ES, and UIKit.

Quartz 2D draws using the painter's model.

Graphics Context

The graphics context used in Quartz 2D is likewise represented by the CGContext class.

In Quartz 2D, a graphics context serves as the drawing target. When drawing with Quartz 2D, all device-specific features are contained in the particular type of graphics context you use, so by handing the same drawing routine a different graphics context you can draw the same image on a different device; image drawing is therefore device-independent.

A context is an abstract thing: it is not only a layer that can be drawn on, but it also holds the parameters set for the current layer, such as shadow, line width, and drawing mode. It can be likened to a new Photoshop layer carrying the current settings for strokes, colors, and so on.

For mobile platforms, there are three common graphics contexts:

  • A bitmap graphics context: usually used to draw pictures or custom controls.
    • View graphics context: created automatically by UIView; when you override the UIView drawRect: method, your content is drawn into this context.
    • Bitmap graphics context: content drawn into this context is stored in a block of memory as a bitmap. Essentially, you allocate memory for an image and then draw into it; the context abstracts that memory as a context.
  • A PDF graphics context: used to generate PDF files.
  • A layer context: used for offscreen drawing.

The main classes offered by Quartz 2D include:
  • CGContext: represents a graphics context;
  • CGPath: builds vector-graphics paths that can be filled and stroked;
  • CGImage: represents a bitmap;
  • CGLayer: represents a drawing layer that can be used for repeated and offscreen drawing;
  • CGPattern: represents a pattern, used for repeated drawing;
  • CGShading and CGGradient: used for gradients and shading;
  • CGColor and CGColorSpace: used for color and color-space management;
  • CGFont: used for drawing text;
  • CGPDFContentStream, CGPDFScanner, CGPDFPage, CGPDFObject, CGPDFStream, and CGPDFString: used to create, parse, and display PDF files.
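To illustrate the device independence described above, here is a small sketch that draws into an offscreen bitmap graphics context instead of a view context; the same Core Graphics calls would work unchanged inside a drawRect: override (the size and colors are illustrative):

```objc
#import <UIKit/UIKit.h>

// Sketch: render a stroked line into an offscreen bitmap context,
// then capture the result as a UIImage.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();

CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
CGContextSetLineWidth(ctx, 4.0);
CGContextMoveToPoint(ctx, 10, 10);
CGContextAddLineToPoint(ctx, 190, 190);
CGContextStrokePath(ctx);

// The bitmap the context abstracts becomes an image we can reuse.
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```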

4. Traditional cross-platform graphics framework — OpenGL ES

OpenGL ES

OpenGL ES is a multi-purpose, open-standard, C-based graphics library for embedded systems, used to visualize 2D and 3D data. OpenGL is designed to translate a set of graphics function calls to the underlying graphics hardware (GPU), which executes the graphics commands to perform complex graphics operations and calculations, enabling the high-performance, high-frame-rate 2D and 3D rendering the GPU provides.

The OpenGL ES specification itself defines neither a drawing surface nor a drawing window, so in order to use it, iOS must provide and create an OpenGL ES rendering context, create and configure a framebuffer to store the results of draw commands, and create and configure one or more render targets.

EAGL

In iOS, the EAGLContext class provided by EAGL implements the rendering context, maintaining the hardware state that OpenGL ES uses. EAGL is an Objective-C API that provides the interface for integrating OpenGL ES with Core Animation and UIKit.

An EAGLContext object must be initialized before any OpenGL ES function can be called. Every thread in an iOS application has a current context that OpenGL ES calls use or change.

The EAGLContext class method setCurrentContext: sets the current context of the current thread, and the class method currentContext returns it. Before switching between two contexts on the same thread, the glFlush function must be called to ensure that previously submitted commands are delivered to the graphics hardware.
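A minimal sketch of the calls just described, assuming OpenGL ES 2.0 (`otherContext` stands in for a second, pre-existing EAGLContext):

```objc
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES2/gl.h>

// Create a context and make it current on this thread.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];

// ... issue OpenGL ES commands here ...

// Before switching to another context on the same thread,
// flush pending commands to the graphics hardware.
glFlush();
[EAGLContext setCurrentContext:otherContext];
```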

GLKit

You can use OpenGL ES in different ways to render OpenGL ES content to different targets: GLKit and CAEAGLLayer.

To create full-screen views or to integrate OpenGL ES content with UIKit views, you can use GLKit. When using GLKit, the GLKView class provided by GLKit itself implements the rendering goal and creates and maintains a framebuffer.

GLKit is a set of Objective-C classes that provide an object-oriented interface for using OpenGL ES to simplify the development of OpenGL ES applications.

CAEAGLLayer

In order to make OpenGL ES content part of a Core Animation layer, you can use a CAEAGLLayer as the rendering target, create a separate framebuffer, and implement and control the entire drawing process yourself.

GLKit supports four key areas of 3D application development:

1) The GLKView and GLKViewController classes provide a standard OpenGL ES view and an associated rendering loop. GLKView can be used as the render target for OpenGL ES content, and GLKViewController provides control and animation for content rendering. The view manages and maintains a framebuffer; the application simply draws into it.

2) GLKTextureLoader gives applications a way to automatically load texture images, in the various image formats supported by iOS, into an OpenGL ES image context, performing appropriate conversions along the way; it supports both synchronous and asynchronous loading.

3) A math library that provides vector, matrix, and quaternion implementations, plus the matrix-stack operations of OpenGL ES 1.1.

4) The effect classes provide standard implementations of common shading effects. You configure an effect and its associated vertex data, and it creates and loads the appropriate shaders. GLKit includes three configurable shading classes: GLKBaseEffect implements the key lighting and material modes of the OpenGL ES 1.1 specification; GLKSkyboxEffect provides a skybox effect; GLKReflectionMapEffect adds reflection-mapping support on top of GLKBaseEffect.
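A hedged sketch of area 1) above: hosting OpenGL ES content in a GLKView, with a GLKViewController subclass driving the render loop (the class name and clear color are illustrative assumptions):

```objc
#import <GLKit/GLKit.h>

@interface MyGLController : GLKViewController
@end

@implementation MyGLController
- (void)viewDidLoad {
    [super viewDidLoad];
    // GLKView manages the framebuffer; we only supply a context and draw.
    GLKView *view = (GLKView *)self.view;
    view.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
}

// Called each frame by the GLKViewController's render loop.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.2, 0.3, 0.4, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
}
@end
```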

5. Apple’s newest graphics framework — Metal

The Metal framework supports GPU hardware acceleration, advanced 3D graphics rendering, and large-scale parallel computation. It provides an advanced yet streamlined API that guarantees fine-grained, low-level control over organization, processing, graphics presentation, command execution, and the management of the data resources those commands use. Its core goal is to reduce CPU overhead as much as possible, leaving most of the runtime load to the GPU.

The alternatives to Metal when writing a rendering engine on a low-level graphics API are OpenGL and OpenGL ES. OpenGL supports almost every platform, including OS X, Windows, Linux, and Android, and has a wealth of tutorials, books, and best-practice guides. Resources for Metal are currently very limited, and it runs only on iPhones and iPads with 64-bit processors. On the other hand, because of OpenGL's limitations its performance cannot match that of Metal, which was designed to solve exactly those problems.

If you want a high-performance parallel computing library on iOS, the answer is simple: Metal is the only option. OpenCL is a private framework on iOS, and Core Image (which uses OpenGL) is neither powerful nor flexible enough for such tasks.
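As a small sketch of Metal's command model, the basic object graph looks like this; device and queue are created once, and per-frame work is encoded into command buffers and handed to the GPU (the encoding step is elided):

```objc
#import <Metal/Metal.h>

// Sketch: the basic Metal object graph.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> queue = [device newCommandQueue];

id<MTLCommandBuffer> commandBuffer = [queue commandBuffer];
// ... encode render or compute passes here ...
[commandBuffer commit];             // hand the encoded work to the GPU
[commandBuffer waitUntilCompleted]; // optional: block until the GPU finishes
```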

6. Core Image, Apple’s image filter framework

Core Image is an image processing framework added to the iOS platform in iOS 5. It provides powerful and efficient image processing for manipulating and analyzing pixel-based images, and it ships with many powerful built-in filters (currently more than 180). These filters provide a wide variety of effects and can be chained together into a filter chain to build powerful custom effects.

A filter is an object that has many inputs and outputs and performs some transformations. For example, a blur filter may require an input image and a blur radius to produce a suitably blurred output image.

A filter chain is a network of filters linked together so that the output of one filter can be the input of another. In this way, elaborate effects can be achieved.
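A sketch of a two-filter chain as described above; the filter names are real built-ins, while `inputImage` is an assumed, pre-existing CIImage:

```objc
#import <CoreImage/CoreImage.h>

// Sketch: chain a sepia filter into a blur filter.
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:inputImage forKey:kCIInputImageKey];
[sepia setValue:@0.8 forKey:kCIInputIntensityKey];

CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
// The output of one filter becomes the input of the next.
[blur setValue:sepia.outputImage forKey:kCIInputImageKey];
[blur setValue:@2.0 forKey:kCIInputRadiusKey];

CIImage *result = blur.outputImage;
```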

Since iOS 8, CIFilter can be subclassed to build custom filters that meet business requirements.

Core Image APIs fall into three main categories:

  • CIImage: a class that holds image data; it can be created from a UIImage, an image file, or pixel data, including unprocessed pixel data.
  • CIFilter: represents an applied filter; the class in the framework that determines image attributes. It operates on all pixels, with key-value settings determining how the operation behaves.
  • CIContext: represents the context. As in Core Graphics and Core Data, where contexts handle rendering and managed objects, the Core Image context is the concrete object through which image processing is performed, and you obtain the resulting image from it.

Another advantage of Core Image is that it can be processed by CPU or GPU as required.

// Create a CPU-based CIContext (GPU-based is the default; CPU rendering needs an extra option)
context = [CIContext contextWithOptions:
              [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                          forKey:kCIContextUseSoftwareRenderer]];

// Create a GPU-based CIContext
context = [CIContext contextWithOptions:nil];

// Create a GPU-based CIContext from an EAGLContext
EAGLContext *eaglctx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
context = [CIContext contextWithEAGLContext:eaglctx];

7. GPUImage, a third-party filter solution for videos

GPUImage's advantages: it supports down to iOS 4.0, with custom filters supported on iOS 5.0 and later. On low-end devices GPUImage performs better (per GPUImage's home page; the author did not verify this on a real device). GPUImage also performs better for video processing. Its code is fully open and transparent, and you can customize complex pipeline operations to suit business needs; it is highly customizable.

8. Game Engines — Scene Kit (3D) and Sprite Kit (2D)

For developers looking for a game engine, Metal is not the best choice. Apple's official Scene Kit (3D) and Sprite Kit (2D) are better options. These APIs provide higher-level game engines, including physics simulation.

There are also more full-featured 3D engines, such as Epic’s Unreal Engine or Unity, both of which are cross-platform. With these engines, you can benefit from Metal without having to use Metal’s apis directly.

2D rendering — SpriteKit

SpriteKit lets developers build high-performance, power-efficient 2D games. iOS 8 added a number of enhancements that make the 2D gaming experience even better: technologies that help make game characters' movement more natural, and that make it easier for developers to add force fields, detect collisions, and generate new lighting effects.
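A small sketch of SpriteKit's action model; the node's color and size are illustrative:

```objc
#import <SpriteKit/SpriteKit.h>

// Sketch: move a sprite and then fade it out, as a sequence of actions.
SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithColor:[SKColor redColor]
                                                    size:CGSizeMake(40, 40)];
SKAction *move = [SKAction moveByX:100 y:0 duration:1.0];
SKAction *fade = [SKAction fadeOutWithDuration:0.5];
[sprite runAction:[SKAction sequence:@[move, fade]]];
```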

3D rendering — SceneKit

SceneKit is designed for casual 3D games and allows developers to render 3D game scenes. SceneKit has a built-in physics engine, particle generator, and a variety of easy-to-use tools that make it easy and quick to write actions for 3D objects. Not only that, but it’s also fully integrated with SpriteKit, so developers can add SpriteKit materials directly to their 3D games.

9. Computer Vision on iOS — OpenCV for iOS

OpenCV's API is C++. It is made up of different modules that contain an extremely wide range of methods, from low-level image color spaces up to high-level machine learning tools.

Using a C++ API isn't something most iOS developers do every day; you need Objective-C++ files to call OpenCV functions. In other words, you can't call OpenCV functions directly from Swift or Objective-C. OpenCV's iOS tutorial tells you to simply change the file extension to .mm for every class that uses OpenCV, including view controllers. That might work, but it's not a good idea. The right approach is to write an Objective-C++ wrapper around all the OpenCV functionality you intend to use in your app. These Objective-C++ wrappers translate OpenCV's C++ API into a safe Objective-C API that can be used freely from any Objective-C class.

By taking the wrapper route, your project calls C++ code only inside those wrappers, avoiding many headaches, such as untraceable compile errors caused by a C++ header being referenced from the wrong file after extensions were changed wholesale.
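A hedged sketch of such a wrapper; the class name and grayscale operation are illustrative, and the `UIImageToMat`/`MatToUIImage` helpers are assumed to come from OpenCV's iOS support header:

```objc
// ImageConverter.h — plain Objective-C interface, safe to import anywhere.
#import <UIKit/UIKit.h>

@interface ImageConverter : NSObject
+ (UIImage *)grayscaleImage:(UIImage *)image;
@end

// ImageConverter.mm — Objective-C++ implementation; only this file sees C++.
#import "ImageConverter.h"
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h> // UIImageToMat / MatToUIImage helpers

@implementation ImageConverter
+ (UIImage *)grayscaleImage:(UIImage *)image {
    cv::Mat mat;
    UIImageToMat(image, mat);                    // UIImage -> cv::Mat (RGBA)
    cv::Mat gray;
    cv::cvtColor(mat, gray, cv::COLOR_RGBA2GRAY); // drop color channels
    return MatToUIImage(gray);                   // cv::Mat -> UIImage
}
@end
```

Callers in plain Objective-C (or Swift via a bridging header) see only the safe `ImageConverter` interface and never touch C++ directly.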

OpenCV declares the namespace cv, so OpenCV classes are prefixed with cv::, for example cv::Mat, cv::Algorithm, and so on. You can also write using namespace cv in .mm files to avoid prefixing a bunch of class names with cv::. However, you must keep the namespace prefix for some class names, such as cv::Rect and cv::Point, because they conflict with the Rect and Point defined in MacTypes.h. Although this is just a matter of preference, I prefer to use cv:: everywhere for consistency.

Generally speaking, OpenCV is CPU-based, where its data types and support are most complete. There are GPU-based modules, but their interfaces are poor, and quite a few don't support floating-point types (not even common operations like histogram and integral). For example, when a threshold test is needed, the data must be passed back to the CPU, because GPU functions process in parallel. Every rewritten algorithm module needs its own performance test; sometimes the result is exciting, and sometimes it is even slower than the CPU.

10. References

  • Core Animation

https://www.sohu.com/a/203987045_468740
https://blog.csdn.net/huangznian/article/details/42919221
https://www.jianshu.com/p/446a6b72f981
https://www.jianshu.com/p/439e158b44de

  • Metal

https://juejin.cn/post/6844903494072139789
https://www.jianshu.com/p/ce53d0178f20
https://blog.csdn.net/pizi0475/article/details/50232029
https://baike.baidu.com/item/Metal/10917053?fr=aladdin
https://zhuanlan.zhihu.com/p/24623380?utm_source=tuicool&utm_medium=referral

  • Core Image

https://objccn.io/issue-21-6/
http://colin1994.github.io/2016/10/21/Core-Image-OverView/?utm_source=tuicool&utm_medium=referral
https://blog.csdn.net/jingcheng345413/article/details/54967640
https://www.cnblogs.com/try2do-neo/p/3601546.html

  • Core Graphics

https://www.jianshu.com/p/e7a50dcbe7c8
https://www.jianshu.com/p/55cc1587e618
https://www.jianshu.com/p/494c57f49479
https://my.oschina.net/flyfishbay/blog/1504698

  • OpenCV

https://blog.csdn.net/zhonggaorong/article/details/78191514
http://www.opencv.org.cn/forum.php?mod=viewthread&tid=33549
https://blog.csdn.net/kelvin_yan/article/details/41804357
https://blog.csdn.net/sinat_31135199/article/details/53053188
https://blog.csdn.net/liyuefeilong/article/details/46292339

  • GPUImage

https://blog.csdn.net/fanbird2008/article/details/51707430

  • other

https://blog.csdn.net/goohong/article/details/40743883