Unity UGUI Principles part 1: Canvas rendering mode
The target
Understand the different UI Render modes
Use environment and version
Windows 7
Unity 5.2.5
Render Mode
There are three ways to render the UI:
- Screen Space – Overlay
- Screen Space – Camera
- World Space
Screen Space – Overlay
In this mode there is no Camera reference; the UI is drawn directly on top of all other graphics
- 1. Pixel Perfect: makes the image sharper, but adds performance overhead; with a lot of UI animation, the animation may stutter
- 2. Sort Order: the depth value; Canvases with higher values are drawn on top
Screen Space – Camera
Uses a Camera as a reference and places the UI plane at a set distance in front of that Camera. Because the Camera is the reference, the UI plane automatically adjusts its size when the screen size, resolution, or camera frustum changes. If a GameObject in the Scene is closer to the Camera than the UI plane, it will block the UI plane.
- 1. Render Camera: the Camera used to render the UI
- 2. Plane Distance: the distance between the UI plane and the Camera
- 3. Sorting Layer: the sorting layer the Canvas belongs to; layers can be managed under Edit → Project Settings → Tags and Layers → Sorting Layers
- 4. Order in Layer: the order within the Canvas's Sorting Layer; higher values are drawn on top
World Space
Treats the Canvas as a GameObject in world coordinates, i.e. as a 3D object, so the UI can be displayed in 3D
- 1. Event Camera: the Camera that handles UI events (Click, Drag)
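The same settings can also be applied from a script. Below is a minimal sketch (the component name is illustrative) of configuring each Render Mode and the fields each one uses:

```csharp
using UnityEngine;

// Illustrative sketch: configuring each Render Mode from code.
public class CanvasModeSetup : MonoBehaviour
{
    public Camera uiCamera;

    void Start()
    {
        Canvas canvas = GetComponent<Canvas>();

        // Screen Space - Overlay: no camera reference, drawn on top of everything
        canvas.renderMode = RenderMode.ScreenSpaceOverlay;
        canvas.sortingOrder = 1;         // Sort Order

        // Screen Space - Camera: rendered by a camera at Plane Distance
        canvas.renderMode = RenderMode.ScreenSpaceCamera;
        canvas.worldCamera = uiCamera;   // Render Camera
        canvas.planeDistance = 100f;     // Plane Distance

        // World Space: the Canvas behaves like any 3D object
        canvas.renderMode = RenderMode.WorldSpace;
        canvas.worldCamera = uiCamera;   // Event Camera
    }
}
```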
See Unity — Manual: Canvas
Docs.unity3d.com/Manual/clas…
Unity UGUI Principles part 2: Canvas Scaler scaling core
The target
- 1. Understand the different UI Scale Modes
- 2. Pixels Per Unit
- 3. Canvas Scale Factor
- 4. Reference Resolution (preset screen size)
- 5. The relation and algorithm between Screen Size and Canvas Size
Use environment and version
Windows 7
Unity 5.2.4
Canvas Scaler
Canvas Scaler is a Component in the Unity UI system that controls the overall size and pixel density of UI elements. Its scaling ratio affects everything under the Canvas, including font sizes and image borders.
Size
- Reference Resolution: preset screen size
- Screen Size: the current screen size
- Canvas Size: the width and height of the Canvas Rect Transform
Scale Factor
Docs.unity3d.com/ScriptRefer…
Used to scale the entire Canvas so that the Canvas Size matches the Screen Size
Let's start with the official code
CanvasScaler.cs
```csharp
protected void SetScaleFactor(float scaleFactor)
{
    if (scaleFactor == m_PrevScaleFactor)
        return;

    m_Canvas.scaleFactor = scaleFactor;
    m_PrevScaleFactor = scaleFactor;
}
```
The code shows that the Canvas Scaler scales every element under the Canvas by setting the Canvas's Scale Factor
- When Scale Factor is 1: Screen Size (800×600), Canvas Size (800×600), images at 1× size
- When Scale Factor is 2: Screen Size (800×600), Canvas Size (400×300), images at 2× size
With Scale Factor 2, the Canvas Size is adjusted so that, after scaling, it matches the Screen Size again: the 400×300 Canvas enlarged 2× is exactly the 800×600 Screen Size, and every image under it is enlarged 2× as well
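A quick way to observe this relationship is to log the Canvas's rect against the screen size; a minimal sketch (the component name is illustrative):

```csharp
using UnityEngine;

// Logs the relationship Canvas Size = Screen Size / Scale Factor.
public class CanvasSizeLogger : MonoBehaviour
{
    void Start()
    {
        Canvas canvas = GetComponent<Canvas>();
        Rect rect = canvas.GetComponent<RectTransform>().rect;
        // With scaleFactor = 2 on an 800x600 screen this prints a 400x300 canvas.
        Debug.LogFormat("Screen {0}x{1}, scaleFactor {2}, Canvas {3}x{4}",
            Screen.width, Screen.height, canvas.scaleFactor, rect.width, rect.height);
    }
}
```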
UI Scale Mode
Constant Pixel Size: the Canvas Size always equals the Screen Size, and the Scale Factor directly scales all UI elements
- Scale Factor: scales all elements under this Canvas by this factor
- Reference Pixels Per Unit:
Before explaining this field, we need Pixels Per Unit: in a Sprite's import settings, it defines how many pixels make up one unit in world coordinates
The test image used here is a 100×100 file, referred to below as "the test image"
For example, place a 1×1 Cube in the scene and a Sprite using the test image; both have a Transform Scale of 1. With Pixels Per Unit = 100, each unit consists of 100 pixels; the Sprite is 100×100 pixels, so in world coordinates it is 100/100 × 100/100 = 1×1 units
(Left: Cube, right: Sprite)
With Pixels Per Unit = 10, the Sprite is still 100×100 pixels, so it is 100/10 × 100/10 = 10×10 units
(Left: Cube, right: Sprite)
Conclusion:
- By default, one unit in Unity corresponds to 100 pixels
- From this, the formula can be derived:
Sprite size (units) = Pixels / Pixels Per Unit
Reference Pixels Per Unit then relates the Sprite's Pixels Per Unit to the UI: if a Sprite's Pixels Per Unit equals this value, one pixel in the Sprite corresponds to one pixel in the UI
Image.cs
```csharp
public float pixelsPerUnit
{
    get
    {
        float spritePixelsPerUnit = 100;
        if (sprite)
            spritePixelsPerUnit = sprite.pixelsPerUnit;

        float referencePixelsPerUnit = 100;
        if (canvas)
            referencePixelsPerUnit = canvas.referencePixelsPerUnit;

        return spritePixelsPerUnit / referencePixelsPerUnit;
    }
}
```
As the official code above shows, Image computes its effective pixelsPerUnit as spritePixelsPerUnit / referencePixelsPerUnit
Image.cs
```csharp
public override void SetNativeSize()
{
    if (overrideSprite != null)
    {
        float w = overrideSprite.rect.width / pixelsPerUnit;
        float h = overrideSprite.rect.height / pixelsPerUnit;
        rectTransform.anchorMax = rectTransform.anchorMin;
        rectTransform.sizeDelta = new Vector2(w, h);
        SetAllDirty();
    }
}
```
When the Image's native size is set, its width and height are computed as the sprite rect's width and height divided by pixelsPerUnit
To verify this, create a Canvas with the following parameters
Create an Image under the Canvas and set its Sprite to the test image with the parameters below. Four tests are performed here: after modifying Reference Pixels Per Unit and Pixels Per Unit, click Set Native Size on the Image Component to reset the image to its native size and observe the changes.
■ As the table above shows, the image's native size changes as the values change
■ From this the formula can be derived:
UI size = Pixels / (Pixels Per Unit / Reference Pixels Per Unit)
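To check the formula, a small sketch (assuming the 100×100 test image with Pixels Per Unit = 10 and Reference Pixels Per Unit = 100, which should give a native size of 1000×1000):

```csharp
using UnityEngine;
using UnityEngine.UI;

public class NativeSizeCheck : MonoBehaviour
{
    void Start()
    {
        Image image = GetComponent<Image>();
        image.SetNativeSize();
        // 100 pixels / (10 / 100) = 1000 per axis, with the values assumed above
        Debug.Log(image.rectTransform.sizeDelta);
    }
}
```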
Scale With Screen Size: scales according to the Reference Resolution
- Reference Resolution: preset screen size
- Screen Match Mode: scaling mode
Let’s start with the official algorithm
CanvasScaler.cs
```csharp
Vector2 screenSize = new Vector2(Screen.width, Screen.height);
float scaleFactor = 0;
switch (m_ScreenMatchMode)
{
    case ScreenMatchMode.MatchWidthOrHeight:
    {
        // We take the log of the relative width and height before taking the average.
        // Then we transform it back into the original space.
        // The reason to transform in and out of logarithmic space is to have better behavior.
        // If one axis has twice the resolution and the other has half, it should even out
        // if the widthOrHeight value is at 0.5.
        // In normal space the average would be (0.5 + 2) / 2 = 1.25
        // In logarithmic space the average is (-1 + 1) / 2 = 0
        float logWidth = Mathf.Log(screenSize.x / m_ReferenceResolution.x, kLogBase);
        float logHeight = Mathf.Log(screenSize.y / m_ReferenceResolution.y, kLogBase);
        float logWeightedAverage = Mathf.Lerp(logWidth, logHeight, m_MatchWidthOrHeight);
        scaleFactor = Mathf.Pow(kLogBase, logWeightedAverage);
        break;
    }
    case ScreenMatchMode.Expand:
    {
        scaleFactor = Mathf.Min(screenSize.x / m_ReferenceResolution.x, screenSize.y / m_ReferenceResolution.y);
        break;
    }
    case ScreenMatchMode.Shrink:
    {
        scaleFactor = Mathf.Max(screenSize.x / m_ReferenceResolution.x, screenSize.y / m_ReferenceResolution.y);
        break;
    }
}
```
A. Expand: enlarge the Canvas Size in width or height so that it is never smaller than the Reference Resolution; the calculation is:
scaleFactor = Mathf.Min(screenSize.x / m_ReferenceResolution.x, screenSize.y / m_ReferenceResolution.y);
That is, for width and height separately, compute what proportion of the Reference Resolution the Screen Size occupies, and take the smaller
For example, with a Reference Resolution of 1280×720 and a Screen Size of 800×600:
ScaleFactor Width: 800/1280 = 0.625
ScaleFactor Height: 600/720 = 0.83333
Apply the formula Canvas Size = Screen Size / ScaleFactor:
Canvas Width: 800/0.625 = 1280
Canvas Height: 600/0.625 = 960
The Canvas Size is 1280×960; the height grew from 720 to 960, i.e. maximum enlargement (all elements stay visible)
B. Shrink: reduce the Canvas Size in width or height so that it is never larger than the Reference Resolution; the calculation is:
scaleFactor = Mathf.Max(screenSize.x / m_ReferenceResolution.x, screenSize.y / m_ReferenceResolution.y);
That is, compute what proportion of the Reference Resolution the Screen Size occupies, and take the larger
For example, with a Reference Resolution of 1280×720 and a Screen Size of 800×600:
ScaleFactor Width: 800/1280 = 0.625
ScaleFactor Height: 600/720 = 0.83333
Apply the formula Canvas Size = Screen Size / ScaleFactor:
Canvas Width: 800/0.83333 = 960
Canvas Height: 600/0.83333 = 720
The Canvas Size is 960×720; the width shrank from 1280 to 960, i.e. maximum reduction
C. Match Width or Height: blend the scale between width and height; the calculation is:
float logWidth = Mathf.Log(screenSize.x / m_ReferenceResolution.x, kLogBase);
float logHeight = Mathf.Log(screenSize.y / m_ReferenceResolution.y, kLogBase);
float logWeightedAverage = Mathf.Lerp(logWidth, logHeight, m_MatchWidthOrHeight);
scaleFactor = Mathf.Pow(kLogBase, logWeightedAverage);
The width and height scale factors are converted to logarithms before being averaged. Why not blend width and height with Match linearly? Let's compare
Assume a Reference Resolution of 400×300 and a Screen Size of 200×600
The Reference Resolution width is twice the Screen Size width
The Reference Resolution height is 0.5 times the Screen Size height
It will look something like this
With Match at 0.5, the ScaleFactor should come out as 1 (the two axes even out)
ScaleFactor Width: 200/400 = 0.5
ScaleFactor Height: 600/300 = 2
Linear blending:
ScaleFactor = (1 − Match) × ScaleFactor Width + Match × ScaleFactor Height
ScaleFactor = 0.5 × 0.5 + 0.5 × 2 = 1.25
Logarithmic blending:
LogWidth: log2(0.5) = −1
LogHeight: log2(2) = 1
LogWeightedAverage: 0
ScaleFactor: 2^0 = 1
Linear blending gives 1.25 while logarithmic blending gives 1; clearly logarithmic blending corrects the size more accurately
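The comparison is easy to reproduce in code; a minimal sketch using the numbers above:

```csharp
using UnityEngine;

public class MatchAverageDemo : MonoBehaviour
{
    void Start()
    {
        float match = 0.5f;
        float widthScale = 200f / 400f;   // 0.5
        float heightScale = 600f / 300f;  // 2

        // Plain linear average: 1.25
        float linear = Mathf.Lerp(widthScale, heightScale, match);

        // Average in log2 space, then transform back: 2^0 = 1
        float logBlend = Mathf.Pow(2f,
            Mathf.Lerp(Mathf.Log(widthScale, 2f), Mathf.Log(heightScale, 2f), match));

        Debug.LogFormat("linear {0}, logarithmic {1}", linear, logBlend);
    }
}
```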
Constant Physical Size
Scales according to the hardware device's DPI (Dots Per Inch)
- 1. Physical Unit: the physical unit type
- 2. Fallback Screen DPI: the fallback DPI, used when the device DPI cannot be determined
- 3. Default Sprite DPI: the default image DPI
```csharp
float currentDpi = Screen.dpi;
float dpi = (currentDpi == 0 ? m_FallbackScreenDPI : currentDpi);
float targetDPI = 1;
switch (m_PhysicalUnit)
{
    case Unit.Centimeters: targetDPI = 2.54f; break;
    case Unit.Millimeters: targetDPI = 25.4f; break;
    case Unit.Inches:      targetDPI = 1;     break;
    case Unit.Points:      targetDPI = 72;    break;
    case Unit.Picas:       targetDPI = 6;     break;
}
SetScaleFactor(dpi / targetDPI);
SetReferencePixelsPerUnit(m_ReferencePixelsPerUnit * targetDPI / m_DefaultSpriteDPI);
```
Conclusion
■ ScaleFactor is the ratio of the current hardware DPI to the target unit's DPI
■ ReferencePixelsPerUnit is recomputed from the current DPI and then passed into the Canvas to calculate sizes, with the formulas:
New Reference Pixels Per Unit = Reference Pixels Per Unit × Target DPI / Default Sprite DPI
UI size = Pixels / (Pixels Per Unit / New Reference Pixels Per Unit)
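A worked example of the two formulas, with assumed values (a 326 DPI device, Physical Unit = Points, Reference Pixels Per Unit = 100, Default Sprite DPI = 96):

```csharp
using UnityEngine;

public class PhysicalSizeExample : MonoBehaviour
{
    void Start()
    {
        float dpi = 326f;              // assumed device DPI
        float targetDPI = 72f;         // Physical Unit = Points
        float referencePPU = 100f;     // Reference Pixels Per Unit
        float defaultSpriteDPI = 96f;  // Default Sprite DPI

        float scaleFactor = dpi / targetDPI;                                  // ~4.53
        float newReferencePPU = referencePPU * targetDPI / defaultSpriteDPI;  // 75
        Debug.LogFormat("scaleFactor {0}, new Reference Pixels Per Unit {1}",
            scaleFactor, newReferencePPU);
    }
}
```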
Docs.unity3d.com/Manual/clas…
Unity UGUI Principles part 3: RectTransform
The target
- 1. Understand the RectTransform Component
- 2. Anchor
- 3. Pivot
- 4. Blue Print Mode and Raw Edit Mode
Use environment and version
Windows 7
Unity 5.2.4
RectTransform
RectTransform is the 2D counterpart of the Transform Component. A Transform represents a single point, while a RectTransform represents a 2D rectangle (a region of UI space). If both the parent and the child have RectTransforms, the child's RectTransform defines the position, rotation, and size of its UI element relative to the parent
Anchor
The anchor (alignment point) of the object. If both parent and child have RectTransforms, the child can be aligned to the parent according to the Anchor, which is split into Min and Max positions: the 4 small triangles around the object in the figure below
When we drag the four triangles to adjust the Anchor, a helpful percentage readout appears: the child's proportions within the parent. With an image under a Canvas, Anchor Min and Anchor Max are both (0.5, 0.5), as shown on the left below
If the Anchor Min is adjusted to (0.3, 0.5) and the Anchor Max to (0.5, 0.7), as shown in the figure on the right below
Pos X, Pos Y, Width, Height then change to Left, Top, Right, Bottom
When the Anchors are at the same point, the Inspector shows the object's coordinates and size; when they are not at the same point (they form a rectangle), the Inspector instead shows the offsets from the anchor rectangle, as in the figure below (P.S. while moving the object, its current distances to the Anchor are helpfully displayed)
With an image under a Canvas whose Anchor Min and Anchor Max are both (0.5, 0.5), the object is aligned to the center of the parent; when the parent is resized, the result is as follows
With an image whose Anchor Min and Anchor Max are both (0.0, 1.0), the object is aligned to the upper left corner of the parent; when the parent is resized, the object stays fixed at the upper left corner
With an image whose Anchor Min is (0.0, 0.0) and Anchor Max is (1.0, 0.0), the object is aligned along the bottom edge of the parent; when the parent is resized, the object's width follows the parent
As these examples show, the child aligns to the parent according to the configured Anchor, and when the parent is resized the child is updated through the Anchor. As mentioned above, dragging the four triangles shows a helpful percentage readout; experienced readers will already know what it means. Take this example: Parent Size (400, 350)
Image Size (120, 105)
Anchor Min is (0.2, 0.5), Anchor Max is (0.5, 0.8)
When the Parent Size is halved: Parent Size (200, 175)
Image Size (60, 52.5): Image Width = 400 × 50% × 30% = 60, Image Height = 350 × 50% × 30% = 52.5; Anchor Min is still (0.2, 0.5), Anchor Max (0.5, 0.8)
So after the parent shrinks by half, it updates the child through the child's Anchor proportions. This is how different screen resolutions can automatically change the size and position of the UI
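Anchors can also be set from a script; a minimal sketch using the values from the example above (the component name is illustrative):

```csharp
using UnityEngine;

public class AnchorExample : MonoBehaviour
{
    void Start()
    {
        RectTransform rt = GetComponent<RectTransform>();
        rt.anchorMin = new Vector2(0.2f, 0.5f);
        rt.anchorMax = new Vector2(0.5f, 0.8f);

        // Zero offsets make the child exactly fill the anchor rectangle
        // (30% of the parent on each axis), so it resizes with the parent.
        rt.offsetMin = Vector2.zero;
        rt.offsetMax = Vector2.zero;
    }
}
```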
Anchor Presets
Click the icon in the upper left corner of the RectTransform to open the Anchor Presets tool, which lists commonly used Anchors for quick use. Hold Shift to set the Pivot along with the Anchor; hold Alt to set the position along with it.
Pivot
The object's own pivot point, which affects its rotation, scaling, and position. To change a UI Pivot, first enable the Pivot button in the toolbar, as shown below
(Left: Pivot (0.5, 0.5); right: Pivot (0, 1))
Blue Print Mode, Raw Edit Mode
Blue Print Mode ignores the object's Local Rotation and Local Scale, making it easy to adjust the object as if it had its original rotation and size. Raw Edit Mode: when the Pivot or Anchor is adjusted in the Inspector, the object keeps its current position and size, as shown below; note the values section
(Adjusting the Pivot in the Inspector / Adjusting the Anchor in the Inspector)
■ Unity — Manual: Basic Layout
Docs.unity3d.com/Manual/UIBa…
■ tsubakit1.hateblo.jp/entry/2014/…
Unity UGUI Principles part 4: Event System and event triggers
The target
- 1. Event System
- 2. Input Module input control
- 3. Graphic Raycaster
- 4. Physics Raycaster and Physics 2D Raycaster
Use environment and version
Windows 7
Unity 5.2.4
Event System
When a UI element is created, Unity automatically creates the Event System object for us. Based on mouse, touch, and keyboard input, this object dispatches events to objects. It carries three components: Event System Manager, Standalone Input Module, and Touch Input Module
1.Event System Manager
Controls all events and coordinates the mouse, touch, and Input Modules with the currently selected Object. The Event System receives a call every Update. If you press Play and select the Event System object, the Inspector shows the currently selected object, the pointer position, and the Camera that received the event
First Selected
The first Object to be selected on startup. For example, if an InputField is assigned here, the cursor is forced onto that InputField after pressing Play
Send Navigation Events
Whether to enable UI navigation, which lets the keyboard control selection with Up, Down, Left, Right, Cancel (Esc), and Submit (Enter)
For example, with several menu buttons on screen, we can set each button's Navigation option to Explicit to specify which object is selected when Up, Down, Left, or Right is pressed
Select On Up: the object to select when the Up key is pressed; Down, Left, and Right work the same way
Visualize button: click Visualize to draw yellow lines showing where each object's navigation points
Drag Threshold
The threshold for drag events; the lower the value, the more sensitive dragging is
2.Standalone Input Module
The input module for desktop platforms. It mainly handles mouse and keyboard input, uses the Raycasters in the Scene to work out which element was clicked, and dispatches the event
Horizontal Axis
The Horizontal Axis of the Input Module can be set to any axis name defined in the Input Manager; the same goes for Vertical Axis, Submit Button, and Cancel Button
Input Actions Per Second
The maximum number of key and click inputs processed per second
Repeat Delay
Delay of repeated input
Events execute the complete process
Keyboard input
1. Move Event: the axis input (left, right, up, down) is validated through the Input Manager and passed to the Selected Object
2. Submit and Cancel Button: when the object has been pressed, the Submit and Cancel inputs are validated through the Input Manager and passed to the Selected Object
Mouse input
1. If it is newly pressed
A. Send a PointerEnter event
B. Send a PointerPress event
C. Cache the drag-related data
E. Send a BeginDrag event
F. Set the Event System's Selected Object to the pressed object
2. If it is held down (dragging)
A. Handle movement
B. Send a Drag event
C. Handle the PointerEnter and PointerExit events of other objects crossed while dragging
3. If it is released
A. Send a PointerUp event
B. If the mouse is released over the same object it pressed, send a PointerClick event
C. If drag data was cached, send a Drop event
D. Send an EndDrag event
4. The mouse scroll wheel sends a Scroll event
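The order above is easy to verify by attaching a logger to any UI graphic and watching the Console; a minimal sketch (the class name is illustrative):

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Attach to a UI Graphic to watch the event order described above.
public class PointerFlowLogger : MonoBehaviour,
    IPointerDownHandler, IBeginDragHandler, IDragHandler,
    IEndDragHandler, IPointerUpHandler, IPointerClickHandler
{
    public void OnPointerDown(PointerEventData e)  { Debug.Log("PointerDown"); }
    public void OnBeginDrag(PointerEventData e)    { Debug.Log("BeginDrag"); }
    public void OnDrag(PointerEventData e)         { Debug.Log("Drag"); }
    public void OnEndDrag(PointerEventData e)      { Debug.Log("EndDrag"); }
    public void OnPointerUp(PointerEventData e)    { Debug.Log("PointerUp"); }
    public void OnPointerClick(PointerEventData e) { Debug.Log("PointerClick"); }
}
```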
3.Touch Input Module
The touch input module, used mainly on mobile devices. It responds to touches and drags, uses the Raycasters in the Scene to work out which element was touched, and dispatches the event
Events execute the complete process
Same as the Standalone Input Module's mouse input above; read the clicks there as touches
4. Event System trigger flow
1. The user provides input (mouse, touch, keyboard)
2. The Event System Manager decides between the Standalone and Touch Input Modules
3. The chosen Input Module uses the Raycasters to work out which element in the Scene was hit
4. The event is sent
Graphic Raycaster
Component location: Unity Menu Item → Component → Event → Graphic Raycaster
One of the components created along with the Canvas object. The Graphic Raycaster examines all graphics under the Canvas and detects whether they are hit. A Raycaster casts an invisible ray from a given position in a given direction and determines whether any collider lies along it (this is explained in detail in the official docs); here it is used to determine whether a UI graphic is selected
Ignore Reversed Graphics:
Whether graphics facing away from the screen are ignored during ray detection
For example, when a graphic's Y axis is rotated 180 degrees it faces away from the screen; if this option is checked, the graphic is ignored and not detected
Blocked Objects, Blocking Mask:
When the Canvas Component's Render Mode is World Space or Screen Space – Camera, 3D or 2D objects in front of the UI can block the ray from reaching the UI graphics
Blocked Objects: the Object types that block the ray
Blocking Mask: the checked Layers block the ray
For example, if a Button on screen deliberately overlaps a Cube, clicking the overlap shows that the Button is still triggered
If the Cube's Layer is changed to Test01, Blocked Objects is set to Three D, and only Test01 is checked in the Blocking Mask, clicking the overlap again shows that the Cube now blocks the ray detection: the Button receives no ray and, of course, does not respond
Physics Raycaster
Component location: Unity Menu Item → Component → Event → Physics Raycaster
Detects 3D GameObjects (which must have a Collider Component) in the Scene through the Camera. Objects that implement Event Interfaces receive Message notifications; for example, a 3D GameObject can receive Drop events or Drag events (see the official Supported Events list for more)
Let’s look at examples
1. Create an EventSystem to process events
Object location: Unity Menu Item → GameObject → UI → EventSystem
2. Add a Physics Raycaster Component to the Camera to perform the ray detection
3. Implement the Event Interfaces. There are two approaches: create a Script that implements the Interfaces directly, or use the Event Trigger Component
The first: create a Script that directly implements the Interfaces
A. Create a Script that implements the Event Interfaces
EventTest.cs
```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class EventTest : MonoBehaviour, IPointerDownHandler
{
    public void OnPointerDown(PointerEventData eventData)
    {
        print(gameObject.name);
    }
}
```
Line 2: import the namespace with using UnityEngine.EventSystems
Line 4: implement the event interface, here IPointerDownHandler (see the official Supported Events list for more events)
Lines 6–9: the implemented method, which receives a PointerEventData as the event data
B. Create a 3D object (this is called a Cube) and add a BoxCollider Component
C. Attach the Script to the Cube. The Inspector displays Intercepted Events, showing the monitored events
D. Clicking the Cube calls the OnPointerDown method and passes in the event data
The second: use the Event Trigger Component to implement the Interfaces
A. Create a Script that implements the methods to be notified by the Event Trigger
EventTriggerTest.cs
```csharp
using UnityEngine;
using UnityEngine.EventSystems;

public class EventTriggerTest : MonoBehaviour
{
    // Receives the event data dynamically via BaseEventData
    public void OnPointerDown(BaseEventData eventData)
    {
        print("OnPointerDown--BaseEventData");
    }

    // Parameterless variant
    public void OnPointerDown()
    {
        print("OnPointerDown--non");
    }

    // int parameter variant
    public void OnPointerDown(int i)
    {
        print("OnPointerDown--int");
    }
}
```
Line 2: import the namespace with using UnityEngine.EventSystems
Line 7 onward: the implemented methods; there are three variants here
B. Create a 3D object (this is called a Cube) and add a BoxCollider Component
C. Place Script under Cube
D. Add an Event Trigger Component to the Cube; it receives events from the Event System and calls the configured methods
Component location: Unity Menu Item → Component → Event → Event Trigger
E. Click Add New Event Type to select the Event Type to be implemented. This section uses PointerDown as an example
F. A UnityEvents field is added; this is the editor-configurable way to set which methods and properties are notified when the event fires. For details, see the following
· Random Words — Unity: Use UnityEngine.Events to make your application more flexible and stable
Unity – Manual: UnityEvents
Click the "+" button and drag the Scene GameObject to be notified into the field. Unity Event looks up all public methods and properties on that GameObject, letting you configure the methods to call and the properties to modify when the event fires.
G. Drag the Cube into the GameObject field and set the notified methods to the three methods in the Script
H. Clicking the Cube triggers PointerDown and notifies the three methods in the Script
4. Implementation points to note:
■ The Scene must have an EventSystem GameObject
■ The Camera must have a Physics Raycaster Component
■ 3D GameObjects must have a Collider Component
■ Event Interfaces can be implemented either by a Script or through the Event Trigger Component. The Event Trigger approach can be configured in the editor, setting the methods to notify and the properties to modify when triggered, which is more flexible
Physics 2D Raycaster
Component location: Unity Menu Item → Component → Event → Physics 2D Raycaster
The only difference is that the Physics 2D Raycaster detects 2D GameObjects in the Scene, which must have a Collider2D Component
Postscript: Through the relationship between Raycasters and the input modules, we now understand the Event System's trigger flow and how to implement and apply events. Whether for 3D, 2D, or UI objects, events can be applied easily, which greatly speeds development and simplifies code. A very convenient feature
■ Unity — Manual: Event System
Docs.unity3d.com/Manual/Even…
■ Unity – Manual: UnityEvents
Docs.unity3d.com/Manual/Unit…
■ Unity – Raycasting
Unity3d.com/cn/learn/tu…
■ Random Words — Unity: Use UnityEngine.Events to make your application more flexible and stable
godstamps.blogspot.tw/2015/10/uni…
Unity UGUI Principles (5) : Auto Layout
The target
- 1. Understand the Auto Layout System
- 2. Layout Element size settings
- 3. Horizontal, Vertical, and Grid Layout Group element arrangement
- 4. Content Size Fitter and Aspect Ratio Fitter size control
Use environment and version
Windows 7
Unity 5.2.4
Auto Layout System
The Auto Layout System is built on top of the Rect Transform layout system and automatically adjusts the size, position, and spacing of one or more elements. It consists of Layout Controllers and Layout Elements. A simple Auto Layout setup is described below.
Layout Elements (Child objects)
After selecting a UI element, you can expand Layout Properties at the bottom of the Inspector to see this information
Layout Controllers assign each child object's size from its Layout Element values, following these basic principles:
First, allocate the Minimum Size
If there is enough space, allocate the Preferred Size
If there is extra space, allocate the Flexible Size
The following image shows how an element's width grows (only the theory here; implementation comes later)
First the Minimum Size is allocated (up to 300, red)
If there is enough space, the Preferred Size is allocated (300 to 500, green)
If there is extra space, the Flexible Size is allocated (Flexible Size: 1, 500 to 700, blue)
Flexible is special: it represents a proportion of the remaining size. If two objects in a Layout have Flexible set to 0.3 and 0.7, their shares take on a 3:7 ratio, as in the figure below
Note also that Text and Image components are automatically assigned a Preferred Size based on their content
Layout Controllers (parent objects)
Layout Group
A Layout Group does not control the size of the Layout Controller itself; it controls the sizes and positions of its child objects. In most cases it allocates the appropriate space based on each element's minimum, preferred, and flexible sizes. Layout Groups can also be nested. They come in Horizontal, Vertical, and Grid variants
Horizontal Layout Group
Arranges child objects horizontally in a row
Component location: Unity Menu Item → Component → Layout → Horizontal Layout Group
Padding: the padding inside the group
Spacing: the spacing between elements
Child Alignment: how child objects align when the space is not completely filled
Child Force Expand: force child objects to expand to fill the space
Understand each parameter through examples:
A. Open New Scene Unity Menu Item → File → New Scene
B. Add a Canvas Unity Menu Item → GameObject → UI → Canvas
C. Under the Canvas, add an empty object to act as the Layout Controller
D. Add Horizontal Layout Group Component Unity Menu Item → Component → Layout → Horizontal Layout Group to parent object
E. Create 5 Buttons (child objects) under the parent object
F. The Buttons' Rect Transform Components can no longer be adjusted, because the Horizontal Layout Group now allocates their space; the Rect Transform shows which Layout Group currently controls it
G. Adjust the Padding value to see the filled area
H. Adjust Spacing to show the Spacing
I. Next, add a Layout Element Component to each of the 5 Buttons to override the default sizes; it is used to set each element's size manually. Component location: Unity Menu Item → Component → Layout → Layout Element
J. Uncheck the Horizontal Layout Group's Child Force Expand Width so that child objects are no longer forced to fill the extra space; the sizes will instead be set manually through the Layout Elements
K. How does the Horizontal Layout Group use the Layout Element sizes to allocate the child objects?
■ Review the Layout Elements section above if you are not sure
First, assign the Minimum Size
If there is enough space, assign Preferred Size
If you have extra space, assign the Flexible Size
■ Change the Layout Element Min Width of the five Buttons to 20, 30, 40, 50, and 60 respectively. No extra available space is allocated yet
Change the Horizontal Layout Group's Child Alignment to see how the elements align
The parent's Layout Properties Min Width = the 5 Button widths (20+30+40+50+60 = 200) + Spacing (40) + Padding Left and Right (20) = 260
■ Now adjust the first Button's Layout Element as in the figure below
Set its Preferred Width to 100
1. The Minimum Size (20) is allocated
2. If there is enough space, the remaining Preferred Size (the 20 to 100 range) is allocated, as shown below
■ Now adjust the first Button's Layout Element as in the figure below
Set its Flexible Width to 1
1. The Minimum Size (20) is allocated
2. If there is enough space, the remaining Preferred Size (the 20 to 100 range) is allocated
3. If there is extra space, the remaining Flexible Size is allocated, as shown below
■ Now check the Horizontal Layout Group's Child Force Expand Width to force the child objects to fill the space
1. The Minimum Size (20) is allocated
2. If there is enough space, the remaining Preferred Size (the 20 to 100 range) is allocated
3. If there is extra space, it is shared among the elements' Flexible Sizes and the Child Force Expand Width
Vertical Layout Group
Arranges child objects vertically; its parameters are the same as the Horizontal Layout Group's
Component location: Unity Menu Item → Component → Layout → Vertical Layout Group
Grid Layout Group
Arranges child objects in a grid
Component location: Unity Menu Item → Component → Layout → Grid Layout Group
Padding: the padding inside the group
Cell Size: the width and height of each element
Spacing: the spacing between elements
Start Corner: the corner the first element starts from; watch the element numbering carefully
Start Axis: arrange along the horizontal or vertical axis first; watch the element numbering carefully
Child Alignment: how child objects align when the space is not completely filled
Constraint: grid constraints
Flexible: arrange flexibly and automatically according to size
Fixed Column Count: a fixed number of columns
Fixed Row Count: a fixed number of rows
Layout Fitter
Controls the size of the Layout Controller itself, based either on the child objects or on a configured aspect ratio. There are two kinds: Content Size Fitter and Aspect Ratio Fitter
Content Size Fitter
Controls the size of the Layout Controller itself based on the Minimum or Preferred Size of its child objects; the growth direction can be changed through the Pivot
Unity Menu Item → Component → Layout → Content Size Fitter
Horizontal Fit, Vertical Fit: how to fit in the horizontal and vertical directions
None: do not adjust
Min Size: adjust to the child objects' Minimum Size
Preferred Size: adjust to the child objects' Preferred Size
Understand through examples:
Suppose we now need the parent object's size to scale with the child objects' sizes, as below (a black frame is added to make the parent's size easier to see)
A. Open New Scene Unity Menu Item → File → New Scene
B. Add a Canvas Unity Menu Item → GameObject → UI → Canvas
C. Under the Canvas, add an empty object to act as the Layout Controller
D. Add Horizontal Layout Group Component Unity Menu Item → Component → Layout → Horizontal Layout Group to parent object
The Horizontal Layout Group allocates the child objects' sizes from their Layout Elements, as below (black frame added)
E. Add a Button under the parent object and give it a Layout Element Component to override the default size; set its Minimum Width to 100. Component location: Unity Menu Item → Component → Layout → Layout Element
F. Add a Content Size Fitter Component to the parent and set Horizontal Fit to Min Size, so the parent's size follows the child's Minimum Width
G. If the Button is duplicated, the parent object's own size changes, as below
H. Adjust the parent object's Pivot to control the growth direction, as below
I. To recap: we first used a Horizontal Layout Group to lay out the child objects, added Layout Elements to the children to override their default sizes, and finally used a Content Size Fitter to size the parent from the children's Layout Elements. The parent's size now scales with the children's sizes (a script version of this setup follows below)
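The same setup can be built from code; a sketch under the assumptions above (the prefab field and class name are illustrative):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Illustrative sketch: a parent that grows with its children.
public class DynamicListBuilder : MonoBehaviour
{
    public GameObject buttonPrefab; // a Button with a Layout Element (Min Width = 100)

    void Start()
    {
        var layout = gameObject.AddComponent<HorizontalLayoutGroup>();
        layout.childForceExpandWidth = false;

        var fitter = gameObject.AddComponent<ContentSizeFitter>();
        fitter.horizontalFit = ContentSizeFitter.FitMode.MinSize;

        // Each new child makes the parent grow, just as in the duplication example above.
        for (int i = 0; i < 3; i++)
        {
            var button = Instantiate(buttonPrefab);
            button.transform.SetParent(transform, false);
        }
    }
}
```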
Aspect Ratio Fitter
Controls the size of the Layout Controller itself, adjusting it according to the object's aspect ratio; the growth direction can be changed through the Pivot
Component location: Unity Menu Item → Component → Layout → Aspect Ratio Fitter
Aspect Mode: Adjustment Mode
None: does not adjust
Width Controls Height:
Height is derived from Width by the ratio; when Width changes, Height changes proportionally
Height Controls Width:
Width is derived from Height by the ratio; when Height changes, Width changes proportionally
Fit In Parent: the anchors, width, height, and position are adjusted automatically so the image fits entirely inside the parent while keeping its ratio; this mode may leave some of the parent's space uncovered
Adjust the ratio (black background added to make the parent easier to see)
Resize the parent object: the object scales to fit inside the parent
Envelope Parent: the anchors, width, height, and position are adjusted automatically so the image completely covers the parent while keeping its ratio; this mode may extend outside the parent's space
Adjust the ratio (black frame added to make the parent easier to see)
Resize the parent object: the object scales to cover the parent
Aspect Ratio: the ratio, width / height
The difference:
Content Size Fitter resizes automatically from the child objects
Aspect Ratio Fitter resizes from a numeric value (the Aspect Ratio)
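A minimal sketch of the Aspect Ratio Fitter from code (a 16:9 ratio is assumed; the class name is illustrative):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Keeps an element at a fixed 16:9 ratio, height derived from width.
public class KeepAspect : MonoBehaviour
{
    void Start()
    {
        var fitter = gameObject.AddComponent<AspectRatioFitter>();
        fitter.aspectMode = AspectRatioFitter.AspectMode.WidthControlsHeight;
        fitter.aspectRatio = 16f / 9f; // Aspect Ratio = width / height
    }
}
```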
Postscript: The Auto Layout System arranges multiple UI elements quickly and conveniently and adjusts the content automatically when sizes change. Layout groups can also be nested in multiple layers, which makes later adjustment and modification very convenient and intuitive. It is one of the essential features of the UI system.
■ Unity — Manual-Auto Layout
Docs.unity3d.com/Manual/UIAu…
■ Unity – Manual- Auto Layout_UI Reference
Docs.unity3d.com/Manual/comp…
Unity: UGUI automatically resizes and positions various screens
UGUI adaptive
I published two earlier articles on automatic GUI resizing and positioning. Since the release of Unity's new GUI system (UGUI) in version 4.6, creating UI in Unity has become much easier and less dependent on third-party UI tools. There have also been innovations in the event system: UI events and components are more visual and less coupled to each other, making them more flexible to use. While UGUI has solved many of the old UI-creation problems, there are still areas the user must adjust during actual development.
Since the launch of Unity 4.6, UGUI game objects don't use the ordinary Transform; they use a Rect Transform, with many fields for width, height, Pivot, and so on. These fields are a great help in visual UI design, and the Anchors give the whole UI far more powerful and flexible control across different screen proportions.
If you use Free Aspect in Unity's Game View and drag the window edges to change the aspect ratio, you can see that the UGUI itself does not reposition or zoom. If you change 320×480 to 480×320, your UI may be cut off at the edges. Well-placed Anchors let elements such as UI images or buttons change their width and height with the screen proportions, but the text does not change size. With "Best Fit" checked, UI elements such as buttons and images still change their width-to-height ratio while the text only changes its Font Size; so when the proportions change, the images deform but the text does not, and the spacing between the text and other UI, or between a button's edge and its label, can change in odd ways. None of this is what we want. Ideally, when the screen proportions change, UI images, buttons, and text should all keep their original aspect ratio and scale their size and position automatically.
Normal portrait view
The picture is cut off when switched to landscape
Set Anchors for the UI
In landscape, the UI automatically deforms and adjusts, but the text does not
When the first UGUI game object is created in a scene, a Canvas game object is automatically created first, preset with three components: Canvas, Canvas Scaler, and Graphic Raycaster. The Canvas Scaler controls the overall scaling of the UGUI, so the basic goal of scaling the UI at its original proportions can be achieved with the following steps:
- 1. Leave all UI Anchors at the default value (0.5).
- 2. Set the Canvas Scaler's UI Scale Mode field to Scale With Screen Size.
- 3. In the Reference Resolution field, enter the width (X) and height (Y) of the base resolution.
- 4. Select Match Width Or Height in Screen Match Mode.
- 5. If the screen is landscape, enter 0 in the Match field; if it is portrait, enter 1.
Leave Anchors at 0.5
Sets the related fields of the Canvas Scaler
This way, when the screen changes from portrait to landscape, the UI automatically keeps its original proportions and adjusts its size and position.
Turn to landscape, the scale and position of the UI remain unchanged
However, this is only ideal when the proportions match and the screen simply flips between landscape and portrait. With a screen of different proportions, for example a 640×960 UI layout designed for the iPhone 4 shown on a 640×1136 iPhone 5 screen, the left and right sides get cut off.
When the proportions are different, the edges are cut off
In that case we must change the Match field of the Canvas game object's Canvas Scaler Component to 0 so that the UI scales to the screen at its original proportions.
Adjust the Match of the Canvas Scaler to bring the cut part back into the screen.
Since we cannot manually prepare for every screen proportion we might encounter (especially on Android and desktop platforms), we can attach a Component containing the following code to the Canvas so that the Canvas Scaler Component's Match field is adjusted automatically:
```csharp
using UnityEngine;
using UnityEngine.UI;

// Attach to the Canvas that carries the Canvas Scaler (class name is illustrative)
public class CanvasScalerMatcher : MonoBehaviour {

    void Awake(){
        CanvasScaler canvasScaler = GetComponent<CanvasScaler>();
        float screenWidthScale = Screen.width / canvasScaler.referenceResolution.x;
        float screenHeightScale = Screen.height / canvasScaler.referenceResolution.y;
        // Match on the axis that overflows, so the whole layout stays on screen
        canvasScaler.matchWidthOrHeight = screenWidthScale > screenHeightScale ? 1 : 0;
    }
}
```
This makes things much easier from now on. Unless there are special needs, you almost never need to touch the UI's Anchors; most work is simply adjusting the size and position of the UI, and the original layout adjusts automatically for all kinds of screen proportions. If we need dynamic UI scaling and movement to make the screen livelier, we mainly change values in local space, and we don't need to think about the scaling part: the Canvas Scaler handles it for us.
However, there is one exception this method cannot currently solve: when UI Text uses the Rich Text size tag to specify the text size, the text is not affected by the Canvas Scaler, because the tag specifies an absolute text size. Pay special attention to this.
By the way, beyond UI layout and changes, sometimes you want particle effects on the UI, or non-UI game objects interacting with the UI on screen. Just change the Canvas Component's Render Mode field to Screen Space – Camera and set the Render Camera and Plane Distance; objects placed between the UI plane and its designated Camera can be displayed over the UI and interact with it. For Sprite game objects, the Sorting Layer can be adjusted relative to the UI. However, the size and position of these non-UI game objects may differ from expectations when the UI adjusts itself automatically; to avoid this, refer to the earlier article "Unity: Automatically adjust the zoom and position for various screen proportions" (written for Unity 4.3, not reproduced here) to adjust the Camera.
Change the Canvas Render Mode to Screen Space – Camera
PS: The Unity version at the time of writing is 5.0.1f1.
Unity: Create UI flow management mechanism for UGUI
Why is a UI process management mechanism needed
Since the release of Unity's new GUI system in 4.6, Unity finally has a fairly complete visual UI editing tool. We can easily and intuitively add buttons to the screen and, with a few actions such as drag-and-drop and drop-down menus, configure which Component on which GameObject a UI event should invoke, so it is easy to trigger our own code from the UI. But a game can have quite a few screens, and different UI buttons or behaviors lead to different screens and open different views. Without planned rules for UI screen flow, complicated transitions between screens become hard to maintain, and repeated future changes can easily introduce unnecessary bugs and turn the UI flow into a mess.
No matter how beautiful and polished the UI is, and no matter how well the feedback, layout, and effects are done, if the overall structure and flow are wrong, all that fine work is wasted: a confusing flow gets players lost and feels uncomfortable to use. And even when the flow design itself is fine, having no consistent flow management easily causes problems in production and maintenance.
For example, most UI screens have a "back" button that returns to the previous screen: entered from screen A, it should go back to A; entered from screen B, back to B, just like a web browser, where pressing "back" always returns to the page you came from. The back button on a UI works the same way, except that it is a button that calls a function we configure on some Component to open a UI screen. So we intuitively hard-code it: A enters B, so B's back button opens A. But what if C can also enter B? Then we keep a temporary record to judge whether B was entered from A or C, or tell B where it came from as it opens. First problem: whenever more screens can enter B, B's back button logic must be modified to judge correctly. Second problem: suppose A and B can both enter C, C can enter A and D, and A and D can both enter B, and so on with ever deeper crossing flows. Adding one UI screen, or changing one flow, means modifying the back key of almost every screen. The effort is too large, and changes that ripple through everything are the most likely to produce unknown bugs.
For this reason, the UI process management mechanism designed should meet at least two conditions:
- The player will not get lost when using it.
- No matter how many screens are changed in the future, there is no need to modify or maintain the back button.
The most direct way to satisfy these two conditions is the web browser's "back" function mentioned earlier. How does it work? The concept is browsing history: we simply record, in order, the UI screens the player has switched through while operating, and return through them in reverse order. The player never gets lost, and we never need to customize a back button for each screen during production and maintenance.
Now that the concept of the process is clear, there are some other things to note:
- Entry and exit animations between UI screens.
- UI events must not fire while a screen transition animation is playing.
- A UI screen that has exited should not keep running even though it is no longer visible on the game screen.
With all that said, the most important thing is the implementation. The following video shows the whole implementation process without detailed narration, so the article below explains it further:
The basic structure of the UI canvas
Decide what proportions and resolution to use for the UI, usually the same as the Game View's preview setting
Create the main UI Canvas and set UI Scale Mode and Reference Resolution in its Canvas Scaler; this enables the basic adaptive adjustment of the UI on devices with different screen proportions later
Next, give each UI screen its own Canvas and place the screen's other UI elements under it, so the Canvas Scaler settings on the main Canvas also apply directly to every UI screen beneath it. Another advantage is that we can then change each Canvas's Sort Order directly to arrange the screens front to back. Besides the UI elements, each screen also needs an Image acting as a full-screen "transparent occlusion layer", placed above the other UI elements. Its GameObject is normally disabled and only enabled while the UI screen's entry and exit animations play, which protects the UI Buttons from being clicked mid-transition.
Animation
The simplest approach is fade in/out or zoom in/out; richer animations can of course be designed. The most important thing is to enable the "transparent occlusion layer" created above while the screen's animation is playing.
(Enable the "transparent occlusion layer" during animation playback)
Set animation controls
After creating the entry and exit animation files for a UI screen, open the Animator view; Unity has automatically created two animation states in the Animator with the same names as the animation files.
By default, Unity automatically puts the AnimationClip in the Animator state
Next, connect a Transition from the Open state to the Closed state and set the transition conditions in the Inspector view. Unity defaults to a blended crossfade between two animation states, and this blending sacrifices some of the animation's own playback time; UI screen transitions don't need it, so set the Transition Duration to 0 so the Closed animation plays in full. And since Open should not switch to Closed as soon as it finishes playing, but rather wait until the Animator's Out trigger fires, uncheck Has Exit Time and add Out as a transition condition in Conditions.
Set the related fields in Transition
For the Closed state's outgoing transition, set Transition Duration to 0 and Exit Time to 1 in the Inspector view. In theory, once Closed finishes playing, the UI screen has completed its exit and needs no further attention until it next appears. However, to stop off-screen UI from continuing to run and wasting resources, we want the whole GameObject disabled after closing, so we will later write a StateMachineBehaviour script for the Animator state to do this. StateMachineBehaviour's OnStateExit is only called "after" the state ends, so if no Transition led out of Closed, OnStateExit would never run. Also, Exit Time ranges from 0 to 1 across the whole animation, and we want Closed to end exactly at the end of the animation; that is why Exit Time is set to 1.
Set the correct time value
Since animation files created in Unity loop by default, locate the Open and Closed animation files and uncheck Loop Time in the Inspector view.
(Uncheck Loop Time)
Animation state script
The most important part is that when a UI screen exits, its GameObject can be disabled at the same time. In the past we would put our own Script on the GameObject and add events in the Animation view while making the animation file. Since Unity 5.0 there is no need for that: select any state in the Animator view and click the Add Behaviour button in the Inspector view to create a Script specific to that Animator state, then write what should happen when the state starts, runs, and ends. So, to disable the GameObject after the UI screen exits, just give Closed a StateMachineBehaviour and add one short line of code to OnStateExit.
Add Behaviour to Animator State
```csharp
using UnityEngine;

public class UIStateClosed : StateMachineBehaviour {

    // Called after the Closed state ends; disables the whole UI screen
    override public void OnStateExit(Animator animator, AnimatorStateInfo stateInfo, int layerIndex) {
        animator.gameObject.SetActive(false);
    }
}
```
UI process management scripts
Now all that's left is the main management Script. What does it do? Listed as follows:
- Keep a list recording the UI screen flow, storing entered screens in order so returning is easy.
- The first opened UI screen is recorded in the history; when only one entry remains, it cannot go back further.
- The same screen as the current one cannot be entered again.
- The target screen being entered or returned to must be moved to the top.
- When moving forward, the target screen must be added to the history.
- When going back, the current screen must be removed from the history.
Also, as set up above, this Script notifies the Animator to enter the Closed state via the trigger name defined in the Animator's Parameters, so it also needs a public field for setting that name.
With the above working requirements, we can write the following code:
```csharp
using UnityEngine;
using System.Collections.Generic;

public class UIManager : MonoBehaviour {

    public GameObject startScreen;
    public string outTrigger;

    private List<GameObject> screenHistory;

    void Awake(){
        this.screenHistory = new List<GameObject>{this.startScreen};
    }

    public void ToScreen(GameObject target){
        GameObject current = this.screenHistory[this.screenHistory.Count - 1];
        if(target == null || target == current) return;
        this.PlayScreen(current, target, false, this.screenHistory.Count);
        this.screenHistory.Add(target);
    }

    public void GoBack(){
        if(this.screenHistory.Count > 1){
            int currentIndex = this.screenHistory.Count - 1;
            this.PlayScreen(this.screenHistory[currentIndex], this.screenHistory[currentIndex - 1], true, currentIndex - 2);
            this.screenHistory.RemoveAt(currentIndex);
        }
    }

    private void PlayScreen(GameObject current, GameObject target, bool isBack, int order){
        // Tell the current screen's Animator to play its exit animation
        current.GetComponent<Animator>().SetTrigger(this.outTrigger);
        if(isBack){
            current.GetComponent<Canvas>().sortingOrder = order;
        }else{
            current.GetComponent<Canvas>().sortingOrder = order - 1;
            target.GetComponent<Canvas>().sortingOrder = order;
        }
        target.SetActive(true);
    }
}
```
Practical application
After the work and code above, just create an empty GameObject, attach the UIManager script, drag the first UI screen's GameObject into the Start Screen field, and fill Out Trigger with the same name as the trigger parameter defined in the Animator.
Set the Start Screen and Out Trigger fields
Then, in each UI Button's On Click settings, reference the GameObject holding the UIManager script. If the button goes to the next screen, select ToScreen from the drop-down menu and set the target UI screen's GameObject in its parameter field; if the button returns to the previous screen, select GoBack from the drop-down menu.
Go to the next UI screen and select ToScreen
Set up the GameObject for the next UI screen
GoBack to the previous UI screen and select GoBack
Now, on execution, every screen entered is recorded in the flow, each return goes back in order to the previous screen, every displayed UI screen appears on top of the game view without being covered, and buttons cannot be accidentally pressed during entry and exit animations to jump to unexpected screens. Because the flow is defined as "return to wherever you came from", users browsing many screens operate more intuitively and don't get lost. And because the screen history is recorded and the back button is handled through GoBack, no matter how many new UI screens are added in the future, or how the UI flow changes, the return function needs no modification and still returns to the correct screen.
This simple UI flow management mechanism is now complete. To change the UI transition logic (for example, the sorting rules of the target screen), only PlayScreen in UIManager needs to change; to switch to different entry/exit animations, only the motion of each state in the Animator needs replacing; and if multiple versions of UIManager exist in the same project or scene, the Buttons' On Click settings in the UI can simply be re-pointed. The whole mechanism is quite flexible, players won't get confused while operating it, and later maintenance is easier.
Case project: pan.baidu.com/s/1sk9bnGh Password: do7y
PS: The Unity version at the time of writing is 5.1.0f3
Unity: Use UnityEngine.Events to make your program more flexible and stable
Since the release of the new GUI system in Unity 4.6, the UI controls we create expose event fields. For example, create a Button and you can specify, in the On Click field of the Inspector window, which Component method on which GameObject runs when the button is clicked, making button events visual and flexible to edit. Other UI controls have similar fields. These fields are produced by the UnityEvent type under UnityEngine.Events, and our own Components can provide such fields too, allowing visual editing and making programs more flexible.
The video demonstrates the practice with two examples; the visual manipulation and narration make it easier to follow
First, a brief explanation of Unity scripting basics: when we create our own Script in the Project window and attach it to a GameObject, it becomes a Component of that GameObject. In general, fields declared with the public modifier whose types are serializable appear in the Inspector window with editable values, because Unity automatically serializes public fields.
The public variable field appears in the Inspector window for editing.
However, the public modifier is not appropriate if a field in the class is not meant to be accessed externally. When using other modifiers such as private or protected, if you still want the field editable in the Inspector window, add the SerializeField attribute on the line above it; Unity then knows to serialize the field so it can be edited in the Inspector window.
The private variable field with SerializeField also appears in the Inspector window.
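A minimal sketch showing both cases (the class and field names are made up for illustration):
using UnityEngine;
public class SerializeDemo : MonoBehaviour {
    // A public field is serialized automatically and shows up in the Inspector.
    public int publicValue = 1;
    // A private field stays hidden unless it is marked with SerializeField.
    [SerializeField]
    private int editablePrivateValue = 2;
}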
If you want to create an event field like the one on a UGUI Button, the basic approach is to add using UnityEngine.Events at the top of your script file, and then declare a field of type UnityEvent. The Inspector will then show an event field just like a Button's.
A variable field of type UnityEvent appears in the Inspector window as an event field.
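For example, a minimal Component exposing such a field might look like this (a sketch; the MyEvents name and onSomething field are assumptions):
using UnityEngine;
using UnityEngine.Events;
public class MyEvents : MonoBehaviour {
    // Shows up in the Inspector as an event list, just like a Button's On Click.
    [SerializeField]
    private UnityEvent onSomething;
    // Raise the event from code; every target set in the Inspector is called.
    public void RaiseSomething(){
        this.onSomething.Invoke();
    }
}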
The UnityEvent field can be edited flexibly in the editor. When the target Component contains properties or methods declared public, and the parameter is of type bool, int, float, or string, you can select it and supply the value directly here. You can set multiple targets of different types, all of which are called when the event executes. A method with no parameters can also be selected here; a method that declares two or more parameters will not appear in the menu at all.
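A hypothetical sketch of those menu rules (the MenuRules class is made up; only the first two methods would be offered as event targets):
using UnityEngine;
public class MenuRules : MonoBehaviour {
    // No parameter: selectable as an event target.
    public void Ping(){ Debug.Log("ping"); }
    // One string parameter: selectable, with the value typed into the Inspector.
    public void Say(string text){ Debug.Log(text); }
    // Two parameters: does not appear in the menu.
    public void Say2(string a , string b){ Debug.Log(a + b); }
}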
Although the UnityEvent field is quite convenient and flexible, the value passed in is set in the editor, and only one fixed value can be set. If our Component wanted to supply parameter values to the target at runtime, a plain UnityEvent cannot do that. For this, UnityEngine.Events provides four additional generic classes for us to extend.
As you can see in the two examples in the video, the PassEvents Script is used to declare new event types that extend the UnityEvent generic classes. According to the official UnityEngine.Events documentation, these are UnityEvent&lt;T0&gt;, UnityEvent&lt;T0,T1&gt;, UnityEvent&lt;T0,T1,T2&gt; and UnityEvent&lt;T0,T1,T2,T3&gt;. That is, by inheriting from these generic classes we can declare UnityEvent types with the parameter types we need, holding anywhere from one to four parameters. For example:
[System.Serializable]
public class PassString : UnityEvent<string> {}

[System.Serializable]
public class PassColor : UnityEvent<Color> {}
Because a class used to declare field variables must be serializable in order to show in the Inspector window, in addition to marking the field with SerializeField, the class itself must be marked with System.Serializable.
At this point, we can declare event fields using our own UnityEvent-derived types and make them visible in the Inspector window.
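For example, a sketch of a Component using the PassString type declared above (the Greeter name and the fixed "hello" string are only for illustration):
using UnityEngine;
public class Greeter : MonoBehaviour {
    // A custom one-string-parameter event field, editable in the Inspector.
    [SerializeField]
    private PassString onGreet;
    // When invoked, the string reaches every target set in the Inspector.
    public void Greet(){
        this.onGreet.Invoke("hello");
    }
}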
1. In the first example, a simple calculator is made to demonstrate the use and benefits of UnityEngine.Events. Before making anything, we must first understand its purpose and functions, so let's start from the operation of a simple UI: it provides two input fields, a text showing the operation symbol, a text showing the calculation result, and four operation function buttons.
Here, we first define the basic functions of the calculator:
- Click the operation button to change the text of the operation symbol according to the type of operation.
- After clicking the operation function button, the text of the calculation result is displayed in the UI.
The functionality is fairly simple, so we can just create a C# Script in the Project window and call it MyComputer.
In this example, UnityEvent is used to pass data, but a plain UnityEvent cannot pass a parameter when invoked. So we also need to declare a UnityEvent that can take a string argument, using the method described earlier. Because declaring such a derived class takes only one line, and we may declare several UnityEvents with different parameters in the future, we also create a C# Script in the Project window called PassEvents to hold these classes; once organized, they can be reused directly in other projects.
In the PassEvents Script, remember to add using UnityEngine.Events at the top. Only string passing is used in this demonstration, so we just declare a class that can pass a string.
using UnityEngine.Events;

[System.Serializable]
public class PassString : UnityEvent<string> {}
Don’t forget System.Serializable, otherwise you won’t see the field in the Inspector window.
In the MyComputer Script, our simple calculator mainly computes the result of two values. Because it works with the UI, the received values are strings, but the program must calculate with numbers, so we first declare two internally accessed fields to temporarily store the incoming values. The values come from the UGUI InputField's End Edit event, but MyComputer does not need to know that much: it just provides two properties so that external strings can be passed in, converts the incoming string to a number, and lets the calculation functions do their work. Who passes the value in is irrelevant.
Because the main job of the MyComputer Script is to receive the calculation data, calculate the result, and pass it out, we declare an event for each calculation result to express what has happened. Who finally receives the result is not something MyComputer needs to care about.
For now, MyComputer only provides the four functions of addition, subtraction, multiplication and division. Each function calculates the result from the stored values, converts it to a string, and Invokes the corresponding event. With that, the basic functions of the calculator are complete.
private float _value1;
private float _value2;

[SerializeField]
private PassString onAdd;
[SerializeField]
private PassString onSubtract;
[SerializeField]
private PassString onMultiply;
[SerializeField]
private PassString onDivide;

public string value1{
    set{ float.TryParse(value , out this._value1); }
}
public string value2{
    set{ float.TryParse(value , out this._value2); }
}

public void Add(){
    this.onAdd.Invoke((this._value1 + this._value2).ToString());
}
public void Subtract(){
    this.onSubtract.Invoke((this._value1 - this._value2).ToString());
}
public void Multiply(){
    this.onMultiply.Invoke((this._value1 * this._value2).ToString());
}
public void Divide(){
    if(this._value2 == 0) return;
    this.onDivide.Invoke((this._value1 / this._value2).ToString());
}
Note that Divide() returns early when the divisor is zero, so no division-by-zero result is ever passed out.
After writing the MyComputer Script, create an empty GameObject in Unity and add MyComputer to it as a Component. You can then clearly see the add, subtract, multiply, and divide event fields in the Inspector window.
MyComputer has event fields for addition, subtraction, multiplication, and division.
The next step is to set up the relationship between the UI and MyComputer. First, the values entered in the UI must be passed to MyComputer, so in the End Edit events of the two InputFields, set the input string to be passed to the value1 and value2 properties of MyComputer.
The string entered in the Value1 field is passed to value1 of MyComputer
The string entered in the Value2 field is passed to value2 of MyComputer
You can see here that End Edit is set up the same way as the PassString we declared. Once set, there is no need to specify what string to pass in the Inspector window, because the End Edit event is called with the field's text when editing finishes: when you press Enter, or click somewhere outside the text field. So once this is set up, whenever a UI text field finishes input, the text is passed to MyComputer, which, as written in our program, converts the received string to a number and stores it temporarily.
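For reference, the same wiring can also be done from code with AddListener instead of the Inspector; a sketch, assuming the serialized references are assigned (the CalculatorWiring name is made up):
using UnityEngine;
using UnityEngine.UI;
public class CalculatorWiring : MonoBehaviour {
    [SerializeField] private InputField inputValue1; // assumed reference
    [SerializeField] private MyComputer computer;    // assumed reference
    void Start(){
        // Equivalent to setting the End Edit event in the Inspector:
        this.inputValue1.onEndEdit.AddListener(text => this.computer.value1 = text);
    }
}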
Next, select the buttons representing the addition, subtraction, multiplication and division functions on the UI, and set their On Click event fields to execute the corresponding calculation function in MyComputer.
Button + On Click: MyComputer Add()
Button -'s On Click event executes MyComputer Subtract()
Button X’s On Click event executes MyComputer Multiply()
Button /'s On Click event executes MyComputer Divide()
The On Click event, which triggers execution when a button is clicked, should be familiar. Strictly speaking, it executes after the button is pressed and then released within the same button.
At this point, the UI provides input data to MyComputer and asks MyComputer to perform the calculations it needs. Next, we set up MyComputer's addition, subtraction, multiplication, and division events so that the UI can display the results.
Recall the main functions of the calculator we defined earlier: the first is to change the calculation symbol, so every calculation event must be set to change the symbol text on the UI when the event occurs. Although MyComputer's calculation events are meant to pass the result out from the program, the UnityEvent field can also pass static arguments set directly in the editor, so our code does not need to handle changing the symbol for each calculation; we simply set it in the Inspector window.
The second is to display the calculated result on the UI. Since the MyComputer code passes the calculated result directly to the corresponding event, we set the target of each event to the Text that displays the result on the UI.
Each calculation function event lets the UI change the notation and display the calculation results
This way, whenever MyComputer performs a calculation, it passes the string to the UI for display.
A picture of the results
Now that the calculator is fully functional, we may want some finer touches, such as disabling the function button of a calculation that has already been performed until the input values change. A UGUI Button has an Interactable property, which manages whether the user can interact with the button; when it is turned off, the Button loses its function and is shown dimmed. So we can make it so that, when a calculation result comes out, the Interactable of the corresponding function button is turned off. This requires no code changes at all: just add another execution target to each calculation function's event field.
Close the button after calculating the result.
All the calculated buttons are off.
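For reference, the same switch can also be flipped from code through the Button's interactable property; a small sketch with assumed references:
using UnityEngine;
using UnityEngine.UI;
public class ButtonToggle : MonoBehaviour {
    [SerializeField] private Button addButton; // assumed reference
    public void DisableAdd(){ this.addButton.interactable = false; }
    public void EnableAdd(){ this.addButton.interactable = true; }
}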
Although a button can now be turned off after its calculation, the user should get a chance to recalculate when an input field changes, so there must be a way to turn the buttons back on. We could enable Interactable for every button in the End Edit events of the two InputFields, but if the calculator had many function buttons and many input fields, setting them one by one would be far too fragmented.
So we can imagine that MyComputer, besides being responsible for calculation, also provides a reset function. The reset function itself does not perform anything; it just Invokes a status-reset event. Whatever targets are set on this event will be executed whenever the reset function is called. So we add the following code to the MyComputer Script:
[SerializeField]
private UnityEvent onResetStatus;

public void ResetStatus(){
    this.onResetStatus.Invoke();
}
Back in the Unity editor, we can set MyComputer's On Reset Status event field to turn the Interactable of the four function buttons back on.
Enable the button again when the status is reset.
Thus, the two InputField End Edit events specify that the state reset function of MyComputer is performed.
The End Edit event specifies that ResetStatus() of MyComputer be executed.
In the video, the reset event is used to re-enable the buttons, but any state-reset action can be attached to the same event. For example, the calculation result text could be changed to a question mark: then, every time an input field receives new input, not only are the disabled buttons re-enabled, but the result text also turns into a question mark, until a function button is pressed and the correct result is displayed.
Now that we have the state reset function, can we disable only the button of the current calculation and keep the others enabled? That would remove the need to re-enter data just to re-enable a button. Before doing this, note that each event field can specify multiple execution targets, and these targets execute in order, from the first to the last.
Therefore, without modifying any code, we can simply change the content and order of each calculation function's event targets: each calculation first performs the state reset, then, before the result is displayed, disables the current calculation's button and changes the calculation symbol.
Change to perform state resets before performing other actions.
At this point, the simple calculator is done, and we can see that the MyComputer Script is quite independent: it only provides an entry point where incoming data is temporarily stored, provides calculations for external callers, and throws the results of execution into events. Who passes the data in, who triggers the calculations, and who responds to the events, it does not need to know. As a result, even if none of the event fields have targets set, a requested calculation still executes as usual without errors, and deciding which targets to set is, basically, none of MyComputer's business. Everything is in the Inspector window and can be set or changed as needed.
The same goes for the state reset: the MyComputer Script provides it and performs it, regardless of who requests it or what the reset ends up doing.
This makes the programmer's job quite simple, and in practice the flexibility is enormous. Because some changes in requirements no longer depend on modifying code, we save compile time and avoid human errors introduced by code changes. Most importantly, because the MyComputer Script does not need to know who will call its functions or what its events will eventually execute, coupling between programs is minimized. If you move such a Script to another project, there is no special concern that associations with other programs will produce defects.
Here is the full content of MyComputer.cs:
using UnityEngine;
using UnityEngine.Events;

public class MyComputer : MonoBehaviour {

    private float _value1;
    private float _value2;

    [SerializeField]
    private PassString onAdd;
    [SerializeField]
    private PassString onSubtract;
    [SerializeField]
    private PassString onMultiply;
    [SerializeField]
    private PassString onDivide;
    [SerializeField]
    private UnityEvent onResetStatus;

    public string value1{
        set{ float.TryParse(value , out this._value1); }
    }
    public string value2{
        set{ float.TryParse(value , out this._value2); }
    }

    public void Add(){
        this.onAdd.Invoke((this._value1 + this._value2).ToString());
    }
    public void Subtract(){
        this.onSubtract.Invoke((this._value1 - this._value2).ToString());
    }
    public void Multiply(){
        this.onMultiply.Invoke((this._value1 * this._value2).ToString());
    }
    public void Divide(){
        if(this._value2 == 0) return;
        this.onDivide.Invoke((this._value1 / this._value2).ToString());
    }
    public void ResetStatus(){
        this.onResetStatus.Invoke();
    }
}
In the second example, five spheres each use the same Components individually, but because of different settings in actual use, they directly exhibit different behaviors. It also demonstrates how to let UnityEvent not only pass parameters out, but also bring data back.
First, create five spheres in the scene. These are just Unity's default primitive objects; Unity gives each one a Sphere Collider and a preset Material whose Shader is Unity 5's Standard Shader. We don't need to change any of that.
The Components of a primitive sphere object.
Here, the functional requirements of several spheres are defined:
- The sphere can be triggered by a click.
- The sphere can bounce up.
- Spheres can change color.
Based on these requirements, create their individual C# scripts in the Project window and name them SphereTouch, SphereJump, and SphereDiscolor.
Let's start with the code for SphereTouch. SphereTouch only provides an event response when the user clicks the sphere with the mouse. So SphereTouch has a UnityEvent event field and implements Unity's built-in OnMouseDown. As long as the GameObject has a Collider Component, pressing the mouse button on it triggers OnMouseDown, whose content here simply calls the UnityEvent event field. What should happen in reaction to the click is someone else's business.
using UnityEngine;
using UnityEngine.Events;

public class SphereTouch : MonoBehaviour {

    [SerializeField]
    private UnityEvent onTouch;

    public void DoTouch(){
        this.onTouch.Invoke();
    }

    void OnMouseDown(){
        this.DoTouch();
    }
}
This code might raise a question: why not Invoke the event directly inside OnMouseDown? Essentially, this just gives SphereTouch one more externally callable function, so that other objects can relay clicks. For example, if object A is clicked by the mouse and its On Touch event field is set to execute the DoTouch function of objects B, C and D, then clicking one object makes four objects react at the same time.
There are many ways to make a sphere jump, such as using Unity's animation system, or giving the object an upward push and letting gravity bring it down. To keep things simple, we make it jump in code by moving its position. A jump is, after all, moving from the original position up to a specified height and then back to the original position. How high, and how fast, are not known in advance, so we first declare two numeric fields that can be set in the Inspector window, letting us adjust the jump height and speed in the editor.
[SerializeField]
private float height = 1;
[SerializeField]
private float speed = 5;
Also, if a jump is requested again while a jump is already in progress, the sphere would jump again from halfway, which is problematic behavior. So a status record must be kept: while an action is in progress, no new jump is accepted; only after the action completes do we accept and perform the next request.
private enum Status{
None,
Moving
}
private Status _status = Status.None;
Because the jumping behavior is really a combination of two movements, we need a function that moves the object itself from a starting point to an end point. Movement between two points can be achieved directly with Unity's built-in Vector3.Lerp.
private IEnumerator Move(Vector3 source , Vector3 target){
float t = 0;
while(t < 1){
transform.position = Vector3.Lerp(source , target , t);
t += Time.deltaTime * this.speed;
yield return null;
}
transform.position = target;
}
The t of Vector3.Lerp is a value between 0 and 1, which can be regarded as the progress from the starting point to the end point: 0 is the start and 1 is the end. Instead of having t increase by a fixed value each frame, we multiply our expected increase (i.e. speed) by Time.deltaTime. Here we use yield return null to make the while loop execute once per frame; when t exceeds 1, the end has been reached and the loop can finish.
The final step completes the move: the last execution of Vector3.Lerp will almost never land exactly on t = 1, so the position must be corrected to the exact destination at the end, for the move to be truly complete.
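A quick worked check of the interpolation (a throwaway sketch; the values are chosen only for illustration):
using UnityEngine;
public class LerpCheck : MonoBehaviour {
    void Start(){
        Vector3 a = Vector3.zero;
        Vector3 b = new Vector3(0f , 2f , 0f);
        // t = 0 gives the start, t = 1 the end, t = 0.5 the midpoint.
        Debug.Log(Vector3.Lerp(a , b , 0.5f)); // (0.0, 1.0, 0.0)
        // t is clamped, so values above 1 still return the end point.
        Debug.Log(Vector3.Lerp(a , b , 1.5f)); // (0.0, 2.0, 0.0)
    }
}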
Note that the method returns an IEnumerator, which is used as a Coroutine, so yield can be used inside it to control flow and timing. To run the method, call it through StartCoroutine.
With the movement done, the next step is the jump itself: at the start of the jump, set the status to Moving, obtain the start and end positions, first move to the end point, then, after that completes, move back to the starting point. When the whole action is finished the jump is over, so the status can be changed back to None.
private IEnumerator DoJump(){
    this._status = Status.Moving;
    Vector3 source = transform.position;
    Vector3 target = source;
    target.y += this.height;
    yield return StartCoroutine(this.Move(source , target));
    yield return StartCoroutine(this.Move(target , source));
    this._status = Status.None;
}
Now that the jumping behavior is written, we need to provide a function for external callers. When called, it first checks whether a jump is already in progress, and only starts a new jump if not.
public void Jump(){
if(this._status == Status.None) StartCoroutine(this.DoJump());
}
With that, SphereJump is complete, and notice that it does not use any UnityEvent at all; it is only responsible for executing the action it is asked to perform. It could, of course, also provide basic events such as the start, middle, and end of a jump as required, but they are not used in this demonstration, so they are omitted here.
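As a hedged sketch of what such events might look like, here is a hypothetical variant; the NotifyingJump name and its onJumpStart/onJumpEnd fields are inventions, not part of the demo project:
using UnityEngine;
using UnityEngine.Events;
using System.Collections;
public class NotifyingJump : MonoBehaviour {
    [SerializeField] private UnityEvent onJumpStart;
    [SerializeField] private UnityEvent onJumpEnd;
    [SerializeField] private float height = 1;
    [SerializeField] private float speed = 5;
    private bool _moving;
    public void Jump(){
        if(!this._moving) StartCoroutine(this.DoJump());
    }
    private IEnumerator DoJump(){
        this._moving = true;
        this.onJumpStart.Invoke(); // raised as the jump begins
        Vector3 source = transform.position;
        Vector3 target = source + Vector3.up * this.height;
        yield return StartCoroutine(this.Move(source , target));
        yield return StartCoroutine(this.Move(target , source));
        this.onJumpEnd.Invoke();   // raised when the sphere is back down
        this._moving = false;
    }
    private IEnumerator Move(Vector3 from , Vector3 to){
        float t = 0;
        while(t < 1){
            transform.position = Vector3.Lerp(from , to , t);
            t += Time.deltaTime * this.speed;
            yield return null;
        }
        transform.position = to;
    }
}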
Here is the contents of SphereJump.cs:
using UnityEngine;
using System.Collections;

public class SphereJump : MonoBehaviour {

    private enum Status{
        None,
        Moving
    }

    [SerializeField]
    private float height = 1;
    [SerializeField]
    private float speed = 5;

    private Status _status = Status.None;

    public void Jump(){
        if(this._status == Status.None) StartCoroutine(this.DoJump());
    }

    private IEnumerator Move(Vector3 source , Vector3 target){
        float t = 0;
        while(t < 1){
            transform.position = Vector3.Lerp(source , target , t);
            t += Time.deltaTime * this.speed;
            yield return null;
        }
        transform.position = target;
    }

    private IEnumerator DoJump(){
        this._status = Status.Moving;
        Vector3 source = transform.position;
        Vector3 target = source;
        target.y += this.height;
        yield return StartCoroutine(this.Move(source , target));
        yield return StartCoroutine(this.Move(target , source));
        this._status = Status.None;
    }
}
Before writing SphereDiscolor, we go back to the PassEvents Script file and declare a UnityEvent that can pass a color parameter.
[System.Serializable]
public class PassColor : UnityEvent<Color> {}
In SphereDiscolor, two variable fields are declared to temporarily store the Material and the Material's color. A field is also declared to set the sphere's preset color. In Awake, the sphere's own Material is stored, and the sphere's color is changed to the preset color.
private Material _material;
private Color _color;
[SerializeField]
private Color color = Color.white;
void Awake(){
this._material = GetComponent<Renderer>().material;
this.DefaultColor();
}
public void DefaultColor(){
this._material.color = this.color;
this._color = this.color;
}
The ability to change back to the preset color is also a separate function available for external callers.
Then, for changing the sphere's color, we declare both a function that changes directly to a specified color and a function that changes to a random color, both available for external callers.
Here we also declare a UnityEvent event field that can pass a color value. When the color is changed, the event is executed and the new color is passed out. So when the sphere's color changes, you can have it trigger other behaviors, or even provide the color to influence the triggered behavior.
[SerializeField]
private PassColor onChangeColor;
void Awake(){
this._material = GetComponent<Renderer>().material;
this.DefaultColor();
}
public void DefaultColor(){
this._material.color = this.color;
this._color = this.color;
}
public void Discolor(Color color){
this._material.color = color;
this._color = color;
this.onChangeColor.Invoke(color);
}
public void RandomColor(){
this.Discolor(new Color(Random.value , Random.value , Random.value));
}
At this point, the programming part of this example is over, and back to the Unity screen, add these three scripts as components for each sphere.
One might wonder at this point: why split this into three Scripts instead of one? Unity game objects gain their functions in a component-oriented way: an object has a capability when it has the corresponding Component, and loses it when that Component is removed. That is why we separate the functions into individual Scripts. Each Script only provides its own function and nothing else, so a sphere can be clicked when it has the SphereTouch Component and cannot when it doesn't, and it can jump when it has SphereJump. In this way, we can explicitly compose game objects and change their abilities.
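As a sketch of what changing abilities means in code, a Component can be granted or revoked at runtime (the AbilityToggle helper is hypothetical):
using UnityEngine;
public class AbilityToggle : MonoBehaviour {
    // Grant the click ability if the sphere does not have it yet.
    public void MakeTouchable(GameObject sphere){
        if(sphere.GetComponent<SphereTouch>() == null)
            sphere.AddComponent<SphereTouch>();
    }
    // Revoke the click ability by removing the Component.
    public void MakeUntouchable(GameObject sphere){
        SphereTouch touch = sphere.GetComponent<SphereTouch>();
        if(touch != null) Destroy(touch);
    }
}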
So, since every sphere has the color-change function, and that function has a preset color field, each sphere changes to its own specified initial color when Play Mode starts.
Then, since every sphere has the touch function, we can specify for each sphere what behavior its click should trigger. As shown in the video, each sphere can be set so that, when clicked, it asks the next ball to jump and the previous ball to change to a random color.
When the second ball setting is clicked, the third ball jumps and the first ball randomly changes color.
When the last ball is clicked, it returns the first four balls to their original color.
When the fifth ball is clicked, the first four balls change to their original color.
Then, we can also set the second ball to affect the color of the other balls when its color is changed.
When the second ball changes color, change the first and fifth balls to random colors.
Here, again, we experience the benefit and flexibility: simply write a Script that provides a function, without specifying in the code whom it affects, and the behavior can be changed freely in the editor.
Although a UnityEvent target can be given a fixed value directly in the Inspector window, or be called from code with Invoke and parameters passed in, it cannot return data the way an ordinary method call does, which is a bit of a fly in the ointment.
The workaround relies on reference types: we can instantiate a reference object and pass it in as a UnityEvent parameter. When the data held by that object is changed inside the function UnityEvent executes, the caller that invoked the event can then read the changed data back from the original object.
However, declaring many different classes just to bring back different types of data seems too cumbersome, so it is better to make a general-purpose class that serves as a holder object for passing data. So we create a C# Script called PassHolder to do just that.
public class PassHolder {

    public object value { private get; set; }

    public T GetValue<T>(){
        if(this.value == null) return default(T);
        return (T)this.value;
    }
}
Since all types inherit directly or indirectly from object (not Unity's Object), declaring a property of type object accepts values of any type, and the stored data is retrieved through a generic method: it simply checks whether there is any data, returns the expected type's default if there is none, and otherwise casts the value back. In this way, objects of this class can generically carry data of any type.
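A quick usage sketch (the stored values are arbitrary):
using UnityEngine;
public class PassHolderCheck : MonoBehaviour {
    void Start(){
        // Store a value of any type, then read it back as the expected type.
        PassHolder holder = new PassHolder();
        holder.value = Color.red;
        Debug.Log(holder.GetValue<Color>()); // RGBA(1.000, 0.000, 0.000, 1.000)
        // With nothing stored, GetValue returns the type's default.
        Debug.Log(new PassHolder().GetValue<int>()); // 0
    }
}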
With a class like this, we can experiment on SphereDiscolor by adding the ability to swap colors. Before declaring the UnityEvent field that can pass a PassHolder, go back to the PassEvents Script file and declare a type that can pass both a color and a PassHolder parameter.
[System.Serializable]
public class PassColorReturn : UnityEvent<Color , PassHolder> {}
What this function does is: when called, it sends out its own color through the swap-color event, and then changes its own color to the color brought back in the PassHolder.
In our current example, the thing that can change color is SphereDiscolor, so we add one more color-change function to it that receives a PassHolder object in addition to the target color. Besides taking the received color and applying it to itself, it also writes its original color into the PassHolder, so that whoever called the function receives the color value brought back.
[SerializeField]
private PassColorReturn onSwapColor;
public void SwapColor(){
PassHolder holder = new PassHolder();
this.onSwapColor.Invoke(this._color , holder);
this.Discolor(holder.GetValue<Color>());
}
public void Discolor(Color color , PassHolder holder){
holder.value = this._color;
this.Discolor(color);
}
Once done, in the Inspector window of the Unity editor, you can specify directly in the On Swap Color event field which ball to swap colors with.
When the fourth ball is clicked, make the fifth ball jump, the third ball change color, ask yourself to perform the color swap and specify the color swap with the first ball.
In this way, short pieces of code define their own functions and when their events should be raised, without specifying in code what types or functions they affect; everything is found in the Inspector window, and GameObjects with the same Components can flexibly be configured with completely different behaviors. The program also won't fail because a type is wrong or the number of parameters doesn't match, making execution and design more flexible and more stable, while the code itself becomes more concise and its logic clearer. Maintenance and flow adjustments also become more visual and easier to see.
Previously, I published the article and video "Unity: Using UGUI ScrollRect to make a virtual joystick", in which the events the virtual joystick uses to transfer operation behavior are an application of UnityEngine.Events. Using these methods, the programs you write will gain greatly in reuse and scalability.
Ok, this is the end of the description and demonstration of UnityEngine.Events. If you like this article or the video shown, please help to introduce it to your friends, and don’t forget to subscribe to the video channel and click like on the fan page.
UnityEvent
Docs.unity3d.com/ScriptRefer…
Current version: Unity 5.2.1f1