Following on from the last lecture, this lesson describes how the memory manager operates on the heap.
Each time it is called, malloc requests the required number of bytes from the heap area and records the size of the allocated block. That recorded size is what the free function later uses to release the block.
void* arr = malloc(40 * sizeof(int));
Intuitively, this call returns 160 bytes of space, but the space actually consumed is more than 160 bytes, because bookkeeping information, such as the size of the block, is recorded as part of the allocation. As shown in the figure below:
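To make this concrete, here is a minimal sketch of the idea, using a toy allocator of my own (the names my_malloc and my_free are made up for illustration; this is not how the real malloc is implemented): the block size is recorded in a header placed just before the pointer handed back to the caller.

#include <stdlib.h>

/* Illustrative only: a toy allocator that records the block size in a
 * header just before the pointer returned to the caller.
 * (A real allocator would also worry about alignment.) */
typedef struct {
    size_t size;   /* size of the usable region that follows */
} header_t;

void *my_malloc(size_t size) {
    header_t *h = malloc(sizeof(header_t) + size);  /* extra room for the header */
    if (h == NULL) return NULL;
    h->size = size;          /* record the requested size */
    return h + 1;            /* hand back the memory just past the header */
}

void my_free(void *ptr) {
    if (ptr == NULL) return;
    header_t *h = (header_t *)ptr - 1;  /* back up to the header to find the size */
    free(h);                            /* h->size tells us how much to reclaim */
}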
If you execute the following code:
int *arr = malloc(40 * sizeof(int));
free(arr + 60);   // wrong: this is not a pointer that malloc returned
This can cause problems because free generally doesn’t do any error checking for you. By default, it simply looks just before the pointer it is given to read the recorded block size.
Different malloc implementations use different heuristic strategies to allocate memory.
For example, some organize the memory in the heap area into power-of-two-sized blocks: if we need 7 bytes, the allocator hands us 8 bytes (2^3). You may get more space than you asked for, but allocation becomes simpler and faster.
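As a small illustration (my own sketch, not code from the lecture), rounding a request up to the next power of two could look like this:

#include <stddef.h>

/* Round a request up to the next power of two (illustrative sketch). */
static size_t round_up_pow2(size_t n) {
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* round_up_pow2(7) == 8, round_up_pow2(100) == 128 */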
The organization of memory in the heap area
Free space in the heap area is organized as a linked list, with each free block storing a pointer to the next free block. The memory manager walks this list to find a suitable block, typically using a first-fit, best-fit, or next-fit rule.
To free a block, we just add it back to the linked list, as shown in the figure below. A sketch of both ideas follows.
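Here is a hedged sketch of the idea, with names of my own invention (free_block, first_fit, add_to_free_list); it is not the actual allocator’s code, just an illustration of a free list with a first-fit search:

#include <stddef.h>

/* Illustrative free-list node: each free block stores its size and a
 * pointer to the next free block. */
typedef struct free_block {
    size_t size;
    struct free_block *next;
} free_block;

/* First-fit: walk the list and take the first block that is big enough. */
free_block *first_fit(free_block *head, size_t need) {
    for (free_block *cur = head; cur != NULL; cur = cur->next) {
        if (cur->size >= need)
            return cur;
    }
    return NULL;   /* no single free block is large enough (fragmentation) */
}

/* Freeing a block is just pushing it back onto the front of the list. */
free_block *add_to_free_list(free_block *head, free_block *blk) {
    blk->next = head;
    return blk;
}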
Now let’s look at another case. If memory looks like the figure below and we need 100 bytes, the heap cannot satisfy us: although the free space totals 160 bytes, it is fragmented into small blocks, none of which is large enough for our request.
So a natural thought would be something like this:
We compact the in-use memory blocks together at the beginning of the heap area, so that the free space becomes one large contiguous block we can use.
But this creates a problem: the pointers the user holds still refer to the old addresses, even though the data has moved. In the video, the teacher says that none of the current mainstream systems compact the heap this way; only the old Mac systems did. Let’s see how they did it.
First, the pointer the user holds actually points to a handle structure, which contains bookkeeping information including the current address of the allocated block in the heap. If the heap is compacted, only the handle is updated; the pointers held by the user are unaffected. This creates a new problem, however: early Mac systems ran heap compaction on a low-priority background thread (raising its priority if compaction was forced), and dereferencing a handle and compacting the heap must not happen at the same time. That is why a block has to be locked while it is in use.
To manipulate the heap memory, we need to do the following
void **handle = NewHandle(40);
HandleLock(handle);
// ... work with the locked memory ...
HandleUnlock(handle);
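To see why the extra level of indirection helps, here is an illustrative sketch (my own names, not the real Mac Toolbox implementation): a handle is a pointer to a master pointer owned by the heap manager, so compaction only has to rewrite the master pointer, and every handle the user holds stays valid.

/* Illustrative sketch of the handle idea (not the real Mac API). */
typedef void **Handle;   /* a handle is a pointer to a master pointer */

/* When the compactor moves a block, only the master pointer is rewritten;
 * every Handle the user holds keeps pointing at the same master pointer. */
void relocate_block(Handle h, void *new_address) {
    *h = new_address;    /* done by the heap manager during compaction */
}

/* The user dereferences the handle at the moment of use, while the block
 * is locked so the compactor cannot move it out from under them:
 *     char *p = *(char **)h;
 */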
Next, let’s look at the stack area
The stack area
The space used by the stack area grows and shrinks with the number of functions that are currently active.
void B(void);
void C(void);

void A(void) {
    int a;
    short b[2];
    double c;
    B();
    C();
}

void B(void) {
    int d;
}

void C(void) {
    double e;
}
As the code above executes, the stack area allocates space for each function’s local variables, and the stack pointer moves down as frames are pushed and back up as functions return.
In addition, a function should only access the variables inside its own region of the stack, i.e. its current frame.
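As a rough accounting (assuming 4-byte ints, 2-byte shorts, and 8-byte doubles, and ignoring alignment padding and the saved return address), A’s frame needs 4 + 2*2 + 8 = 16 bytes for its locals, B’s needs 4 bytes, and C’s needs 8 bytes. When A calls B, the stack pointer moves down another 4 bytes; when B returns, that space is reclaimed, and the later call to C reuses the same region of the stack.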
Assembly parts
The code we write is first compiled into assembly code, which is then translated into machine instructions; the code segment (text segment) stores those machine instructions.
Before we begin our introduction to assembly language, let’s take a look at how instructions are executed.
Every computation is performed by the arithmetic logic unit (ALU). The data the ALU works on comes from the registers, and the data in the registers comes from memory, so the registers act as a kind of cache between the ALU and memory. In this course, we assume there are 16 general-purpose registers (R1, R2, …, R16).
For example, when we execute j += 1;, we first load the value of j from memory into R1 and the constant 1 into R2; the ALU adds them and puts the result into R3, and R3 is then stored back into j’s location in memory.
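Written out in an illustrative load/store notation (my own shorthand in the spirit of the course’s register pseudo-code, not any real instruction set), the sequence might look like this:

R1 = M[j]        ; load the current value of j from memory into R1
R2 = 1           ; load the constant 1 into R2
R3 = R1 + R2     ; the ALU adds the two registers, result goes to R3
M[j] = R3        ; store the result back into j’s memory location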
I’m going to stop there, and I’ll talk about that next time.