The concept of queues

Before diving into the implementation, let's review the basic concept of a queue:

Queue: a linear First-In-First-Out (FIFO) list that allows insertion (enqueue) at one end and deletion (dequeue) at the other.

Characteristics of queues

A queue works like the line at a ticket window: the first person to arrive buys a ticket first and leaves first; people who arrive later can only buy tickets after those ahead of them.

There are two common forms of queue:

The ordinary queue

In a computer, every piece of information is stored in a memory cell. The small squares in the figure above are, figuratively speaking, memory cells; you can think of them as an ordinary array that holds our data.

When there is a lot of data, we cannot store it all at once, so the computer processes the data at the front first; once a piece is finished, its cell is released and the next piece is handled. In an ordinary queue, however, the memory of already-processed data is wasted, because later data can only join at the back. Shifting all the remaining data forward after every dequeue would be inefficient and unrealistic, so the ring queue was invented.

The circular queue

A circular queue is arranged as a ring, which avoids the drawback of the ordinary queue. It is slightly harder to understand, but it is still a queue: it has the same queue head and queue tail, and it is still first in, first out (FIFO). The positions are ordered clockwise around the ring.

  • Queue Head: the end of the queue where deletion (reading) is allowed is called the Head.
  • Queue Tail: the end of the queue where insertion (writing) is allowed is called the Tail.

Ring queue implementation: computer memory is not physically circular, but we can treat a contiguous block of memory as a ring by wrapping its indices around, connecting its end back to its beginning. In simple terms, this is just an array with two pointers (indices): one points to the queue head and one points to the queue tail. The head pointer (Head) marks the next readable data in the buffer, and the tail pointer (Tail) marks the next writable position. By moving these two pointers (Head and Tail) we read and write the data in the buffer; once read, a slot is freed and new data can be stored in it again, until the buffer fills up.

When the first byte arrives, it is stored at address '0' and the queue tail moves to '1', the next free slot. When another byte arrives, it is stored at address '1' and the tail moves to address '2'. Processing always starts with the data at the queue head, here the byte at address '0'. Once that byte is processed, slot '0' is released and the head advances to the next slot to be processed. In this way, reads and writes circulate around the entire ring buffer.

If you look at the picture, the queue head points to data that has already been stored and is waiting to be processed; the next value the CPU will process is 1. The queue tail points to an address where data can be written. After the 1 is processed, it is removed and the head moves on to the 2. When a 6 is written, the tail pointer advances to the next writable address.

From the queue to a serial buffer

Serial ring-buffer sending and receiving: in many introductory tutorials, serial communication is demonstrated as receiving a byte, triggering an interrupt, and sending the byte straight back. There is no buffering, so when the data volume is too large, or data arrives faster than it can be processed, newly received data overwrites data that has not yet been handled. The result is packet loss, which is a fatal wound to our program.

So how can we avoid this situation? The queue properties described above make the solution easy: cache the received data so that processing gets some slack and its speed can catch up with the speed of reception. Having weighed the pros and cons of the ordinary queue and the ring queue above, we will of course use the ring queue. Here is the code:

Define a structure:

```c
typedef struct {
    u16 Head;
    u16 Tail;
    u16 Lenght;
    u8  Ring_Buff[RINGBUFF_LEN];
} RingBuff_t;

RingBuff_t ringBuff;    /* create a ringBuff buffer */
```

Initialization

Initialize the structure's fields so that the ring buffer starts out connected end to end and contains no data, i.e., an empty queue.

```c
/**
 * @brief   RingBuff_Init
 * @param   void
 * @return  void
 * @author  Jie Jie
 * @date    2018
 * @version v1.0
 * @note    Initializes the ring buffer
 */
void RingBuff_Init(void)
{
    /* initialize the bookkeeping fields */
    ringBuff.Head   = 0;
    ringBuff.Tail   = 0;
    ringBuff.Lenght = 0;
}
```

The initialization effect is as follows:

Writing to the ring buffer:

```c
/**
 * @brief   Write_RingBuff
 * @param   u8 data
 * @return  FLASE: the ring buffer is full, write failed; TRUE: write succeeded
 * @note    Writes one byte into the ring buffer
 */
u8 Write_RingBuff(u8 data)
{
    if (ringBuff.Lenght >= RINGBUFF_LEN)    /* check whether the buffer is full */
    {
        return FLASE;
    }
    ringBuff.Ring_Buff[ringBuff.Tail] = data;
    ringBuff.Tail = (ringBuff.Tail + 1) % RINGBUFF_LEN;  /* wrap to prevent out-of-bounds access */
    ringBuff.Lenght++;
    return TRUE;
}
```

Reading from the ring buffer:

```c
/**
 * @brief   Read_RingBuff
 * @param   u8 *rData, used to store the byte read out
 * @return  FLASE: the ring buffer is empty, read failed; TRUE: read succeeded
 * @note    Reads one byte from the ring buffer
 */
u8 Read_RingBuff(u8 *rData)
{
    if (ringBuff.Lenght == 0)    /* check whether the buffer is empty */
    {
        return FLASE;
    }
    *rData = ringBuff.Ring_Buff[ringBuff.Head];
    ringBuff.Head = (ringBuff.Head + 1) % RINGBUFF_LEN;  /* wrap to prevent out-of-bounds access */
    ringBuff.Lenght--;
    return TRUE;
}
```

There are two things to note about read and write operations:

  1. Check whether the queue is empty or full. If it is empty, no data may be read and FLASE is returned; if it is full, no data may be written, to avoid overwriting data that has not yet been processed. If the processing speed cannot keep up with the receiving speed, increase the buffer size appropriately, trading space for time.
  2. Guard the pointers against out-of-range access; the code handles this with the modulo operation, but users still need to keep the overall buffer size in mind.

Then, in the serial receive interrupt handler:

```c
void USART1_IRQHandler(void)
{
    if (USART_GetITStatus(USART1, USART_IT_RXNE) != RESET)  /* receive interrupt */
    {
        USART_ClearITPendingBit(USART1, USART_IT_RXNE);     /* clear the flag bit */
        Write_RingBuff(USART_ReceiveData(USART1));          /* buffer the received byte */
    }
}
```

Test results

No packet loss occurred in the test data.

Supplement

At this stage, I (Jie Jie) am gradually learning to write more standardized code. All the code snippets here are readable and portable. I use a macro definition to decide whether to enable ring-buffered serial reception, so porting it into your code has no other side effects; just turn on the macro to use it.

```c
#define USER_RINGBUFF 1    /* use a ring buffer to receive serial data */

#if USER_RINGBUFF
/*** If a ring buffer is used to receive serial data ***/
#define RINGBUFF_LEN 200   /* maximum number of bytes buffered: 200 */
#define FLASE 1
#define TRUE  0

void RingBuff_Init(void);
u8 Write_RingBuff(u8 data);
u8 Read_RingBuff(u8 *rData);
#endif
```

Of course, we could use the idle-line interrupt and DMA transfers for even higher efficiency, but some microcontrollers have neither, and then this ring buffer becomes very important, and it is easy to port.

Note: part of this article comes from the course of Professor James_Yuan.

If you found this useful, follow me!

The complete code can be obtained from the back end of the public account.

Welcome to follow the "Internet of Things Development" public account.