For example, compared with HTTP/1, HTTP/2 went to great lengths to improve performance and basically solved the "head-of-line blocking" problem. Wait, you may ask: why did it solve it only "basically" rather than "completely"?

This is because although HTTP/2 uses "frames", "streams", and "multiplexing" so that there is no head-of-line blocking at the HTTP level, these mechanisms all work in the application layer. At the layer below, that is, inside the TCP protocol, "head-of-line blocking" can still occur.

What happens there? Let's take a closer look from the perspective of the protocol stack.

In HTTP/2, multiple "request-response" exchanges are decomposed into streams. After they are handed over to TCP, TCP splits the data into smaller packets and sends them in turn (strictly speaking, in TCP these are called "segments", i.e., the result of "splitting").

When the network is good, the packets reach the destination quickly. But when network quality is poor, for example when browsing on a mobile phone, packets may be lost. To guarantee reliable transmission, TCP has a dedicated "packet loss retransmission" mechanism: a lost packet must wait to be retransmitted and acknowledged, and even if the packets behind it have already arrived, they can only sit in the receive buffer. They cannot be handed up to the application layer; they can only "wait idly".
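To make the "wait idly" part concrete, here is a minimal sketch (not real TCP, and not code from this article) of a receiver that may only deliver bytes in order. The class name InOrderReceiveBuffer and the sequence numbers are made up for illustration; the point is simply that segments arriving after a gap stay buffered until the lost segment is retransmitted.

```python
# A minimal sketch of TCP-style in-order delivery: segments that arrive
# after a gap are buffered and cannot be handed to the application
# until the missing segment shows up again.

class InOrderReceiveBuffer:
    def __init__(self):
        self.next_seq = 0          # next byte offset the application expects
        self.out_of_order = {}     # seq -> data, segments stuck behind the gap

    def receive(self, seq, data):
        """Accept one segment; return whatever can now be delivered in order."""
        self.out_of_order[seq] = data
        delivered = []
        # Drain the buffer only while the data is contiguous.
        while self.next_seq in self.out_of_order:
            chunk = self.out_of_order.pop(self.next_seq)
            delivered.append(chunk)
            self.next_seq += len(chunk)
        return b"".join(delivered)


buf = InOrderReceiveBuffer()
print(buf.receive(0, b"AAAA"))   # b'AAAA'  -> delivered immediately
print(buf.receive(8, b"CCCC"))   # b''      -> blocked: bytes 4..7 were lost
print(buf.receive(12, b"DDDD"))  # b''      -> still blocked behind the same gap
print(buf.receive(4, b"BBBB"))   # b'BBBBCCCCDDDD' -> retransmission unblocks all
```

Note that the blocked segments may belong to completely different HTTP/2 streams: because TCP sees only one byte stream, one lost packet stalls every stream multiplexed on top of it.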