Shenzhen Kai Mo Rui Electronic Technology Co. LTD


The difference between HTTP protocol/RTSP protocol/RTMP protocol

Source: Shenzhen Kai Mo Rui Electronic Technology Co. LTD, 2020-06-20

  Common points and differences between RTSP, RTMP and HTTP
  Common points:
  1: RTSP, RTMP, and HTTP are all application-layer protocols.
  2: In theory, RTSP, RTMP, and HTTP can all be used for both live streaming and on-demand playback, but in practice RTSP and RTMP are generally used for live streaming and HTTP for on-demand. Video conferencing originally used the SIP protocol, but that has now largely been replaced by RTMP.
  Differences:
  1: HTTP: HyperText Transfer Protocol (FTP, by contrast, is the File Transfer Protocol).
  RTSP: Real Time Streaming Protocol, a protocol for real-time streaming.
  RTMP: Real-Time Messaging Protocol.
  2: HTTP treats all data as files. The http protocol is not a streaming media protocol.
  RTMP and RTSP protocols are streaming media protocols.
  3: The RTMP protocol is a proprietary Adobe protocol whose specification is not fully public. RTSP and HTTP are open protocols maintained by standards organizations.
  4: The RTMP protocol generally carries FLV/F4V-format streams, while RTSP generally carries TS or MP4 streams. HTTP is not tied to a specific stream format.
  5: RTSP transmission generally requires 2-3 channels, with the command channel separated from the data channel. HTTP and RTMP generally carry both commands and data over a single TCP channel.
  Differences between RTSP, RTCP, and RTP
  1: RTSP real-time streaming protocol
  As an application-layer protocol, RTSP provides an extensible framework whose purpose is to make controlled, on-demand delivery of real-time streaming data possible. In short, RTSP is a streaming-media presentation and control protocol: it is mainly used to control data transmission with real-time characteristics, but it does not carry the data itself and must rely on services provided by an underlying transport protocol. RTSP provides operations such as play, pause, and fast forward for streaming media; it defines the specific control messages, operation methods, and status codes, and also describes its interaction with RTP (RFC 2326).
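  For concreteness, here is a minimal sketch of what such RTSP control messages look like on the wire, written in Python over a raw TCP socket. The server address and stream path are placeholders (not from this article), and a real client would go on to issue SETUP and PLAY.

      import socket

      # Minimal RTSP exchange over TCP port 554 (RTSP is text-based, like HTTP).
      # SERVER and STREAM are hypothetical placeholders for illustration only.
      SERVER = "192.0.2.10"
      STREAM = f"rtsp://{SERVER}/stream1"

      def rtsp_request(sock, method, url, cseq, extra=""):
          """Send one RTSP request and return the raw text response."""
          req = f"{method} {url} RTSP/1.0\r\nCSeq: {cseq}\r\n{extra}\r\n"
          sock.sendall(req.encode("ascii"))
          return sock.recv(4096).decode("ascii", errors="replace")

      with socket.create_connection((SERVER, 554), timeout=5) as s:
          print(rtsp_request(s, "OPTIONS", STREAM, 1))
          print(rtsp_request(s, "DESCRIBE", STREAM, 2, "Accept: application/sdp\r\n"))
          # A real client would continue with SETUP (negotiating the RTP/RTCP
          # transport) and PLAY; PAUSE and TEARDOWN map to the control
          # operations described above.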
  2: RTCP control protocol
  The RTCP control protocol must be used together with the RTP data protocol. When an application starts an RTP session, it occupies two ports at the same time, one for RTP and one for RTCP. RTP itself cannot guarantee ordered delivery of data packets, nor does it provide flow control or congestion control; these are handled by RTCP. RTCP usually uses the same distribution mechanism as RTP, periodically sending control information to all members of the session. Applications receive this data and obtain information about the session participants, as well as feedback such as network status and packet-loss probability, which can be used to control quality of service or to diagnose the network.
  The functions of the RTCP protocol are realized through different types of RTCP datagrams, mainly the following (a small parsing sketch follows the list):
  SR: Sender report. The sender is the application or terminal that sends out RTP datagrams; a sender can also be a receiver. (The server sends this to the client at fixed intervals.)
  RR: Receiver report. The receiver is an application or terminal that only receives and does not send RTP datagrams. (The server receives this response from the client.)
  SDES: Source description. Its main function is to carry identification information about session members, such as user names, e-mail addresses, and telephone numbers; it can also convey session-control information to session members.
  BYE: Leave notification. It indicates that one or more sources are no longer valid, i.e. it notifies the other members that a participant is leaving the session.
  APP: Application-defined. It is defined by the application itself, which solves RTCP's extensibility problem and gives protocol implementers great flexibility.
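  As a rough illustration of how these report types appear on the wire, the sketch below (Python) reads the common RTCP header and maps the packet-type byte to the names above; the type values SR=200, RR=201, SDES=202, BYE=203, APP=204 come from RFC 3550, and the sample packet is fabricated.

      import struct

      # RTCP packet-type values from RFC 3550.
      RTCP_TYPES = {200: "SR", 201: "RR", 202: "SDES", 203: "BYE", 204: "APP"}

      def parse_rtcp_header(datagram: bytes):
          """Return (version, packet-type name, packet length in bytes)."""
          first, pt, length = struct.unpack("!BBH", datagram[:4])
          version = first >> 6                      # top two bits of the first byte
          return version, RTCP_TYPES.get(pt, f"unknown({pt})"), (length + 1) * 4

      # Example: a fabricated 8-byte BYE packet (version 2, one source).
      print(parse_rtcp_header(bytes([0x81, 203, 0x00, 0x01, 0, 0, 0, 1])))
      # -> (2, 'BYE', 8)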
  3: RTP data protocol
  The RTP data protocol is responsible for encapsulating streaming-media data and delivering media streams in real time. Each RTP datagram consists of two parts, a header and a payload: the first 12 bytes of the header have a fixed meaning, and the payload can be audio or video data.
  RTP comes into play at PLAY time: the server transmits data to the client over UDP, and RTP adds a 12-byte header (description information) in front of the transmitted data.
  RTP payload encapsulation: the network transport discussed here is based on the IP protocol, so the maximum transmission unit (MTU) is 1500 bytes. With the IP/UDP/RTP protocol stack, this includes at least a 20-byte IP header, an 8-byte UDP header, and a 12-byte RTP header. Header information therefore occupies at least 40 bytes, so the maximum RTP payload size is 1460 bytes. Taking H.264 as an example, if a frame is larger than 1460 bytes it must be fragmented into multiple packets and then reassembled at the receiving end into a complete frame before decoding and playback.
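  The arithmetic above can be written out directly. The sketch below (Python) computes the 1460-byte payload limit, estimates how many RTP packets an H.264 frame of a given size needs, and builds the fixed 12-byte RTP header; the 40 KB frame size and payload type 96 are purely illustrative values.

      import struct

      MTU, IP_HDR, UDP_HDR, RTP_HDR = 1500, 20, 8, 12
      MAX_PAYLOAD = MTU - IP_HDR - UDP_HDR - RTP_HDR   # 1500 - 40 = 1460 bytes

      def rtp_packets_for_frame(frame_size: int) -> int:
          """Number of RTP packets needed for a frame (ceiling division)."""
          return -(-frame_size // MAX_PAYLOAD)

      def build_rtp_header(seq: int, timestamp: int, ssrc: int,
                           payload_type: int = 96, marker: bool = False) -> bytes:
          """Fixed 12-byte RTP header: version 2, no padding/extension/CSRC."""
          byte0 = 0x80                                  # V=2, P=0, X=0, CC=0
          byte1 = (0x80 if marker else 0) | (payload_type & 0x7F)
          return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF, timestamp, ssrc)

      # A hypothetical 40 KB H.264 keyframe would need 28 packets.
      print(MAX_PAYLOAD, rtp_packets_for_frame(40_000), len(build_rtp_header(1, 0, 0x1234)))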
  In live-broadcast applications, RTMP and HLS together can cover essentially all clients.
  The main drawback of HLS is its relatively large delay; the main advantage of RTMP is its low delay.
  1. Application scenarios
  Low-latency application scenarios include:
  . Interactive live broadcast: for example beauty streamers and game streaming.
  Streams from the various hosts are distributed to viewers, who can text-chat and interact with the host.
  . Video conferencing: for example, using video conferencing for internal meetings while on a business trip.
  In practice a delay of about 1 second does not matter much, because after someone finishes speaking the others need to think,
  and that thinking delay is also around 1 second. Of course it won't do if you use video conferencing to quarrel.
  . Others: surveillance and certain live-broadcast settings also have delay requirements,
  and the delay of the RTMP protocol over the Internet can basically meet them.
  2. RTMP and delay
  1. The features of RTMP are as follows:
  1) Adobe supports it very well:
  RTMP is in effect the current industry-standard protocol for encoder output; basically all encoders (cameras and the like) support RTMP output.
  The reason is that the PC market is huge, PCs mostly run Windows, Windows browsers basically all support Flash,
  and Flash supports RTMP very well.
  2) Suitable for long-duration playback:
  Because RTMP support is so complete, an RTMP stream can be played continuously for a very long time.
  A test at the time ran for 1 million seconds, i.e. more than 10 days of continuous playback.
  For commercial streaming applications, client stability is of course also essential; otherwise, how can end users watch if playback keeps failing?
  There was a customer in the education sector who initially used a player to play HTTP streams and needed to play different files, and problems kept occurring.
  After switching to having the server convert the different files into RTMP streams, the client could always play;
  once the customer adopted the RTMP solution with CDN distribution, the client side had no further problems.
  3) Low latency:
  Compared with YY's proprietary UDP protocol, RTMP is considered high-latency (a delay of 1-3 seconds),
  but compared with the latency of HTTP streaming (usually more than 10 seconds), RTMP counts as low-latency.
  For ordinary live-broadcast applications, as long as there is no telephone-conversation requirement, RTMP delay is acceptable.
  For ordinary video-conferencing applications, RTMP delay is also acceptable, because we usually listen while others are talking;
  in fact a 1-second delay doesn't matter, since we need time to think about what was said anyway (not everyone's mental CPU is that fast).
  4) There is cumulative delay:
  Every technology has weaknesses to be aware of. One weakness of RTMP is cumulative delay: because RTMP runs over TCP, it does not drop packets.
  So when network conditions are poor, the server buffers the packets, and the delay accumulates;
  when the network recovers, the backlog is sent to the client all at once.
  The countermeasure is for the client to disconnect and reconnect when its buffer grows too large.
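  A minimal sketch of that countermeasure, assuming a hypothetical player object with buffered_seconds/disconnect/connect methods (not a real library API):

      # Client-side watchdog: reconnect when the playout buffer grows too large,
      # which is where RTMP's cumulative delay shows up.
      MAX_BUFFER_SECONDS = 3.0   # illustrative latency budget

      def check_cumulative_delay(player) -> None:
          """Reconnect if buffered media exceeds the allowed latency budget."""
          if player.buffered_seconds() > MAX_BUFFER_SECONDS:
              player.disconnect()   # drop the backlog accumulated on a bad link
              player.connect()      # rejoin at the live edge of the stream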
  2. HLS low latency
  People keep asking how to reduce HLS delay.
  Solving latency with HLS is like climbing a tree to catch fish; the strange thing is that there are always people shouting, look, there are fish up there.
  What can one say?
  All I can say is that you are taking part in Brother Qian's magic show.
  If you are really sure low-latency HLS exists, please demonstrate it with actual measurements; refer to the delay measurement below.
  3. RTMP delay measurement
  Measuring delay accurately is a difficult problem.
  However, there is an effective method: use a mobile-phone stopwatch to compare the source with the playback, which gives a fairly accurate delay figure.
  Measurements show that when the network is in good condition:
  . RTMP delay can be about 0.8 seconds.
  . Multi-level edge nodes do not increase the delay (CDN edge servers based on the same SRS can achieve this).
  . Nginx-Rtmp's delay is somewhat larger, presumably caused by its cache handling and multi-process communication.
  . GOP size is a hard factor, but SRS can disable its GOP cache to avoid this impact.
  . If server performance is too low, the delay also increases, because the server cannot send data out fast enough.
  . The length of the client's buffer also affects the delay.
  For example, if the Flash client's NetStream.bufferTime is set to 10 seconds, the delay will be at least 10 seconds.
  4. GOP-Cache
  What is a GOP? It is the time interval between two I-frames in a video stream.
  What is the impact of GOP?
  Flash (the decoder) can only start decoding and playing once it has received a GOP;
  in other words, the server generally has to give Flash an I-frame first.
  Unfortunately, this is where the problem lies. Suppose the GOP is 10 seconds, i.e. there is a keyframe every 10 seconds.
  What if the user starts playing at the 5th second?
  The first scheme: wait for the next I-frame.
  In other words, wait another 5 seconds before starting to send data to the client.
  This keeps the delay very low, and the stream is always real-time.
  The problem: for those 5 seconds the screen stays black; the player just sits there showing nothing.
  Some users may think it has died and refresh the page.
  In short, some customers consider waiting for a keyframe an unforgivable error. Who cares about the delay?
  They just want the video to start playing quickly, ideally the moment it opens.
  The second option: start sending immediately.
  Send what?
  Naturally, send from the previous I-frame.
  In other words, the server must always cache one GOP,
  so that a newly connected client starts playing from the previous I-frame and can start quickly.
  The problem: the delay is naturally larger.
  Is there a better plan?
  Yes! There are at least two:
  The encoder can lower the GOP, for example to one GOP every 0.5 seconds; then the delay is also very low and there is no need to wait.
  The disadvantage is that the encoder's compression efficiency drops and the image quality is not as good.
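  To make the trade-off concrete, here is a minimal sketch of the GOP cache behind the second option above; the (is_keyframe, payload) frame representation is an assumption for illustration and does not reflect any particular server's API.

      from collections import deque

      class GopCache:
          """Keep every frame since the most recent I-frame (one GOP)."""

          def __init__(self):
              self.frames = deque()

          def on_frame(self, is_keyframe: bool, payload: bytes) -> None:
              if is_keyframe:
                  self.frames.clear()          # start a fresh GOP at every I-frame
              self.frames.append((is_keyframe, payload))

          def send_to_new_client(self, send) -> None:
              # Fast start: replay the cached GOP so the decoder gets an I-frame
              # immediately, at the cost of up to one GOP of extra delay.
              for _, payload in self.frames:
                  send(payload)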
  5. Cumulative delay
  Besides the GOP cache, there is another factor related to delay: cumulative delay.
  The server can configure the length of its live queue; the server puts data into this queue,
  and when the queue exceeds the configured length it is cleared back to the most recent I-frame (a minimal sketch follows below).
  Of course this length cannot be configured too small:
  for example, if the GOP is 1 second and queue_length is also 1 second, the data gets cleared every second, causing the picture to jump.
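  A minimal sketch of that queue-trimming rule, assuming frames are stored as (is_keyframe, duration, payload) tuples; the representation and the 5-second length are illustrative only.

      QUEUE_LENGTH = 5.0   # configured maximum queue duration, in seconds

      def trim_live_queue(queue):
          """queue: list of (is_keyframe, duration, payload) tuples, oldest first."""
          total = sum(duration for _, duration, _ in queue)
          if total <= QUEUE_LENGTH:
              return
          # Clear back to the most recent I-frame: drop everything older than it.
          for i in range(len(queue) - 1, -1, -1):
              if queue[i][0]:
                  del queue[:i]
                  break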
  Is there a better way? Yes.
  The cumulative delay is basically equal to the length of the client's buffer, because the delay mostly arises when network bandwidth is low:
  the server buffers the data and then sends it to the client in a burst, and what the client observes is its buffer growing.
  For example, with NetStream.BufferLength = 5 seconds, there are at least 5 seconds of data in the buffer.
  The best way to handle cumulative delay is for the client to detect that there is too much data in its buffer and, when possible, reconnect to the server.
  Of course, if the network stays bad, there is nothing to be done.
  CAMERA 4K modular camera CM-8420-SHE: 20x optical zoom, 16x digital zoom, 8.29-megapixel progressive-scan 1/1.8" CMOS, maximum resolution up to 3840x2160; face detection,
  area intrusion detection, line-crossing detection, region entering detection, region exiting detection, loitering detection, people-gathering detection, object-left detection, object-removal detection, audio anomaly detection, motion detection.
  For more information about Shenzhen Camorui electronic technology products, please visit our product website: http://www.cmr-cctv.com/
