
Wednesday, 22 November 2017


Time-Sensitive Networking (TSN) is a set of standards under development by the Time-Sensitive Networking task group of the IEEE 802.1 working group. The TSN task group was formed in November 2012 by renaming the existing Audio/Video Bridging Task Group and continuing its work. The name was changed to reflect the extended scope of the group's work. The standards define mechanisms for the time-sensitive transmission of data over Ethernet networks.

The majority of projects define extensions to IEEE 802.1Q - Virtual LANs. These extensions in particular address transmission with very low latency and high availability. Possible applications include converged networks with real-time audio/video streaming and real-time control streams, which are used in automotive or industrial control facilities.

Work is also currently being carried out in the AVnu Alliance's specially created Industrial group to define compliance and interoperability requirements for TSN network elements. To find out more about this initiative and about TSN standards in general, interested parties are invited to join the Industrial Advisory Council by contacting the AVnu administration.


Key components

The different TSN standards documents specified by IEEE 802.1 can be grouped into three basic key component categories that are required for a complete real-time communication solution. Each standard specification can be used on its own and is mostly self-sufficient; however, only when used together in a concerted way can TSN achieve its full potential as a communication system. The three basic components are:

  1. Time synchronization: All devices that are participating in real-time communication need to have a common understanding of time
  2. Scheduling and traffic shaping: All devices that are participating in real-time communication adhere to the same rules in processing and forwarding communication packets
  3. Selection of communication paths, path reservations and fault-tolerance: All devices that are participating in real-time communication adhere to the same rules in selecting communication paths and in reserving bandwidth and time slots, possibly utilizing more than one simultaneous path to achieve fault-tolerance

Time Synchronization

The name "Time-sensitive networking" is already quite descriptive in this regard: In contrast to standard Ethernet according to IEEE 802.3 and Ethernet bridging according to IEEE 802.1Q, time plays an important role in TSN networks. For real-time communication with hard, non-negotiable time boundaries for end-to-end transmission latencies, all devices in this network need to have a common time reference and therefore, need to synchronize their clocks among each other. This is not only true for the end devices of a communication stream, such as an industrial controller and a manufacturing robot, but also true for network components, such as Ethernet switches. Only through synchronized clocks, it is possible for all network devices to operate in unison and execute the required operation at exactly the required point in time.

Time synchronization in TSN networks can be achieved with different technologies. Theoretically, it is possible to outfit every end device and network switch with a GPS clock. However, this is costly, and there is no guarantee that the radio or GPS clock has access to the radio or satellite signal at all times - for example if the network is installed in a moving car, on a factory floor or in a tunnel deep beneath the surface of the earth. Due to these constraints, time in TSN networks is usually distributed from one central time source directly through the network itself. In most cases, this is done using the IEEE 1588 Precision Time Protocol, which utilizes Ethernet frames to distribute time synchronization information. In addition to the universally applicable IEEE 1588 specification, the Time-Sensitive Networking Task Group of the IEEE 802.1 committee has specified a profile of IEEE 1588, called IEEE 802.1AS-2011. The idea behind this profile is to narrow the huge list of IEEE 1588 options down to a manageable few critical options that are applicable to home networks or to networks in automotive or industrial automation environments.
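
The core idea of the IEEE 1588 delay request-response mechanism can be sketched briefly. The following Python snippet is a simplified illustration only (the function and the timestamp values are made up for the example, and a symmetric path delay is assumed); it computes a slave clock's offset and the mean path delay from the four standard timestamps:

    def ptp_offset_and_delay(t1, t2, t3, t4):
        """Simplified IEEE 1588 delay request-response calculation.

        t1: master sends Sync          (master clock)
        t2: slave receives Sync        (slave clock)
        t3: slave sends Delay_Req      (slave clock)
        t4: master receives Delay_Req  (master clock)
        Assumes the path delay is symmetric in both directions.
        """
        offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
        delay = ((t2 - t1) + (t4 - t3)) / 2.0   # mean one-way path delay
        return offset, delay

    # Example with made-up timestamps in nanoseconds:
    offset, delay = ptp_offset_and_delay(t1=1_000, t2=2_500, t3=4_000, t4=5_100)
    print(offset, delay)  # 200.0 ns offset, 1300.0 ns path delay

In a TSN network this correction is applied continuously, so that all bridges and end stations share the same notion of time.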

Scheduling and traffic shaping

Scheduling and traffic shaping allows for the coexistence of different traffic classes with different priorities on the same network - each with different requirements for available bandwidth and end-to-end latency. Standard bridging according to IEEE 802.1Q uses eight distinct priorities with a strict priority scheme. On the protocol level, these priorities are visible in the 802.1Q VLAN tag of a standard Ethernet frame. These priorities already allow distinguishing between more important and less important network traffic, but even with the highest of the eight priorities, no absolute guarantee for an end-to-end delivery time can be given. The reason for this is buffering effects inside the Ethernet switches. If a switch has started the transmission of an Ethernet frame on one of its ports, even the highest-priority frame has to wait inside the switch buffer for this transmission to finish. With standard Ethernet switching, this non-determinism cannot be avoided. This is not an issue in environments where applications do not depend on the timely delivery of single Ethernet frames - such as office IT infrastructures. In these environments, file transfers, emails and other business applications have limited time sensitivity themselves and are usually protected by other mechanisms further up the protocol stack, such as the Transmission Control Protocol. In industrial automation and automotive environments, however, where closed-loop control or safety applications use the Ethernet network, reliable and timely delivery is of utmost importance. For Ethernet to be used here, the strict priority scheduling of IEEE 802.1Q needs to be enhanced.
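
For illustration, the following Python sketch packs the 16-bit Tag Control Information field of an IEEE 802.1Q VLAN tag, in which the 3-bit Priority Code Point carries one of the eight priorities mentioned above (an illustrative helper only, not part of any standard API):

    def build_vlan_tci(pcp, dei, vlan_id):
        """Pack the 16-bit 802.1Q Tag Control Information field.

        pcp:     3-bit Priority Code Point (0-7), the frame's priority
        dei:     1-bit Drop Eligible Indicator
        vlan_id: 12-bit VLAN identifier (0-4095)
        """
        assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
        return (pcp << 13) | (dei << 12) | vlan_id

    # Priority 3 traffic on VLAN 100:
    tci = build_vlan_tci(pcp=3, dei=0, vlan_id=100)
    print(f"TCI = 0x{tci:04x}")  # TCI = 0x6064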

Different time slices for different traffic classes - the IEEE 802.1Qbv time-aware scheduler

TSN enhances standard Ethernet communication by adding mechanisms to ensure timely delivery with soft and hard real-time requirements. The mechanism of utilizing the eight distinct VLAN priorities is retained to ensure complete backwards compatibility with non-TSN Ethernet. This has always been one of the design principles of the IEEE 802 group when developing Ethernet further: maintain backwards compatibility to preserve interoperability with the existing infrastructure and to allow a seamless migration towards new technologies.

With TSN, a user can select, for each of the eight priorities, which mechanism is used to process Ethernet frames: priorities can be individually assigned either to already existing methods (such as the IEEE 802.1Q strict priority scheduler) or to new processing methods, such as the TSN IEEE 802.1Qbv time-aware traffic scheduler.

A typical use case for TSN is the communication of a Programmable Logic Controller (PLC) with an industrial robot through an Ethernet network. To achieve transmission times with guaranteed end-to-end latency that can support the closed-loop control operating between the PLC and the robot, one or several of the eight Ethernet priorities can be assigned to the IEEE 802.1Qbv time-aware scheduler. This scheduler is designed to separate the communication on the Ethernet network into fixed-length, repeating time cycles. Within these cycles, different time slices can be configured and assigned to one or several of the eight Ethernet priorities. By doing this, it is possible to grant exclusive use of the Ethernet transmission medium - for a limited time - to those traffic classes that need transmission guarantees and cannot be interrupted. The basic concept is a time-division multiple access (TDMA) scheme. By establishing virtual communication channels for specific time periods, time-critical communication can be separated from non-critical background traffic. By granting time-critical traffic classes exclusive access to the transmission medium and devices, the buffering effects in the Ethernet switch transmission buffers can be avoided and time-critical traffic can be transmitted without non-deterministic interruptions. One example of an IEEE 802.1Qbv scheduler configuration is visible in figure 1:

In this example, each cycle consists of two time slices. Time slice 1 only allows the transmission of traffic tagged with VLAN priority 3, and time slice 2 in each cycle allows for the rest of the priorities to be sent. Since the IEEE 802.1Qbv scheduler requires all clocks on all network devices (Ethernet switches and end devices) to be synchronized and the identical schedule to be configured, all devices understand which priority can be sent to the network at any given point in time. Since time slice 2 has more than one priority assigned to it, within this time slice, the priorities are handled according to standard IEEE 802.1Q strict priority scheduling.
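
The cycle and time-slice structure of this example can be expressed as a simple gate control list. The following Python sketch is an illustration only (the cycle time and slice durations are assumed example values, not taken from the standard); it maps a point in time onto the set of priorities whose transmission gate is currently open:

    CYCLE_TIME_US = 1000  # assumed example: a 1 ms cycle

    # Each entry: (duration in microseconds, set of VLAN priorities whose gate is open)
    GATE_CONTROL_LIST = [
        (250, {3}),                    # time slice 1: only priority 3 (critical traffic)
        (750, {0, 1, 2, 4, 5, 6, 7}),  # time slice 2: all remaining priorities
    ]

    def open_gates(t_us):
        """Return the set of priorities allowed to transmit at time t (in microseconds)."""
        t = t_us % CYCLE_TIME_US
        for duration, priorities in GATE_CONTROL_LIST:
            if t < duration:
                return priorities
            t -= duration
        return set()

    print(open_gates(100))  # {3}                    -> inside time slice 1
    print(open_gates(600))  # {0, 1, 2, 4, 5, 6, 7}  -> inside time slice 2

Within time slice 2, the priorities listed there are still handled by the standard strict priority scheduler, as described above.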

This separation of Ethernet transmissions into cycles and time slices can be enhanced further by the inclusion of other scheduling or traffic shaping algorithms, such as the Audio/Video Bridging traffic shaper IEEE 802.1Qav, which supports soft real-time traffic. In this particular example, IEEE 802.1Qav could be assigned to one or two of the priorities used in time slice 2 to distinguish further between audio/video traffic and background file transfers. The IEEE 802.1 Time-Sensitive Networking Task Group specifies a number of different schedulers and traffic shapers that can be combined to achieve the nonreactive coexistence of hard real-time, soft real-time and background traffic on the same Ethernet infrastructure.

IEEE 802.1Qbv in more detail: Time slices and guard bands

When an Ethernet interface has started the transmission of a frame onto the transmission medium, this transmission has to be completely finished before another transmission can take place. This includes the transmission of the CRC32 checksum at the end of the frame, which ensures a reliable, fault-free transmission. This inherent property of Ethernet networks - again - poses a challenge to the TDMA approach of the IEEE 802.1Qbv scheduler. This is visible in figure 2:

Just before the end of time slice 2 in cycle n, a new frame transmission is started. Unfortunately, this frame is too large to fit into its time slice. Since the transmission of this frame cannot be interrupted, the frame infringes on the following time slice 1 of the next cycle n+1. By partially or completely blocking a time-critical time slice, real-time frames can be delayed to the point where they can no longer meet the application requirements. This is very similar to the buffering effects that happen in non-TSN Ethernet switches, so TSN has to specify a mechanism to prevent this from happening.

The IEEE 802.1Qbv time-aware scheduler has to ensure that the Ethernet interface is not busy with the transmission of a frame when the scheduler changes from one time slice into the next. The time-aware scheduler achieves this by placing a guard band in front of every time slice that carries time-critical traffic. During this guard band time, no new Ethernet frame transmission may be started; only already ongoing transmissions may be finished. The duration of this guard band has to be as long as the time it takes to safely transmit a maximum-size frame. For an Ethernet frame according to IEEE 802.3 with a single IEEE 802.1Q VLAN tag and including the interframe spacing, the total length is: 1518 byte (frame) + 4 byte (VLAN tag) + 12 byte (interframe spacing) = 1534 byte.

The total time needed for sending this frame is dependent on the link speed of the Ethernet network. With Fast Ethernet and a 100 Mbit/s transmission rate, the transmission duration is as follows:

t_{maxframe} = \frac{1534\ \mathrm{byte}}{12.5 \cdot 10^{6}\ \mathrm{byte/s}} = 122.72 \cdot 10^{-6}\ \mathrm{s}

In this case, the guard band has to be at least 122.72µs long. With the guard band, the total bandwidth / time that is usable within a time slice is reduced by the length of the guard band. This is visible in figure 3:

Note: To facilitate the presentation of the topic, the guard band in figure 3 is not drawn to scale; it appears significantly smaller than the frame size in figure 2 would indicate.

In this example, time slice 1 always contains high-priority data (e.g. for motion control), while time slice 2 always contains best-effort data. Therefore, a guard band needs to be placed at every transition point into time slice 1 to protect the time slice of the critical data stream(s).
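
The same calculation can be repeated for other link speeds. The following Python sketch is an illustration only (the helper name is made up); it computes the minimum guard band duration needed to cover a maximum-size frame:

    def guard_band_us(link_speed_mbps, max_frame_bytes=1534):
        """Time in microseconds to transmit a maximum-size frame
        (1518 B frame + 4 B VLAN tag + 12 B interframe spacing = 1534 B)
        at the given link speed."""
        bytes_per_us = link_speed_mbps / 8  # e.g. 100 Mbit/s -> 12.5 byte/us
        return max_frame_bytes / bytes_per_us

    print(guard_band_us(100))   # 122.72 us on Fast Ethernet
    print(guard_band_us(1000))  # 12.272 us on Gigabit Ethernet

The faster the link, the smaller the guard band, which is why the bandwidth loss described below weighs most heavily on 100 Mbit/s and slower links.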

While the guard bands protect the time slices carrying high-priority, critical traffic, they also have significant drawbacks:

  • The time that is consumed by a guard band is lost - it cannot be used to transmit any data, as the Ethernet port needs to be silent. The lost time therefore directly translates into lost bandwidth for background traffic on that particular Ethernet link.
  • A single time slice can never be configured smaller than the size of the guard band. Especially with lower speed Ethernet connections and growing guard band size, this has a negative impact on the lowest achievable time slice length and cycle time.

To partially mitigate the loss of bandwidth through the guard band, the standard IEEE 802.1Qbv includes a length-aware scheduling mechanism. This mechanism is used when store-and-forward switching is utilized: after the full reception of an Ethernet frame that needs to be transmitted on a port where the guard band is in effect, the scheduler checks the overall length of the frame. If the frame can fit completely inside the guard band, without any infringement of the following high priority slice, the scheduler can send this frame, despite an active guard band, and reduce the waste of bandwidth. This mechanism, however, cannot be used when cut-through switching is enabled, since the total length of the Ethernet frame needs to be known a priori. Therefore, when cut-through switching is used to minimize end-to-end latency, the waste of bandwidth will still occur. Also, this does not help with the minimum achievable cycle time. Therefore, length-aware scheduling is an improvement, but cannot mitigate all drawbacks that are introduced by the guard band.
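
The length-aware check described above can be sketched as follows (a simplified illustration under the stated assumptions; the function name and parameters are made up):

    def may_transmit_in_guard_band(frame_bytes, remaining_guard_band_us,
                                   link_speed_mbps, interframe_gap_bytes=12):
        """Return True if a fully received frame can still be sent during the
        guard band without infringing on the next high-priority time slice."""
        bytes_per_us = link_speed_mbps / 8
        tx_time_us = (frame_bytes + interframe_gap_bytes) / bytes_per_us
        return tx_time_us <= remaining_guard_band_us

    # Fast Ethernet link with 10 us of guard band remaining:
    print(may_transmit_in_guard_band(64, 10, 100))    # True  (~6.1 us needed)
    print(may_transmit_in_guard_band(1518, 10, 100))  # False (~122 us needed)

The check requires the full frame length up front, which is why it works with store-and-forward switching but not with cut-through switching, as explained above.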

Frame pre-emption and minimizing the guard band

To further mitigate the negative effects of the guard bands, the IEEE working groups 802.1 and 802.3 have specified the frame pre-emption technology. The two working groups collaborated in this endeavour, since the technology required both changes in the Ethernet Media Access Control (MAC) scheme, which is under the control of IEEE 802.3, and changes in the management mechanisms, which are under the control of IEEE 802.1. Due to this fact, frame pre-emption is described in two different standards documents: IEEE 802.1Qbu for the bridge management component and IEEE 802.3br for the Ethernet MAC component.

Figure 4 gives a basic example of how frame pre-emption works. During the process of sending a best-effort Ethernet frame, the MAC interrupts the frame transmission just before the start of the guard band. The partial frame is completed with a CRC and is stored in the next switch, which waits for the second part of the frame to arrive. After the high-priority traffic in time slice 1 has passed and the cycle switches back to time slice 2, the interrupted frame transmission is resumed. Frame pre-emption always operates on a pure link-by-link basis: frames are only fragmented from one Ethernet switch to the next Ethernet switch, where the frame is reassembled. In contrast to fragmentation with the Internet Protocol (IP), no end-to-end fragmentation is supported.

Each partial frame is completed by a CRC32 for error detection. In contrast to the regular Ethernet CRC32, the last 16 bits are inverted to make a partial frame distinguishable from a regular Ethernet frame. In addition, the start of frame delimiter (SFD) is changed as well.

The support for frame pre-emption has to be activated on each link between devices individually. To signal the capability for frame pre-emption on a link, an Ethernet switch announces this capability through the LLDP (Link Layer Discovery Protocol). When a device receives such an LLDP announcement on a network port and supports frame pre-emption itself, it may activate the capability. There is no direct negotiation and activation of the capability between adjacent devices. Any device that receives the LLDP pre-emption announcement assumes that a device is present on the other end of the link that can understand the changes in the frame format (the changed CRC32 and SFD).

Frame pre-emption allows for a significant reduction of the guard band. The length of the guard band now depends on the precision of the frame pre-emption mechanism: how small the smallest frame is that the mechanism can still pre-empt. IEEE 802.3br specifies the best accuracy for this mechanism at 64 byte, due to the fact that this is the minimum size of a still valid Ethernet frame. In this case, the guard band can be reduced to a total of 127 byte: 64 byte (minimum frame) + 63 byte (remaining length that cannot be pre-empted). All larger frames can be pre-empted, and therefore there is no need to protect against them with a guard band.
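
The 127-byte figure follows directly from the 64-byte minimum fragment size, as the following Python sketch illustrates (an illustration only; the names are made up):

    MIN_FRAGMENT_BYTES = 64  # smallest still-valid Ethernet frame / fragment

    def can_preempt(frame_bytes):
        """A frame can only be split if both resulting fragments can be
        at least MIN_FRAGMENT_BYTES long."""
        return frame_bytes >= 2 * MIN_FRAGMENT_BYTES

    # The guard band only needs to cover the largest frame that can NOT be
    # pre-empted: 64 byte (minimum frame) + 63 byte remainder = 127 byte.
    guard_band_bytes = 2 * MIN_FRAGMENT_BYTES - 1
    print(guard_band_bytes)   # 127
    print(can_preempt(127))   # False -> must fit entirely inside the guard band
    print(can_preempt(128))   # True  -> can be split, no guard band needed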

This minimizes the best-effort bandwidth that is lost and also allows for much shorter cycle times at slower Ethernet speeds, such as 100 Mbit/s and below. Since the pre-emption takes place in hardware in the MAC as the frame passes through, cut-through switching can be supported as well, since the overall frame size does not need to be known a priori. The MAC interface simply checks at regular 64-byte intervals whether the frame needs to be pre-empted or not.

The combination of time synchronization, the IEEE 802.1Qbv scheduler and frame pre-emption already constitutes an effective set of standards that can be utilized to guarantee the coexistence of different traffic categories on a network while also providing end-to-end latency guarantees. This will be enhanced further as new IEEE 802.1 specifications, such as 802.1Qch, are finalized.

Selection of communication paths, reservation and fault-tolerance

TSN technology, especially the time-aware scheduler according to IEEE 802.1Qbv, has been developed for use in mission-critical network environments. In these networks, not only are timing guarantees relevant, but fault-tolerance is as well. Networks that support applications such as safety-relevant control loops or autonomous driving in vehicles have to be protected against faults in hardware or network media. The TSN task group is currently specifying the fault-tolerance protocol IEEE 802.1CB for this purpose. In addition to this protocol, existing high-availability protocols such as HSR or PRP, which are specified in IEC 62439-3, can be utilized.

To register fault-tolerant communication streams across a network, path control and reservation as specified in IEEE 802.1Qca, manual configuration, or vendor-specific solutions can be used.

In the currently ongoing project IEEE 802.1Qcc, the TSN task group focuses on the definition of management interfaces and protocols to enable TSN network administration on large-scale networks. Three different aspects are discussed here, with both a decentralized approach and a fully centralized approach that re-uses configuration concepts from software-defined networking (SDN). The current discussion can be followed through the public document archive of IEEE 802.1.



External links

  • IEEE 802.1 Time-Sensitive Networking Task Group
  • [1] - Real-time Ethernet - redefined (in German)

Source of the article: Wikipedia
