The Real Deal with Real-Time Ethernet
Ethernet is not an ideal bus for industrial communication and synchronization. It does not provide power, it is not particularly rugged, and the typical protocols that run on it are subject to latency and jitter. Yet Ethernet is everywhere, and its widespread adoption and pervasive infrastructure entice many engineers to adopt it as their bus of choice for industrial communication. This ubiquity has spawned many protocols and schemes that attempt to make Ethernet more deterministic and, thus, better suited for industrial applications. These schemes include soft real-time protocols, such as EtherNet/IP, and hard real-time protocols, such as SERCOS III, which is used for distributed motion control. Additionally, the emergence of the IEEE 1588 Precision Time Protocol (PTP) makes it feasible to use Ethernet for synchronized distributed applications. IEEE 1588 provides a standard method to synchronize devices on a network with sub-microsecond precision. This article does not debate the pros and cons of Ethernet for industrial applications, such as machine vision, nor does it compare the new GigE Vision standard for Gigabit Ethernet cameras to other vision buses, such as IEEE 1394 and Camera Link. Instead, it accepts the pervasiveness of Ethernet and explores the ways in which engineers have overcome its inherent latency and jitter to implement deterministic communication between industrial devices.
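The synchronization math behind IEEE 1588 is worth a quick sketch. In the standard's delay request-response exchange, four timestamps let a slave estimate both its clock offset from the master and the one-way path delay, assuming a symmetric path. The timestamp values below are hypothetical; a real PTP implementation captures them in hardware close to the wire.

```python
# Minimal sketch of the IEEE 1588 delay request-response calculation.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes the forward and return path delays are symmetric."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

# Example: slave clock runs 50 us ahead of the master, true path delay 10 us.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=60e-6, t3=100e-6, t4=60e-6)
```

Any asymmetry between the two path directions shows up directly as offset error, which is why sub-microsecond precision in practice depends on hardware timestamping and symmetric cabling.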
Latencies Limit Distributed Applications
Effective communication between industrial devices, such as automation controllers, vision systems, and smart motor drives, is the foundation of a flexible and efficient industrial process. Because these industrial devices are physically separate, engineers must consider the time it takes to transfer data between them. Moreover, when this latency is unknown or inconsistent, devices on the network must wait for data to be sent or received. This latency depends on a number of factors, including available bandwidth, network traffic, the number of Ethernet devices, and the Ethernet protocol chosen. In many applications, engineers can work around this issue by using techniques such as buffering to compensate for these limitations. For example, consider the buffering that takes place when streaming video or audio across the Internet to make up for the inconsistent transfer rate. While buffering data is suitable for casual multimedia applications, it is unacceptable for most industrial control applications, such as motion control. To meet the requirements of these control applications, the network itself must have a real-time (deterministic) response.
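The streaming workaround mentioned above can be made concrete. A jitter buffer delays playback by a fixed margin so that irregular packet arrivals become a regular output stream; the function and timing values below are illustrative, not taken from any particular media stack.

```python
# Minimal jitter-buffer sketch: packets arrive with jitter, and playback is
# delayed by a fixed buffer margin so the output stays on a regular period.
def playback_times(arrivals, period, buffer_delay):
    """Schedule packet i at first_arrival + buffer_delay + i * period,
    but never before the packet has actually arrived."""
    start = arrivals[0]
    out = []
    for i, arrival in enumerate(arrivals):
        scheduled = start + buffer_delay + i * period
        out.append(max(scheduled, arrival))  # a late packet causes a glitch
    return out

# 10 ms nominal period, arrivals jittered by several ms, 20 ms of buffering.
arrivals = [0.000, 0.013, 0.019, 0.034, 0.041]
smooth = playback_times(arrivals, period=0.010, buffer_delay=0.020)
```

The 20 ms margin absorbs the jitter entirely here, but that margin is precisely the added latency a motion controller cannot tolerate.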
Inherent Non-Determinism of Standard Ethernet
The unknown latency characteristics of Ethernet networks are due to the inherent design of the IEEE 802.3 Ethernet standard. For many devices to transmit data on a single network, Ethernet uses the carrier sense multiple access with collision detection (CSMA/CD) mechanism. This mechanism mandates that a network device must wait until no other device is transmitting before it can begin its own transmission. However, this does not prevent other devices from transmitting at the same moment, a condition commonly known as a network collision. Per the standard, a device knows when a collision occurs and waits for a random period of time before attempting to retransmit. Furthermore, high-level Ethernet protocols, such as the commonly used Transmission Control Protocol (TCP), introduce additional handshaking to ensure that data arrives. Senders of TCP data wait until the receiver sends a positive acknowledgement (ACK). If the ACK is not received within a timeout period, the sender retransmits the data. Both the CSMA/CD access mechanism and the flow control methods used by transmission protocols introduce timing uncertainties to Ethernet networks that make them inherently nondeterministic.
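The random retransmission wait is specified by 802.3 as truncated binary exponential backoff, and a short sketch shows exactly where the nondeterminism comes from: the delay after the n-th consecutive collision is a random multiple of the slot time, with a range that doubles on each attempt.

```python
# Sketch of 802.3 truncated binary exponential backoff. After the n-th
# consecutive collision, a station waits r slot times, where r is drawn
# uniformly from 0 .. 2**min(n, 10) - 1; it gives up after 16 attempts.
import random

SLOT_TIME_S = 51.2e-6  # slot time for 10 Mbit/s Ethernet (512 bit times)

def backoff_delay(attempt, rng=random):
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    r = rng.randrange(2 ** min(attempt, 10))
    return r * SLOT_TIME_S
```

Even with only two colliding stations, each retry's delay is unpredictable by design, which is exactly what a scheduled real-time network must eliminate.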
If determinism between industrial devices is so important, why are engineers migrating away from real-time buses, such as Profibus and DeviceNet, in favor of Ethernet? The answer lies chiefly in Ethernet's existing infrastructure and its clear performance gains over those conventional industrial buses.
Making Ethernet Deterministic
To overcome the non-deterministic nature of Ethernet, engineers must place strict timing rules on the network. While there are several implementations of real-time Ethernet, most of them share two common elements: a schedule that dictates exactly when each node may transmit, and a shared time base that keeps every node synchronized to that schedule.
One approach to implementing real-time Ethernet is incorporated in the latest version of National Instruments LabVIEW, a graphical programming environment for industrial measurement and control. LabVIEW uses a technology called the time-triggered network for deterministically transferring data across Ethernet. With the time-triggered network, two or more LabVIEW targets can transfer data deterministically across a private Ethernet network using standard network interface hardware. As shown in Figure 2, each device is also connected to a public network for normal network traffic and communication with non-real-time nodes. This two-wire topology provides redundancy to communications applications. Other deterministic Ethernet protocols, such as Ethernet Powerlink, divide the network cycle so that regular traffic, such as TCP/IP, occurs after the deterministically scheduled packets are sent and received. This method requires less wiring between real-time nodes, but typically requires a gateway to schedule traffic coming from outside the real-time network.
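The cycle-splitting idea can be sketched as a simple time budget. The function and node names below are hypothetical, but the structure mirrors the Powerlink-style cycle described above: scheduled transfers are packed at the start of the cycle, and whatever time remains becomes the window for asynchronous traffic.

```python
# Hypothetical network-cycle budget: deterministic slots first, and the
# remainder of the cycle is opened to asynchronous (e.g. TCP/IP) traffic.
def build_cycle(cycle_us, slots):
    """slots: list of (node, duration_us) scheduled transfers, in order.
    Returns (schedule, async_window_us); raises if the slots overrun."""
    t, schedule = 0, []
    for node, dur in slots:
        if t + dur > cycle_us:
            raise ValueError(f"slot for {node} overruns the {cycle_us} us cycle")
        schedule.append((node, t, t + dur))  # (node, start_us, end_us)
        t += dur
    return schedule, cycle_us - t

# A 1 ms cycle with three scheduled transfers leaves 400 us for async traffic.
schedule, async_us = build_cycle(1000, [("drive", 200), ("vision", 300), ("plc", 100)])
```

Because the asynchronous window is bounded, outside traffic can never push a scheduled packet past its deadline, which is the property the gateway enforces.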
Scheduling Data Transfer with Shared Memory Blocks
One method of transferring data across a deterministic Ethernet network uses network scheduling through a shared memory scheme. In this scheme, every device on the network allocates a block of memory for every other device on the network, including itself. Shared memory blocks take all the data from one device and send it to all other devices as one packet. Figure 3 represents a shared memory network of three nodes. Node A can only write to the A memory block, Node B can only write to the B memory block, and so on. In every network cycle, the data written by one node is sent, or "reflected," to all other nodes on the network. For the shared memory network to be deterministic, the engineer must schedule the reflection of each shared memory block so they do not overlap. To configure the network, one node is designated as the master node while all other nodes are slaves. At the start of each cycle, the master node sends a cycle start packet to the slave nodes. Once the cycle start packet ends, the nodes transmit their shared memory blocks according to the schedule. One advantage of this shared memory method is that each block can hold multiple shared variables or data items. Thus, the transfer of one memory block can potentially distribute several pieces of data that might otherwise each need a separate handshaking sequence to get from one node to another. This use of the memory block reduces the overhead per transfer.
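The reflection step can be modeled in a few lines. The class below is a toy in-memory model, not the article's implementation: each node owns exactly one block, and one "cycle" copies every owner's block to every node's local table, which is why a single block can carry several variables at once.

```python
# Toy model of the shared-memory ("reflected") scheme from Figure 3.
class ReflectiveNetwork:
    def __init__(self, node_names):
        # Every node holds a local copy of every block, including its own.
        self.nodes = {n: {m: None for m in node_names} for n in node_names}

    def write(self, node, data):
        self.nodes[node][node] = data  # a node may write only its own block

    def reflect(self):
        """One network cycle: each owner's block is sent to all other nodes."""
        for owner in self.nodes:
            block = self.nodes[owner][owner]
            for node in self.nodes:
                self.nodes[node][owner] = block

net = ReflectiveNetwork(["A", "B", "C"])
net.write("A", {"setpoint": 1500, "enable": True})  # several items, one block
net.reflect()
```

After `reflect()`, Nodes B and C both hold A's block, so two shared variables crossed the network in a single scheduled transfer.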
Closing Control Loops with Dedicated Slot Variables
An alternate way of transferring data deterministically is with the dedicated slot method. When using this method, an engineer explicitly schedules when each piece of data transfers during the network cycle. The main advantage of this method is that a loop can be closed during a single network cycle. Figure 4 shows an example of a distributed motion controller. In a practical implementation, the system can reliably close a control loop even when the network is busy. In the Figure 4 example, the deterministic network requires time during the network cycle to maintain clock synchronization. During this period, the system can read the current count of a motor's encoder. This position feedback then transfers across Ethernet to the trajectory controller, which calculates the positional error and transmits a correction command back across the network before the network cycle completes. Engineers use this method for closing any type of control loop across the network, allowing a decentralized system for control that reduces overall wiring and complexity.
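The computation the trajectory controller performs inside that single cycle can be sketched as follows. The proportional-only control law, gain, and count values here are hypothetical stand-ins for whatever the real controller runs; the point is that one feedback-then-command exchange fits within one network cycle.

```python
# Sketch of the Figure 4 exchange: encoder feedback arrives over the
# network, the trajectory controller computes the positional error, and a
# correction command is sent back before the cycle ends.
KP = 0.5  # proportional gain (illustrative value)

def cycle_step(target_counts, encoder_counts):
    """One network cycle's worth of control work on the trajectory node."""
    error = target_counts - encoder_counts   # positional error, in counts
    correction = KP * error                  # command sent back over Ethernet
    return error, correction

# Encoder reads 9960 counts against a 10000-count setpoint.
error, correction = cycle_step(target_counts=10000, encoder_counts=9960)
```

Because both the feedback slot and the command slot are fixed in the schedule, the loop's total delay is known at design time, which is what makes stable gains easy to choose.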
Deterministic Network Challenges
The methods above eliminate the latency and jitter caused by packet collisions, but engineers still face challenges when packets are corrupted due to noise. When a packet is damaged, the system must retransmit it, which spoils the predefined schedule of a real-time network. Engineers can reduce the odds of lost packets by incorporating redundant data transmission into the schedule, but this reduces the overall network throughput. Consequently, deterministic Ethernet protocols must include robust error checking to alert nodes of possible data loss, and users must decide how to handle data loss situations. From the technological advances and early successes of deterministic Ethernet, it is easy to forget that the journey toward standardized, real-time Ethernet is just beginning. For instance, while the new GigE Vision standard overcomes many of Ethernet's inadequacies as a general-purpose peripheral bus, it does not include any provisions for deterministic image transfer, including collision avoidance. With that said, the existing infrastructure and clear performance gains of Ethernet compared to conventional industrial buses herald a bright future for Ethernet as a viable, real-time bus for industrial control and communication.
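The redundancy trade-off mentioned above is easy to quantify. Assuming independent packet corruptions at probability p, sending each packet k times within the schedule shrinks the residual loss probability to p to the k-th power, while dividing the usable throughput by k; the numbers below are illustrative.

```python
# Back-of-the-envelope redundancy trade-off, assuming independent losses.
def redundancy_tradeoff(p_loss, k, raw_throughput):
    """Send each packet k times: loss drops to p**k, throughput drops by k."""
    residual_loss = p_loss ** k
    effective_throughput = raw_throughput / k
    return residual_loss, effective_throughput

# A 1-in-1000 corruption rate, duplicated packets, 100 Mbit/s of payload.
loss, tput = redundancy_tradeoff(p_loss=1e-3, k=2, raw_throughput=100e6)
```

Duplication buys three orders of magnitude in reliability at the cost of half the bandwidth, which is why the decision is left to the user rather than baked into the protocol.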
Kyle Voosen manages the machine vision product line at National Instruments. He holds a BS in electrical engineering from Rice University.