Network Infrastructure in Large, Integrated Control Systems


Transcript of Network Infrastructure in Large, Integrated Control Systems

Page 1: Network Infrastructure in Large, Integrated Control Systems

Copyright © 2011 Rockwell Automation, Inc. All rights reserved.

Rockwell Automation

Process Solutions User Group (PSUG)

November 14-15, 2011

Chicago, IL – McCormick Place West

Network Infrastructure in

Large, Integrated Control

Systems

Presented By: Kenny Martin

Senior Process Programmer,

Holcim US

Page 2: Network Infrastructure in Large, Integrated Control Systems

Introduction

• Holcim Ste Genevieve
– Operation began in 2009

– World’s Largest Single Line Kiln

– Over 4 million metric tons/year of cement

– Kiln and Preheater tower, 2 Raw Mills, 4 Finish Mills, 2 Coal Mills

Page 3: Network Infrastructure in Large, Integrated Control Systems

PCS Overview

• ControlLogix 5000 L63 Processors
– 18 Processors in Main PCS

– 13 Sub-Control Systems

• FactoryTalk RSView SE
– 4 Data Servers, 3 HMI Servers

– 24 Clients in Main Control Room

– 150,000 Data connections

– 50,000+ Data connections in Panel Views, Local Touch Screens

– Rockwell Cement Library

• Network
– Processors on Ethernet

– Flex I/O on CNET

– IntelliCENTERs, PowerFlex VFDs on DNET

Page 4: Network Infrastructure in Large, Integrated Control Systems

Overview

• Network Hardware Infrastructure
– Ethernet, Switches, Security, Stability

• Network Software Infrastructure
– Active Directory, Operating Systems, Vulnerabilities

• PLC/HMI Infrastructure
– Processors, Distributed I/O, Communications

Page 5: Network Infrastructure in Large, Integrated Control Systems

System Challenges

• Large Integrated Control System
– Large numbers of remote I/O points and data connections

– Centralized control room

– Networks spread out over long distances

– Heavy communication loads between Processors, HMI, OPC

• End User Interfaces
– Trending, Alarms, and Events

– Interface between Main PCS and Subsystems

Page 6: Network Infrastructure in Large, Integrated Control Systems

Hardware Infrastructure

• Ethernet Networks
– Large control systems benefit from the speed and cost of an integrated Ethernet network

– Managing Ethernet Traffic

• Multicast traffic can disrupt unicast communication between PLCs and HMI Servers

• Managed switches with IGMP snooping may be necessary to limit the data buffering, broadcast, and multicast traffic that can slow down the Ethernet network

– Media Converters

• Small unmanaged switches can create buffer points at fiber-copper conversion points

• These can result in dropped Ethernet traffic – use true media converters or direct fiber Ethernet cards

Page 7: Network Infrastructure in Large, Integrated Control Systems

Hardware Infrastructure

• Network Security at the Hardware Level

– The PCS network should be isolated from networks that are non-critical to production and from computers connected to the internet

• This can be achieved by either physically segmenting the network or doing so virtually with VLANs on a Layer 3 switch

– Use a DMZ and firewall with restricted port access to allow minimal communication between PCS and DMZ

– Documented rules for PCS access and restrictions

• USB memory sticks can be easy channels for viruses and malware to enter the control system

• Access to PCS computers should be restricted to those needing direct access for production control – all other non-production-critical business functions or data analysis can be done from another network
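The documented access rules above lend themselves to an automated audit. A minimal sketch in Python (host names, port numbers, and the rule format are hypothetical examples, not from the presentation) that flags observed PCS-to-DMZ connections not covered by a documented rule:

```python
# Audit observed PCS<->DMZ connections against a documented rule whitelist.
# Hosts, ports, and the rule format here are illustrative placeholders.
ALLOWED_RULES = {
    ("pcs-historian", "dmz-opc-gw", 135),    # DCOM endpoint mapper
    ("pcs-historian", "dmz-opc-gw", 4840),   # OPC UA
}

def audit(observed):
    """Return connections that are not covered by a documented rule."""
    return [conn for conn in observed if conn not in ALLOWED_RULES]

observed = [
    ("pcs-historian", "dmz-opc-gw", 4840),   # documented -> allowed
    ("pcs-eng-ws", "dmz-file-srv", 445),     # undocumented -> flagged
]
violations = audit(observed)
print(violations)  # [('pcs-eng-ws', 'dmz-file-srv', 445)]
```

Keeping the rule set in a machine-readable form makes the documented restrictions testable rather than just written down.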

Page 8: Network Infrastructure in Large, Integrated Control Systems

Hardware Infrastructure

• Stability and Redundancy
– Utilize teamed NICs on servers and EtherChannels on managed switches to optimize network traffic and provide cabling redundancy

– “Stacked” switches provide redundancy at the switch level

• Losing one physical switch will not crash the entire network; the remaining switches have parallel communication channels through the high-speed “stack” backbone

– Combine NIC teaming, a primary/secondary server pair, EtherChannels, and stacked switches to provide full redundancy at all levels of the network

Page 9: Network Infrastructure in Large, Integrated Control Systems

Hardware Infrastructure

Page 10: Network Infrastructure in Large, Integrated Control Systems

Software Infrastructure

• Understand the software requirements of the control system before setup of servers and clients
– Many control system software packages now rely on Windows and need specific settings for the registry, IIS, etc.

– The operating system, PCS software installation, and patches for each must be installed in a particular order to ensure proper settings

• Faults of this nature can produce spurious loss of communication and can lead to downtime to reload the OS and software – closely follow setup guidelines to avoid these issues

• As an example, if IIS is not configured properly, HMI servers may not communicate properly with clients. Because the HMI software relies on registry settings made when IIS is configured, the solution requires a clean installation of the OS and the HMI software on the server, a time-consuming task
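The required install order can be derived mechanically from a dependency list. A sketch using Python's standard `graphlib`, with illustrative component names rather than the vendor's actual prerequisites (always follow the PCS setup guide for the real order):

```python
# Derive a valid install order from documented dependencies.
# Component names and edges are made-up examples of the idea that the OS,
# IIS, patches, and PCS software must go on in a particular sequence.
from graphlib import TopologicalSorter

deps = {
    "os_patches": {"operating_system"},
    "iis": {"operating_system"},
    "hmi_server": {"iis", "os_patches"},   # HMI install depends on IIS settings
    "hmi_patches": {"hmi_server"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # operating_system first, hmi_patches last
```

Encoding the order this way also makes it easy to spot a circular or missing prerequisite before a rebuild starts.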

Page 11: Network Infrastructure in Large, Integrated Control Systems

Software Infrastructure

• Active Directory and DNS

– Modern control systems rely on a solid directory backbone for efficient communication

– All servers and client computers on the PCS network must be part of the Active Directory

• A large system can have multiple servers and, depending on system limitations, dozens of clients and engineering stations

• Active Directory manages communication between all systems within the domain – critical to high speed and efficiency in a large network

– Proper DNS configuration

• Works with Active Directory – forward and reverse lookup tables allow quick name resolution to IP addresses on the network

• Allows for quick, efficient data connections between PLCs, servers, and clients
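Forward resolution can be spot-checked per node with Python's standard `socket` module. A minimal sketch; the hostname is just an example, and a real check would cover every server, client, and PLC entry in both the forward and reverse zones:

```python
# Spot-check forward DNS resolution for a PCS node.
# "localhost" is a stand-in hostname; substitute real PCS node names.
import socket

def check_forward(hostname):
    """Resolve a hostname to an IP address; return None on failure."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

ip = check_forward("localhost")
print(ip)  # e.g. 127.0.0.1
```

Running a loop of such checks after any DNS change catches stale or missing records before operators notice slow name resolution.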

Page 12: Network Infrastructure in Large, Integrated Control Systems

Software Infrastructure

• Network security at software level

– Anti-Virus

• Concerns arise over system performance, but anti-virus is critical to network security

• Know which anti-virus products your control system supports, and which applications should be excluded from scanning to avoid a negative impact on performance

– OPC

• Common method of communication within and out of the PCS, but can leave a window open for malicious users

• Some OPC servers use a large range of ports, which can result in firewall ports being left wide open

• Select an OPC server that requires minimal ports for communication, or utilize OPC tunneling

Page 13: Network Infrastructure in Large, Integrated Control Systems

Software Infrastructure

• Backup and Disaster Recovery Planning

– Have a scheduled, documented routine for backing up critical systems such as PLC programs and HMI servers

– Software that provides a full bare-metal restore of a server can turn a potentially disastrous server loss into a quick, clean restore, with the server back up and running within a few hours

– Take time to periodically test the restoration of backups – this preparation can save critical time during a real crisis and expose any potential errors in the backup process
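One small, automatable piece of that periodic restore test is verifying that a backup copy is byte-for-byte identical to its source. A sketch comparing SHA-256 digests (the file paths are whatever your backup routine produces):

```python
# Verify a backup copy against its source by comparing SHA-256 digests.
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_matches(source_path, backup_path):
    """True if the backup is byte-identical to the source."""
    return sha256_of(source_path) == sha256_of(backup_path)
```

Digest comparison catches silent corruption in the backup media that a simple "did the job finish" check would miss; it does not replace an actual restore drill.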

Page 14: Network Infrastructure in Large, Integrated Control Systems

Software Infrastructure

• Virtualization

– Control system manufacturers are beginning to make their systems compatible with virtualization software for servers and clients

– A virtualized network can save tens or hundreds of thousands of dollars in hardware and administration time over the life span of a large control system

– Redundancy on a virtual network has fewer points of failure and can take away redundancy responsibility from the control system software, which can be a bottleneck

– Backup restoration times dramatically reduced, especially when restoring a mature control system to newer hardware

Page 15: Network Infrastructure in Large, Integrated Control Systems

PLC Infrastructure

• Processor Scan Time

– Processor must have enough time to perform communications with the HMI under worst-case scenarios

• In PLCs with processors heavily loaded by I/O and code, periods of high-stress communication can leave too little time available for communications, which can result in HMI screens freezing and PLC-to-PLC comm loss

• High-stress communication periods can occur when PCS visibility is most critical – such as when an HMI server or a sub-system fails

– Structure PLC logic for efficient processor scans

• The PLC program may have to be split into multiple periodic tasks that separate critical digital logic from less critical logic, such as analog inputs that can afford slower update rates
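The task-splitting idea above can be sanity-checked with simple utilization arithmetic: each periodic task consumes execution time over its period, and whatever is left is headroom for communications. A sketch with made-up task periods and execution times:

```python
# Back-of-the-envelope check that periodic tasks leave headroom for
# communications. Task execution times and periods are invented examples.
def cpu_utilization(tasks):
    """tasks: list of (execution_ms, period_ms). Returns fraction of CPU used."""
    return sum(exec_ms / period_ms for exec_ms, period_ms in tasks)

tasks = [
    (20, 100),   # critical digital logic: 20 ms every 100 ms
    (50, 500),   # analog handling: 50 ms every 500 ms
]
used = cpu_utilization(tasks)
headroom = 1.0 - used
print(f"utilization={used:.0%}, headroom for comms={headroom:.0%}")
```

Moving slow-changing logic to a longer period lowers the first term of the sum, which is exactly where the communication headroom comes from.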

Page 16: Network Infrastructure in Large, Integrated Control Systems

PLC Infrastructure

Page 17: Network Infrastructure in Large, Integrated Control Systems

PLC Infrastructure

• Cost-benefit analysis is critical in the design phase to determine how many PLCs will be needed to balance required code, I/O, and communication

– Cramming too much into one PLC can have dramatic negative effects on communication and processor update times

• Consider control system libraries

– PCS libraries are often more code- and tag-intensive than traditional programs; by the nature of a library, not all code in the library blocks will be used

– Plan for this additional scan time on the processor and be diligent when using library blocks

Page 18: Network Infrastructure in Large, Integrated Control Systems

PLC Infrastructure

• PLC-PLC communication

– TCP/IP Messages can be unreliable

• Messages between PLCs over TCP/IP are an easy way to talk between processors, but are susceptible to interruptions in a large system

• MSGs are normally the lowest-priority task a PLC processor has; when the PLC task is overloaded, MSG instructions can be bumped

• If MSG instructions must be used, stagger the execution of the blocks to lower comms loading and reduce the risk of comm dropout

– Critical data should be sent via Produce/Consume tags, or an equivalent protocol that is treated as scheduled I/O

• This data cannot be interrupted or skipped by the processor under any circumstances
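The staggering recommendation above can be sketched as a round-robin trigger, firing one MSG block per scan instead of all of them at once (message names are placeholders, and real staggering would be done in PLC logic, not Python):

```python
# Round-robin stagger: trigger one message block per scan cycle instead of
# firing every MSG simultaneously. Message names are illustrative.
def stagger(messages):
    """Generator yielding which message to trigger on each successive scan."""
    i = 0
    while True:
        yield messages[i % len(messages)]
        i += 1

msgs = ["MSG_Kiln", "MSG_RawMill1", "MSG_FinishMill1"]
trigger = stagger(msgs)
print([next(trigger) for _ in range(4)])
# ['MSG_Kiln', 'MSG_RawMill1', 'MSG_FinishMill1', 'MSG_Kiln']
```

With N messages spread over N scans, the per-scan communication burst shrinks by roughly a factor of N, at the cost of each message updating less often.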

Page 19: Network Infrastructure in Large, Integrated Control Systems

PLC Infrastructure

• PLC Networks

– PLC Ethernet Segments

• Ethernet communication cards must have enough bandwidth to support the amount of traffic on the network

• CPU utilization and bandwidth are good parameters to monitor in testing to ensure that the ENET cards are able to support the network as it grows

– Field I/O Network

• The comm cards scanning the PLC's field I/O network must be able to support the scan time of the PLC

• Example: to achieve a “true” 100 ms PLC scan time, the field I/O network needs to be fast enough to update I/O in 50 ms, or twice as fast as the PLC scan time

• Heavily loaded I/O networks have to be split up physically to achieve aggressive scan and update rates
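The rule of thumb in the example above is simple arithmetic: the field I/O update interval must be at most half the target PLC scan time. A sketch of that calculation:

```python
# Required field I/O update interval for a target PLC scan time.
# The factor-of-two rule comes from the example above; some networks may
# need a more conservative factor.
def required_io_update_ms(plc_scan_ms, factor=2.0):
    """I/O must update 'factor' times faster than the PLC scan."""
    return plc_scan_ms / factor

print(required_io_update_ms(100))  # 50.0
```

If the I/O network cannot meet the computed interval at its current load, that is the signal to split the network physically, as the bullet above recommends.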

Page 20: Network Infrastructure in Large, Integrated Control Systems

Summary

• PCSs are becoming more versatile, but require strong infrastructure design at all levels

• Traditional IT network and administration skills are becoming more and more critical as the PCS evolves

• Large systems can reach the constraint limits of a control system; understanding these limits can help to build a strong architecture

• Ethernet speed, durability, and cost are becoming tough to beat in a well-designed control system, though this requires careful planning to avoid technical and security vulnerabilities

Page 21: Network Infrastructure in Large, Integrated Control Systems

Questions

Copyright © 2011 Rockwell Automation, Inc. All rights reserved.

Presented By: Kenny Martin
Senior Process Programmer, Holcim US