IAS Template


  • University of Stuttgart

    Institute of Industrial Automation and Software Engineering

    Prof. Dr.-Ing. Dr. h. c. P. Göhner

    30.05.2014

    Master Thesis

    Himanshu Kevadiya 2576 MT

    Design of Multi-core Application with TwinCAT 3

    and Real-time Linux

    Supervisor: Prof. Dr.-Ing. Dr. h. c. Peter Göhner

    Prof. Dr.-Ing. Michael Weyrich

    Dipl.-Ing. Michael Abel (ISW, Universität Stuttgart)


    Table of Contents

    Table of Figures ............................................................................................................................. ii

    Table of Tables ............................................................................................................................. iii

    Abbreviations ................................................................................................................................ iv

    Terminology ................................................................................................................................... v

    Zusammenfassung ........................................................................................................................ vi

    Abstract ........................................................................................................................................ vii

    1 Introduction ............................................................................................................................ 8

    2 State of the Art and Basics .................................................................................................... 9

    2.1 State of the current Research .......................................................................................... 9

    2.2 Introduction to Application Model ................................................................................. 9

    2.3 Multi-core Processor ..................................................................................................... 10

    2.4 Operating Systems ........................................................................................................ 12

    2.4.1 Processes and Threads ...................................................................................... 12

    2.4.2 Scheduling ........................................................................................................ 13

    2.4.3 Synchronization ................................................................................................ 16

    2.4.4 Memory Management ....................................................................................... 16

    2.5 Introduction to TwinCAT 3 .......................................................................................... 21

    2.5.1 Extended Automation Architecture .................................................................. 22

    2.5.2 Extended Automation Engineering ................................................................... 22

    2.5.3 Extended Automation Runtime ........................................................................ 23

    2.5.4 Extended Automation Performance .................................................................. 24

    Literature (style: without number, but in the table of contents) ........................................... 26


    Table of Figures

    Figure 2.1: Application Model ...................................................................................................... 10

    Figure 2.2: Submodel Operations ................................................................................................. 10

    Figure 2.3: Multi-core Processor ................................................................................................... 11

    Figure 2.4: Parallelism on Multi-core Processor ........................................................................... 11

    Figure 2.5: Operating System Placement ...................................................................................... 12

    Figure 2.6: Processes and Threads ................................................................................................ 13

    Figure 2.7: Fixed Partitioning of 64-MB Memory ........................................................................ 17

    Figure 2.8: Dynamic Partitioning .................................................................................................. 18

    Figure 2.9: Memory Paging .......................................................................................................... 19

    Figure 2.10: Memory Paging ........................................................................................................ 19

    Figure 2.11: Memory Segmentation ............................................................................... 20

    Figure 2.12: Finding a Segment in Memory .................................................................. 20

    Figure 2.13: Page Fault and Address-Translation Scheme ............................................ 21

    Figure 2.14: TwinCAT extended Automation Architecture ......................................................... 22

    Figure 2.15: TwinCAT Extended Automation Runtime ............................................................... 23

    Figure 2.16: Extended Automation Runtime ................................................................. 23

    Figure 2.17: Extended Automation Multi-core Performance ....................................................... 24


    Table of Tables

    Table 2.1: Table caption (style: Tabellenbeschriftung)


    Abbreviations

    FCFS First Come First Serve

    CNC Computer Numerical Control

    I/O Input Output

    VS Visual Studio

    CPU Central Processing Unit

    PC Personal Computer

    PTP Point to Point

    NC Numeric Control

    OS Operating System

    MS Microseconds

    EDF Earliest Deadline First

    RMS Rate Monotonic Scheduling

    RAM Random Access Memory

    MB Megabyte

    LRU Least Recently Used

    PLC Programmable Logic Controller


    Terminology

    Aktor Unit for converting low-power signals carrying actuating information into

    power-carrying signals of an energy form required to influence the process

    (German for actuator)

    Synchronization

    Internal

    fragmentation


    Zusammenfassung

    Textkörper Textkörper Textkörper ... (repeated body-text placeholder, style "Textkörper")


    Abstract

    English translation of the Zusammenfassung.

    (Style: Textkörper)


    1 Introduction


    2 State of the Art and Basics

    This chapter illustrates the state of current research on multi-core simulation with TwinCAT
    and real-time Linux. The fundamentals of operating systems, real-time Linux and TwinCAT 3
    are then explained in detail. The terms CPU and core are used interchangeably throughout the
    discussion.

    2.1 State of the current Research

    Real-time simulation needs to perform its computations within the available time slot.
    Simulations are already being performed with real-time Linux on multi-core platforms, where
    the system performance is efficient and close to optimal. Beckhoff Automation GmbH
    introduced a PC-based automation technology in the form of the TwinCAT 3 automation suite
    [beck21]. TwinCAT 3 supports real-time capabilities, and application and simulation tasks can
    be distributed among several CPUs to take full advantage of the multi-core platform. For
    real-time applications on multi-core platforms, however, no research has been done that
    analyzes system behavior, execution load and task latencies in comparison with real-time
    Linux.

    2.2 Introduction to Application Model

    To take full advantage of multi-core real-time systems, an application model is designed with
    multiple operations. Figure 2.1 shows the application model with several submodels, such as a
    rectangular signal generator, an adder, an integrator and a multiplier. The rectangular signal
    generator generates a rectangular signal with a specific cycle period and writes it to a memory
    location known as sig2. In the same way, the adder adds two signals, the integrator integrates a
    signal and the multiplier multiplies two signals. Sig2, sig3 and so on are memory locations in
    global memory: calculated data are written to and read from these signal locations. Several
    such submodels are combined in a final model to take advantage of the multi-core based
    real-time system.

    Each submodel performs three operations, as shown in Figure 2.2. First it writes data to the
    signals in the write data operation, then the data is processed in the execute data operation, and
    finally the new state of the submodel is calculated in the calculate new state operation. These
    operations are repeated cyclically. Alternatively, the sequence can also start with the execute
    data operation, followed by calculate new state and finally write data. Theoretical analysis
    shows that, with a cycle time of 1 ms, all three operations of each submodel are executed
    within tens or hundreds of microseconds, depending on the number of other operating-system
    tasks running at that moment.

    The execution concept of the application model is discussed in detail in Chapter 4
    (Conception).
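    The cyclic submodel behavior described above can be sketched as follows. This is a minimal,
    hypothetical Python sketch: the signal names follow Figure 2.1, but the doubling computation
    and the three-cycle run are illustrative, not the thesis implementation.

```python
# Shared "global memory": one slot per signal, as in Figure 2.1.
signals = {"sig2": 0.0, "sig3": 0.0}

class Submodel:
    """Cyclic submodel with the three operations of Figure 2.2."""
    def __init__(self, read_from, write_to):
        self.read_from, self.write_to = read_from, write_to
        self.state = 0.0
        self.output = 0.0
        self.result = 0.0

    def write_data(self):           # 1) publish last output to global memory
        signals[self.write_to] = self.output

    def execute_data(self):         # 2) compute from the input signal (stand-in computation)
        self.result = signals[self.read_from] * 2.0

    def calculate_new_state(self):  # 3) update internal state for the next cycle
        self.state += self.result
        self.output = self.state

m = Submodel("sig2", "sig3")
signals["sig2"] = 1.0
for _ in range(3):                  # three cycles (a real system would run one per 1 ms tick)
    m.write_data(); m.execute_data(); m.calculate_new_state()
print(signals["sig3"])              # 4.0: the output produced in cycle 2, published in cycle 3
```

    In a real deployment each cycle would be triggered by the 1 ms real-time tick rather than a
    plain loop.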


    Figure 2.1: Application Model

    Figure 2.2: Submodel Operations

    2.3 Multi-core Processor

    A processor is the component of a computer system that reads program instructions and
    executes them. A multi-core processor is a processor with multiple cores.

    On a single-core processor, only one instruction stream can be executed at a time, which is
    known as uniprogramming. In real-time simulation, there is a specific time slot available in
    which a number of instructions/tasks must be executed. Moreover, because of chip heating and
    transmission delays, a single-core processor cannot be sped up beyond certain frequencies.
    Hence, multi-core processors are used.


    Figure 2.3: Multi-core Processor

    Figure 2.4: Parallelism on Multi-core Processor

    Figure 2.3 [mcore21] shows a multi-core processor with 4 cores. Depending on the processor
    type, a varying number of cores is available. Multiple tasks/threads can be executed on
    different cores at the same time, which is known as multiprogramming. Several factors need to
    be considered while designing a multi-core program, such as data dependencies,
    synchronization and several other aspects of parallel programming that will be discussed in
    Chapter 3. Each core in Figure 2.3 has its own individual memory, while global memory is
    shared among all the cores as shown. Other off-chip components and I/O devices can be
    connected via the bus interface.

    Figure 2.4 [mcore22] shows the execution of tasks on a single-core and on a multi-core
    processor. On a single-core processor, only one thread can be executed at a time; hence task 1
    to task 4 are executed sequentially, which can consume many CPU cycles. On a multi-core
    processor, in contrast, four cores execute task 1 to task 4 in parallel, so all the tasks complete in
    a short duration of time with fewer cycles than on a single-core processor. Hence multi-core
    processors are used for time-critical tasks.

    2.4 Operating Systems

    An operating system is a program that interacts directly with the computer hardware to provide
    computer functionalities and services. As shown in Figure 2.5, the operating system acts as an
    interface between application software and the hardware of the system. Without an operating
    system, user requests and application software cannot interact with the hardware, and the
    computer system becomes useless. The operating system manages hardware devices such as
    the monitor, mouse, keyboard and other peripherals on behalf of application software, device
    drivers, user requests etc. In this section, operating system concepts such as processes and
    threads, scheduling, synchronization and memory management are discussed.

    Figure 2.5: Operating System Placement

    2.4.1 Processes and Threads

    A process is a program in execution; examples are a web browser, the OS kernel and the OS
    shell. The operating system manages processes: it allocates the processor, main memory, I/O
    modules, timers etc. to them. Process scheduling, interprocess communication,
    synchronization between different processes, monitoring of processor utilization and many
    other mechanisms are managed by the operating system. A process has resource ownership,
    that is, a virtual address space holding the process image of program, data, stack and attributes.
    Once a resource is allocated, the operating system shields processes from interfering with each
    other's resources.

    Threads are lightweight processes. A process has at least one thread, and Figure 2.6 shows that
    it is possible to have multiple threads per process. A thread is the unit of dispatching: it has an
    execution state, a stack and a context that are saved when the thread is not running. Threads
    have access to the memory and resources of their process; moreover, all threads of the same
    process share that memory and those resources. Therefore it takes less time to create and
    terminate a thread than a process, and switching between two threads within the same process
    is fast. Communication between threads does not involve the kernel.

    Figure 2.6: Processes and Threads

    Figure 2.6 [OS22] shows one thread per process, which is known as single threading; multiple
    threads within a single process are known as multithreading. With multithreading, threads can
    be distributed among several processor cores and independent tasks can be executed in
    parallel, which supports parallel programming.

    2.4.2 Scheduling

    Because processor time is a constrained resource, the operating system selects only one
    process in the ready state for execution on a single-core system. This selection is known as
    scheduling. In this section, uniprocessor scheduling, real-time scheduling and multiprocessor
    scheduling are discussed.


    2.4.2.1 Uniprocessor Scheduling

    The operating system allocates resources to a process for execution based on scheduling
    criteria and optimization goals as described below.

    CPU utilization - keep the CPU as busy as possible.

    Throughput - maximize the number of processes that complete their execution per unit
    of time.

    Response time - minimize the time from submission of a request until the first
    response, i.e. minimize the waiting time in the ready queue.

    Turnaround time - minimize the total time taken by a process for execution, including
    all waiting time.

    Fairness - execute processes according to their priorities and avoid starvation.

    Scheduler efficiency - keep the overhead of context switches and priority calculation
    to a minimum.

    Decision Mode

    There are two decision modes for scheduling.

    Preemptive - The operating system may interrupt the currently running process and
    move it back to the ready state. The advantage of this mode is that a single process
    cannot monopolize the processor for very long.

    Non-preemptive - The operating system cannot interrupt a running process until it
    finishes its execution or blocks itself for I/O.

    Scheduling Methods

    The operating system schedules the processes based on a scheduling method. There are
    several scheduling methods; some of them are described here. First-Come-First-Serve (FCFS)
    is simple and easy to implement: as the name suggests, the first incoming task is executed
    first, followed by the tasks in the ready queue. It is a non-preemptive scheduling method.
    Round-Robin is an advanced version of FCFS, as it allows a process to run only for a
    specified time slice or quantum; preemption is based on the quantum, which is generally
    10-100 ms. The shortest process first method runs the process with the shortest execution time
    first and is also non-preemptive. In priority scheduling, the process with the highest priority is
    executed first; examples of priority scheduling methods are highest response ratio next and
    fair-share scheduling.
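    The Round-Robin method described above can be sketched as a small simulation. This is an
    illustrative Python sketch (the task names and run times are made up, and context-switch
    overhead is ignored):

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate Round-Robin: tasks maps name -> remaining time; returns finish order."""
    queue = deque(tasks.items())          # FCFS ready queue
    order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:          # task finishes within its time slice
            order.append(name)
        else:                             # preempt after one quantum, requeue at the back
            queue.append((name, remaining - quantum))
    return order

print(round_robin({"T1": 30, "T2": 10, "T3": 25}, quantum=10))  # ['T2', 'T1', 'T3']
```

    With `quantum` set larger than every remaining time, the same function degenerates to plain
    FCFS, which illustrates how Round-Robin extends it.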

    2.4.2.2 Real-time Scheduling

    A system whose correctness depends not only on the functional result but also on the time at
    which the result is delivered is known as a real-time system. In real-time scheduling, processes
    must be scheduled according to their time requirements; otherwise the delivered result may be
    useless. Time constraints in real-time scheduling may be hard or soft, depending on the
    requirement.


    Soft constraints - A single timing failure may be accepted, although it may reduce the
    usefulness of the computation. For example, if some frames are missed in video
    streaming, the result is not entirely affected, only the continuity of the stream. In the
    same way, an online ticket reservation system is an example of a real-time system with
    soft constraints.

    Hard constraints - A single timing failure makes the computation result useless, and a
    critical real-time system cannot continue to function. Safety-critical systems are those
    in which serious damage or loss of life may occur; a non-critical system continues to
    function with limited performance.

    A set of processes is said to be schedulable on a real-time system with one processor if the
    processor utilization does not exceed 1, i.e. if the following criterion holds:

    C1/T1 + C2/T2 + ... + Cm/Tm <= 1

    where m stands for the number of periodic tasks, Ci is the computation time of task i and Ti is
    the period of task i.

    Earliest Deadline First - EDF is a method of real-time scheduling in which the task with the
    earliest deadline is scheduled first; the priority of a task is determined by its deadline. EDF is
    optimal for dynamic priority assignment on a uniprocessor platform.

    Rate Monotonic Scheduling - RMS schedules the task with the shortest period first. It is
    optimal for static assignment of priorities on a uniprocessor platform. Several other methods
    are available for real-time scheduling.
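    The utilization criterion and the two policies above can be checked with a short script. This is
    a minimal Python sketch under stated assumptions: the task set is made up, and the RMS test
    uses the well-known Liu-Layland sufficient bound n(2^(1/n) - 1), which the text does not
    state explicitly.

```python
def utilization(tasks):
    """tasks: list of (C, T) pairs; returns total processor utilization."""
    return sum(c / t for c, t in tasks)

def edf_schedulable(tasks):
    # EDF on one processor: feasible iff utilization <= 1
    return utilization(tasks) <= 1.0

def rm_schedulable(tasks):
    # Sufficient (not necessary) Liu-Layland bound for RMS: U <= n(2^(1/n) - 1)
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

tasks = [(1, 4), (2, 6), (1, 8)]      # (computation time C, period T)
print(round(utilization(tasks), 4))   # 0.7083
print(edf_schedulable(tasks), rm_schedulable(tasks))  # True True
```

    A task set may fail the RMS bound yet still be schedulable; the bound is only sufficient,
    whereas the EDF test U <= 1 is exact on one processor.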

    2.4.2.3 Multi-processor Scheduling

    Scheduling of tasks on a system with multiple processor cores sharing full or limited access to
    a common RAM is known as multiprocessor scheduling. Complex design issues need to be
    addressed to achieve it.

    With per-processor ready queues, a process is permanently assigned to one processor. Because
    a process is always executed on the same processor, it runs faster, as the first-level cache
    preserves locality. The disadvantage is that one processor may be idle while another has a
    backlog. Gang scheduling uses the concept of per-processor ready queues to assign threads to
    a particular processor.

    With a global ready queue, a process can be assigned to any available processor. However,
    migrating a process between processors adds migration time because cache locality is lost, so
    an inefficient design may result in a bottleneck. Load sharing uses the global ready queue
    concept to migrate processes/threads dynamically.


    2.4.3 Synchronization

    Many processes/threads interact via shared memory or message passing. In this kind of
    situation, the result of the interactions is not guaranteed to be deterministic: often, processes
    write to the same addresses concurrently, so the final result depends on the order of the
    operations.

    A coordination mechanism that lets multiple processes/threads access data structures and
    shared memory while ensuring consistency is called synchronization.

    2.4.3.1 Mutual Exclusion (Critical-Section) Problem

    When multiple processes compete to use shared data or resources, the concept of mutual
    exclusion needs to be addressed. Generally, each process has a code segment, known as the
    critical section, in which shared data is accessed. The synchronization concept should ensure
    that no two processes are ever allowed to execute in their critical sections at the same time;
    access to the critical section must be an atomic action.

    The solution to the critical-section problem is mutual exclusion: only one process at a time is
    allowed to execute in its critical section. Another criterion that should be satisfied is that no
    process may be prevented forever from entering its critical section.

    2.4.3.2 Synchronization Concepts

    There are various concepts to achieve synchronization. Popular ones are semaphores,
    condition variables, mutex variables, locks, message passing, synchronization barriers etc.

    Semaphores are signal variables that can take the value 0 or a positive integer. There are two
    operations, V(S) and P(S): V(S) increases the value of the semaphore variable S by one and
    P(S) decreases it by one. These operations should be placed in the processes in such a way
    that synchronization is achieved.

    In the same way, mutex variables and condition variables are used to ensure synchronization.
    Synchronization barriers are functions that act as a barrier at which all threads/processes wait
    for one another; once all of them have arrived at the barrier, execution continues.
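    The semaphore and barrier concepts above can be illustrated with Python's threading module.
    This is a hedged sketch (the shared-counter workload is invented); the binary semaphore plays
    the role of P(S)/V(S) around the critical section:

```python
import threading

counter = 0
sem = threading.Semaphore(1)      # binary semaphore: P(S) = acquire, V(S) = release
barrier = threading.Barrier(4)    # synchronization barrier for all 4 threads

def worker():
    global counter
    for _ in range(1000):
        sem.acquire()             # P(S): enter the critical section
        counter += 1              # shared data, protected against lost updates
        sem.release()             # V(S): leave the critical section
    barrier.wait()                # wait until all threads arrive, then continue

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                    # 4000: no update was lost
```

    Without the semaphore, the concurrent increments could interleave and the final count would
    depend on the order of operations, exactly the nondeterminism described above.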

    2.4.4 Memory Management

    In software development, the ideal memory would be large, fast and non-volatile. In practice,
    cache is a small, fast and expensive memory, main memory is of medium size and speed, and
    disk storage offers gigabytes at low price but with slow data transfer.

    The requirements for memory management are:

    Relocation - change of the physical placement of a process.

    Protection - access to physical memory should be restricted based on requirements.

    Sharing - allow processes to share memory locations based on requirements.

    Logical organization - support for organizing computer programs.

    Physical organization - support for efficient utilization of the hardware.

    2.4.4.1 Memory allocation schemes

    This section discusses memory allocation schemes such as fixed partitioning, dynamic
    partitioning, paging and segmentation.

    Fixed Partition

    In fixed partitioning, a program occupies an entire partition of memory irrespective of its size.
    Figure 2.7 shows fixed partitioning of a 64 MB memory, with the operating system taking
    8 MB. The allocated partition can clearly be larger than the requested memory, and the unused
    remainder cannot be utilized by other processes. Because of this, internal fragmentation
    occurs.

    Figure 2.7: Fixed Partitioning of 64-MB Memory

    Dynamic Partition

    Dynamic partitioning allocates exactly as much memory as required by the process, as in
    Figure 2.8 [OS23], to solve the problem of internal fragmentation in fixed partitioning.
    However, small segments of free memory can remain that are not sufficient for bigger
    programs, even if the total free memory is larger than the size of those programs. Because of
    this, dynamic partitioning introduces the problem of external fragmentation, and the memory
    manager must use compaction to shift processes for defragmentation.


    Figure 2.8: Dynamic Partitioning

    Paging

    External fragmentation is solved by paging of memory. In paging, the entire memory is
    partitioned into small equal-size frames, and each process is divided into pages or chunks of
    the same size, as indicated in Figure 2.9. The operating system maintains a page table for each
    process that contains the frame location of each page; a memory address consists of a page
    number and an offset within the page.

    The allocation of frames for processes A, B, C and D is shown in Figure 2.10. In the first step,
    processes A and B are already loaded and then process C is loaded. In the second step, process
    B is swapped out. Loading process D, with its higher memory requirement, occupies the
    memory freed by process B plus the available memory after process C. In this way, external
    fragmentation is prevented.

    A program references memory using virtual addresses, which are translated into physical
    addresses by the paging mechanism. There is also a two-level paging mechanism to obtain a
    large linear address space without having to add more physical memory.
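    The page-number/offset translation described above can be sketched as follows. This is an
    illustrative Python sketch; the page size and page-table contents are made up:

```python
PAGE_SIZE = 1024          # bytes per page/frame (illustrative)

def translate(virtual_addr, page_table):
    """Split a virtual address into (page, offset) and map it to a physical address."""
    page = virtual_addr // PAGE_SIZE      # page number: high part of the address
    offset = virtual_addr % PAGE_SIZE     # offset within the page: low part
    frame = page_table[page]              # page table maps page -> frame
    return frame * PAGE_SIZE + offset     # same offset, relocated into the frame

page_table = {0: 5, 1: 2, 2: 7}           # one table per process
print(translate(2100, page_table))        # page 2, offset 52 -> frame 7 -> 7220
```

    Because pages and frames have the same size, the offset passes through the translation
    unchanged, which is why paging needs no bounds arithmetic beyond the table lookup.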


    Figure 2.9: Memory Paging

    Figure 2.10: Memory Paging

    Segmentation

    Segmentation is a memory management scheme that supports the user's view of a program as
    a collection of segments. A segment is a logical unit such as the main program, a function, a
    procedure, a common block, local and global variables, the stack etc., as in Figure 2.11.

    Figure 2.12 illustrates how the segmentation mechanism finds a segment in main memory.
    Basically, a virtual address in a program contains a segment number and a segment offset.
    With the segment number, the segment table provides the base and length used to calculate the
    final address in main memory. The total address space can exceed the size of physical
    memory. Another advantage of segmentation is that tables whose size fluctuates can be
    accommodated easily. Thus segmentation allows breaking data and programs into logically
    independent address spaces and aids sharing and protection.
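    The lookup of Figure 2.12 can be sketched as follows. The segment table here is hypothetical;
    the bounds check against the segment length is the protection aspect mentioned above:

```python
# Hypothetical segment table: segment number -> (base, length), as in Figure 2.12
segment_table = {0: (1000, 500), 1: (4000, 200)}

def find_segment(segment, offset):
    """Translate (segment number, offset) into a physical address in main memory."""
    base, length = segment_table[segment]
    if offset >= length:              # protection: offset must lie inside the segment
        raise MemoryError("segment offset out of bounds")
    return base + offset              # final address = segment base + offset

print(find_segment(1, 50))            # base 4000 + offset 50 = 4050
```

    Unlike paging, segments have variable length, so the length field is needed for the bounds
    check; this is what lets segmentation protect logically independent address spaces.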


    Figure 2.11: Memory Segmentation

    Figure 2.12: Finding a Segment in Memory

    2.4.4.2 Virtual Memory

    Virtual memory is a memory management mechanism that maps virtual addresses onto the
    physical addresses present in main memory. Generally, the operating system brings only a few
    pieces of the program into main memory; the portion that is in main memory is known as the
    resident set.


    Whenever an address that is needed is not in main memory, a page-fault interrupt is generated.
    In that case, the OS places the process in the blocked state, issues a disk I/O request and
    dispatches another process.

    A valid-invalid bit is associated with each page-table entry: 1 if the page is in memory and 0 if
    it is not. If, during address translation, the valid-invalid bit in the page-table entry is 0, a page
    fault interrupts the OS. As shown in Figure 2.13, the OS must then get an empty frame, swap
    the page into that frame and set the valid bit in the tables. After this, the instruction is
    restarted.

    Figure 2.13: Page Fault and Address-Translation Scheme

    Page replacement strategies include the FIFO replacement algorithm, the optimal replacement
    algorithm, which results in the minimum number of page faults, and the LRU replacement
    algorithm. A page fault forces the OS to remove a page to make room for the incoming page.
    Generally, an often-used page is not chosen for replacement, as the probability of having to
    bring it back is higher.
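    The LRU strategy above can be sketched with a small simulation. This is an illustrative
    Python sketch; the reference string and frame count are made up:

```python
from collections import OrderedDict

def count_page_faults(references, frames):
    """Simulate LRU page replacement; return the number of page faults."""
    memory = OrderedDict()                  # pages in memory, least recently used first
    faults = 0
    for page in references:
        if page in memory:
            memory.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # page fault: bring the page in
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used page
            memory[page] = True
    return faults

print(count_page_faults([1, 2, 3, 1, 4, 2], frames=3))  # 5
```

    Replacing `move_to_end` with a no-op would turn this into FIFO, which makes the difference
    between the two strategies easy to see.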

    2.5 Introduction to TwinCAT 3

    TwinCAT 3 is a PC-based automation control technology introduced by Beckhoff
    Automation. It supports a modular and flexible software architecture for efficient engineering.
    TwinCAT 3 is used to turn a PC-based system into a real-time control system with multiple
    NC, CNC, PLC etc. runtimes; all kinds of control applications can be realized with it. It is
    integrated into Visual Studio to provide a single programming and configuration tool, and it
    supports multiple programming languages such as C/C++ for real-time applications and PLC
    programming, as well as a link to Matlab/Simulink. TwinCAT 3 also supports multi-core
    applications and 64-bit systems. Some of its features are discussed here in detail.


    2.5.1 Extended Automation Architecture

    TwinCAT 3 supports the extended automation architecture shown in Figure 2.14 [beck22].
    With support for 32/64-bit Windows environments, TwinCAT 3 is completely integrated into
    Visual Studio to take advantage of the well-known system manager for configuration and of
    the different programming languages mentioned earlier.

    TwinCAT 3 has its own real-time kernel, which provides real-time capabilities and gives
    higher priority to real-time tasks. As shown, it also supports modules from different
    programming environments and provides the capability of linking to Matlab/Simulink.

    Figure 2.14: TwinCAT extended Automation Architecture

    2.5.2 Extended Automation Engineering

    The TwinCAT 3 engineering environment is based on Visual Studio and supports third-party
    programming tools. The various modules in this environment can exchange information and
    can call each other independently of the programming language in which they are written.
    The generated modules can be linked to the rich set of Matlab/Simulink toolboxes; this feature
    provides the advantages of automatic code generation, building of control circuits, and
    simulation and optimization of modules. Non-real-time and real-time object-oriented
    programming under a single platform makes it easy to use. It provides extended support for
    debugging C++ programs that run in real time, with breakpoints, watch lists and call stacks.
    Therefore it is an extended automation engineering technology, as can be seen in Figure 2.15
    [beck22].


    Figure 2.15: TwinCAT Extended Automation Runtime

    2.5.3 Extended Automation Runtime

    Figure 2.16: Extended Automation Runtime


    Extended runtime support is provided in TwinCAT 3. As in Figure 2.16 [beck22], real-time
    tasks call the generated modules cyclically, and other modules can be called from these
    associated modules. Another advantage is that modules generated from different
    programming environments can be compiled with different compilers and can be completely
    independent of one another.

    2.5.4 Extended Automation Performance

    Multi-core support with real-time capabilities makes TwinCAT 3 a useful PC-based
    automation tool. As shown in Figure 2.17 [beck22], multiple cores can be associated with
    different tasks in TwinCAT 3. Running each task on a different core makes the tool ideal for
    complete utilization of the available resources; hence it provides all the advantages of parallel
    execution for better performance.

    Figure 2.17: Extended Automation Multi-core Performance

    2.6 Real-time Linux


    Literature (style: without number, but in the table of contents)

    1 author: 4 letters of the name + 2-digit year: Ring00

    2 authors: 2 letters per name + 2-digit year: RiFl00

    3 authors: first letter of each name + 2-digit year: RFD00

    4 authors: first letter of each name + 2-digit year: RFDB00

    >4 authors: first letter of the first 3 names + "+" + 2-digit year: RFD+00

    More than one publication per year: Ring00a, Ring00b

    [beck21] Beckhoff Automation:

    http://www.beckhoff.com/english.asp?twincat/twincat-3.html

    [beck22] Beckhoff Automation:

    http://download.beckhoff.com/download/document/catalog/Beckhoff_TwinCAT3_042012_e.pdf

    [beck03] Beckhoff Automation:

    http://infosys.beckhoff.com/index.htm

    [mcore21] Washington University in St. Louis:

    http://www.cse.wustl.edu/~jain/cse567-11/ftp/multcore.pdf

    [OS22] Cardiff School of Computer Science & Informatics:

    http://www.cs.cf.ac.uk/Dave/C/node29.html

    [OS23] University of Western Australia:

    http://undergraduate.csse.uwa.edu.au/units/CITS1002/lectures/lecture14/06.html


    Declaration

    I declare that I have written this work independently and have complied, in its preparation,
    with the relevant provisions, in particular those concerning the copyright protection of
    external materials. Whenever external materials (such as images, drawings or text passages)
    are used in this work, I declare that these materials are referenced accordingly (i.e. quote and
    source) and that, where necessary, the authors' consent to use such materials in my work was
    obtained.

    Signature:

    Stuttgart,