Networking threads


Description

This presentation contains information about threads and describes the concepts of multithreading.

Transcript of Networking threads

Page 1: Networking threads

NETWORKING THREADS

PREPARED BY

Nilesh Pawar

Page 2: Networking threads

INTRODUCTION TO THREADS:

A thread is a flow of execution through the process code. A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism, and they also improve the performance of the operating system. Each thread represents a separate flow of control. All threads within a process share the same global memory.
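As a minimal sketch of this idea (illustrative only; the class and field names are made up), two Java threads in the same process update one shared field:

    public class SharedCounterDemo {
        // Field shared by every thread in this process (illustrative name)
        private static int counter = 0;

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 1000; i++) {
                    synchronized (SharedCounterDemo.class) {
                        counter++;            // both threads write the same memory
                    }
                }
            };

            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start();
            t2.start();
            t1.join();                        // wait for both flows of control
            t2.join();
            System.out.println("Final counter: " + counter);   // prints 2000
        }
    }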

Page 3: Networking threads

STATES OF THE THREAD :

Every thread is in one of the following four states:

• CREATED STATE

• RUNNING STATE

• BLOCKED STATE

• DEAD STATE

Page 4: Networking threads

CREATED STATE: Denotes that a new thread has been created; this is the birth of the thread.

A thread enters this state when it is constructed with Thread(); calling start() moves it on to the running state.

RUNNING STATE: The thread is currently running and is busy with the CPU.

In this state the thread is executing its run() method.

BLOCKED STATE: While a thread is running, it may move to the blocked state, for example when a higher-priority thread takes the CPU or when the thread calls sleep() or wait(). It then waits in the queue until it can run again.

DEAD STATE: When a thread completes its work with the CPU (its run() method terminates), it enters the dead state. This is the end of the thread's life.
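A minimal Java sketch (illustrative only) that makes these states visible through Thread.getState(), which reports them as NEW, RUNNABLE, TIMED_WAITING/BLOCKED, and TERMINATED:

    public class ThreadStateDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(200);           // the thread blocks here for a while
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            System.out.println(t.getState());    // NEW             (created state)
            t.start();
            System.out.println(t.getState());    // typically RUNNABLE (running state)
            Thread.sleep(50);
            System.out.println(t.getState());    // TIMED_WAITING   (blocked state)
            t.join();                            // wait for run() to terminate
            System.out.println(t.getState());    // TERMINATED      (dead state)
        }
    }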

Page 5: Networking threads

THREAD LIFE CYCLE :

[Life-cycle diagram] Thread() creates the thread (CREATED); start() moves it to RUNNABLE; sleep() or wait() moves it to BLOCKED, and notify() moves it back to RUNNABLE; when the run() method terminates, the thread becomes DEAD.

Page 6: Networking threads

TYPES OF THREADS :

There are two types of threads, which are given as follows:

• Kernel-level threads
• User-level threads

Page 7: Networking threads

KERNEL LEVEL THREAD

Kernel-level threads are managed by the operating system and act on the kernel, the operating system's core.

Kernel-level threads guarantee access to multiple processors, but computing performance is lower because of the extra load they place on the system.

They are more expensive than user-level threads.

Page 8: Networking threads
Page 9: Networking threads

ADVANTAGES OF KERNEL-LEVEL THREADS

The kernel has full knowledge of all threads.

Especially good for applications that frequently block.

No runtime system is needed for these threads.

DISADVANTAGES OF KERNEL-LEVEL THREADS

More expensive.

They are slower and less efficient than user-level threads.

Each thread requires a full thread control block (TCB) to maintain information about it.

More complex.

Page 10: Networking threads

USER LEVEL THREAD

These are user-managed threads, and threads within a process are invisible to the operating system. User-level threads have extremely low overhead and can achieve high performance in computation. However, it is possible for some threads to gain exclusive access to the CPU and prevent other threads from obtaining it. Finally, access to multiple processors is not guaranteed, since the operating system is not aware of the existence of these threads.

Page 11: Networking threads
Page 12: Networking threads

ADVANTAGES OF USER-LEVEL THREADS

They do not require modification of the operating system.

Simple Representation.

Simple Management.

Fast and Efficient.

Relatively cheaper than kernel-level threads.

DISADVANTAGES OF USER-LEVEL THREADS

User-level threads are not a perfect solution; as with everything else, they involve trade-offs.

They are not well integrated with the OS.

There is a lack of coordination between threads and the operating system kernel.

Page 13: Networking threads

HOW DOES A THREAD RUN ?

The Thread class has a run() method; run() is executed when the thread's start() method is invoked.

The thread terminates when its run() method terminates. To prevent a thread from terminating, the run() method must not end; run() methods often contain an endless loop to prevent thread termination.

One thread starts another by calling its start() method. The sequence of events can be confusing to those more familiar with a single-threaded model.
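A minimal Java sketch of these rules (illustrative only; the Worker class and its stop flag are made-up names): run() loops until asked to stop, and the main thread launches it with start():

    public class Worker extends Thread {
        private volatile boolean running = true;   // illustrative stop flag

        @Override
        public void run() {                        // executed after start() is called
            while (running) {                      // loop keeps the thread alive
                // one unit of work; sleep briefly so the sketch does not busy-spin
                try { Thread.sleep(10); } catch (InterruptedException e) { return; }
            }
            // run() returns here, so the thread terminates
        }

        public void shutdown() {
            running = false;
        }

        public static void main(String[] args) throws InterruptedException {
            Worker w = new Worker();
            w.start();            // the main thread starts the worker; run() is not called directly
            Thread.sleep(100);    // let the worker loop for a while
            w.shutdown();
            w.join();             // wait for run() to terminate
        }
    }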

Page 14: Networking threads

WHAT IS MULTITHREADING

Multithreading is similar to multiprocessing. Multithreading is a technique by which a single set of code can be used by several processors at different stages of execution.

Multithreading is the ability of a program to manage multiple requests by the same user.

Each user request for a program or system service (and here a user can also be another program) is kept track of as a thread with a separate identity.

Each process has its own address/memory space. The OS's scheduler decides when each process is executed.

Page 15: Networking threads

EXAMPLE OF MULTITHREADING
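A minimal Java sketch of multithreading (illustrative only; the task names are made up): two independent tasks make progress concurrently in separate threads.

    public class MultithreadingExample {
        public static void main(String[] args) throws InterruptedException {
            Runnable download = () -> {
                for (int i = 1; i <= 3; i++) {
                    System.out.println("Downloading chunk " + i);
                }
            };
            Runnable render = () -> {
                for (int i = 1; i <= 3; i++) {
                    System.out.println("Rendering frame " + i);
                }
            };

            Thread t1 = new Thread(download);
            Thread t2 = new Thread(render);
            t1.start();          // both tasks now make progress independently
            t2.start();
            t1.join();
            t2.join();
            System.out.println("Both tasks finished");
        }
    }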

Page 16: Networking threads

WHY TO USE MULTITHREADING

In a single-threaded application, one thread of execution must do everything. If an application has several tasks to perform, those tasks are performed only when that single thread can get to them.

A single task which requires a lot of processing can make the entire application appear to be "sluggish" or unresponsive.

In a multithreaded application, each task can be performed by a separate thread.

If one thread is executing a long process, it does not make the entire application wait for it to finish.
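A minimal Java sketch of this point (illustrative only; the names are made up): a long task runs on its own thread while the main thread keeps serving requests.

    public class ResponsiveApp {
        public static void main(String[] args) throws InterruptedException {
            Thread longTask = new Thread(() -> {
                try {
                    Thread.sleep(2000);                 // simulate heavy processing
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                System.out.println("Long task finished");
            });
            longTask.start();

            // Meanwhile the main thread keeps handling "requests" without waiting.
            for (int i = 1; i <= 5; i++) {
                System.out.println("Handled user request " + i);
                Thread.sleep(100);
            }
            longTask.join();
        }
    }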

Page 17: Networking threads
Page 18: Networking threads

MULTITHREADING MODELS

There are three types of relationships in multithreading models, and they are as follows:

• One-to-one relationship
• Many-to-one relationship
• Many-to-many relationship

Page 19: Networking threads

ONE-TO-ONE RELATIONSHIP

The one-to-one model creates a separate kernel thread to handle each user thread.

The one-to-one model overcomes the problems of the many-to-one model involving blocking system calls and the splitting of processes across multiple CPUs.

Most implementations of this model place a limit on how many threads can be created.

Linux and Windows from 95 to XP implement the one-to-one model for threads.

Page 20: Networking threads

ONE-TO-ONE RELATIONSHIP

Page 21: Networking threads

MANY-TO-ONE RELATIONSHIP

In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.

Thread management is handled by the thread library in user space, which is very efficient.

Because a single kernel thread can operate only on a single CPU, the many-to-one model does not allow individual processes to be split across multiple CPUs.

Green threads for Solaris and GNU Portable Threads implemented the many-to-one model in the past, but few systems continue to do so today.

Page 22: Networking threads

MANY-TO-ONE RELATIONSHIP

Page 23: Networking threads

MANY-TO-MANY RELATIONSHIP

Users have no restrictions on the number of threads created.

Blocking kernel system calls do not block the entire process.

Processes can be split across multiple processors.

The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads, combining the best features of the one-to-one and many-to-one models.

Page 24: Networking threads

MANY-TO-MANY RELATIONSHIP

Page 25: Networking threads

THREADING ISSUES

There are some issues related to threading, which are as follows:

• The fork() and exec() system calls
• Signal handling
• Thread cancellation
• Thread-specific data
• Scheduler activations

Page 26: Networking threads

THE FORK() AND EXEC() SYSTEM CALLS

The fork() system call is used to create a separate, duplicate process.

Some UNIX systems provide two versions of fork(): one that duplicates all threads and another that duplicates only the thread that invoked fork().

If a thread invokes exec(), the program specified as a parameter to exec() will replace the entire process, including all threads.

If exec() is called immediately after forking, duplicating all threads is unnecessary, as the program specified as a parameter to exec() will replace the process anyway.

Page 27: Networking threads

SIGNAL HANDLING

When a signal arrives for a multithreaded process, it may be delivered to the thread to which the signal applies, to every thread, or to a designated thread; the best choice may depend on which specific signal is involved.

UNIX allows individual threads to indicate which signals they are accepting and which they are ignoring. However, a signal can be delivered to only one thread, which is generally the first thread that is accepting that particular signal.

Windows does not support signals, but they can be emulated using Asynchronous Procedure Calls ( APCs ).

APCs are delivered to specific threads, not processes.

Page 28: Networking threads

THREAD CANCELLATION

Threads that are no longer needed may be cancelled by another thread in one of two ways:

• Asynchronous cancellation cancels the thread immediately.
• Deferred cancellation sets a flag indicating that the thread should cancel itself when it is convenient. It is then up to the cancelled thread to check this flag periodically and exit nicely when it sees the flag set.

(Shared) resource allocation and inter-thread data transfers can be problematic with asynchronous cancellation.
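Deferred cancellation maps naturally onto Java's cooperative interruption mechanism; a minimal sketch (illustrative only), where the interrupt status plays the role of the cancellation flag:

    public class DeferredCancellation {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {   // check the flag periodically
                    // ... do one unit of work ...
                }
                // release resources, then return: the thread exits "nicely"
                System.out.println("Worker observed the cancel flag and exited");
            });

            worker.start();
            Thread.sleep(100);
            worker.interrupt();   // request cancellation; does not kill the thread immediately
            worker.join();
        }
    }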

Page 29: Networking threads

THREAD SPECIFIC DATA

Most data is shared among threads, and this is one of the major benefits of using threads in the first place.

However, sometimes threads also need thread-specific data. Most major thread libraries (pThreads, Win32, Java) provide support for thread-specific data, known as thread-local storage or TLS. Note that this is more like static data than local variables, because it does not cease to exist when the function ends.
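In Java, thread-specific data is exposed through the ThreadLocal class; a minimal sketch (illustrative only; the requestId name is made up), where each thread sees its own copy of the value:

    public class ThreadLocalDemo {
        // The variable is shared (static), but every thread gets its own value.
        private static final ThreadLocal<Integer> requestId =
                ThreadLocal.withInitial(() -> 0);

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                requestId.set((int) (Math.random() * 1000));   // per-thread value
                System.out.println(Thread.currentThread().getName()
                        + " sees requestId = " + requestId.get());
            };

            Thread a = new Thread(task, "thread-A");
            Thread b = new Thread(task, "thread-B");
            a.start();
            b.start();
            a.join();
            b.join();
        }
    }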

Page 30: Networking threads

SCHEDULER ACTIVATIONS

Many implementations of threads provide a virtual processor as an interface between the user thread and the kernel thread, particularly for the many-to-many or two-tier models.

This virtual processor is known as a "Lightweight Process", LWP.

There is a one-to-one correspondence between LWPs and kernel threads.

The number of kernel threads available (and hence the number of LWPs) may change dynamically.

The application (the user-level thread library) maps user threads onto available LWPs; kernel threads are scheduled onto the real processor(s) by the OS. The kernel informs the thread library of relevant events (such as a thread about to block) through upcalls, so the library can adjust its use of LWPs; this communication mechanism is what is meant by scheduler activation.

Page 31: Networking threads

THREAD SCHEDULING

Scheduling is the method by which threads, processes or data flows are given access to system resources (e.g. processor time, communications bandwidth).

Two topics related to thread scheduling are covered here:

• Contention scope
• Pthreads

Page 32: Networking threads

CONTENTION SCOPE

One distinction between user-level and kernel-level threads lies in how they are scheduled.

On systems implementing the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an available LWP, a scheme known as process-contention scope (PCS).

To decide which kernel thread to schedule onto a CPU, the kernel uses system-contention scope (SCS); that is, the scheduler selects the runnable thread with the highest priority to run.

Page 33: Networking threads

P-THREADS

The POSIX standard ( IEEE 1003.1c ) defines the specification for pThreads, not the implementation.

PThreads are available on Solaris, Linux, Mac OSX, Tru64, and via public domain shareware for Windows.

Global variables are shared amongst all threads.

One thread can wait for the others to rejoin before continuing.

PThreads begin execution in a specified function.

Page 34: Networking threads

REFERENCES

• Operating System Concepts, 8th Edition, Abraham Silberschatz
• Wikipedia
• SlideShare
• www.google.com

Page 35: Networking threads

THANK YOU..