
Roll No.: 19mcmb19

Name: Mohd. Shehwaz

Assignment-1

Q.1

These are some of the commands I know from the Linux operating system.

ls - If you run ls, the program will list the contents of the current directory.

cp sourcefile targetfile - Copies sourcefile to targetfile.

mkdir directoryname - Creates a new directory.

more filename - Displays the contents of the file, one screen at a time.

cd directoryname - Changes the current working directory.

rm filename - Removes the specified files from the file system.

rmdir directoryname - Deletes the specified directory, provided it is already empty.

cat filename - The cat command displays the contents of a file, printing the entire contents to the screen.

cd .. - Moves out of the current directory into its parent.

touch filename - Creates a blank file.

gcc filename - Compiles a C program file.

Q.2

Comparison of images:

1. Colour Buckets:

With two pictures, scan each pixel and count the colours. For example, you might have the following 'buckets':

white

red

blue

green

black

Every time you find a 'red' pixel, you increment the red counter. Each bucket can represent a range of the colour spectrum; the finer the buckets, the more accurate the comparison.
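
A minimal sketch of the idea in C (an illustration only: 8-bit RGB pixels supplied as arrays, and a coarse 4-level quantization per channel instead of the named colour buckets):

#include <stdio.h>
#include <stdlib.h>

#define LEVELS 4                               /* 4 levels per channel -> 64 buckets */

typedef struct { unsigned char r, g, b; } Pixel;

static int bucket(Pixel p)                     /* map a pixel to its colour bucket */
{
    int r = p.r * LEVELS / 256;
    int g = p.g * LEVELS / 256;
    int b = p.b * LEVELS / 256;
    return (r * LEVELS + g) * LEVELS + b;
}

/* Total absolute difference between the two colour histograms;
   0 means the images have identical colour distributions. */
long compare_images(const Pixel *img1, const Pixel *img2, int npixels)
{
    long hist1[LEVELS * LEVELS * LEVELS] = {0};
    long hist2[LEVELS * LEVELS * LEVELS] = {0};
    long diff = 0;
    int i;

    for (i = 0; i < npixels; i++) {
        hist1[bucket(img1[i])]++;              /* increment the pixel's bucket */
        hist2[bucket(img2[i])]++;
    }
    for (i = 0; i < LEVELS * LEVELS * LEVELS; i++)
        diff += labs(hist1[i] - hist2[i]);
    return diff;
}

int main(void)
{
    Pixel a[2] = {{255, 0, 0}, {0, 0, 255}};   /* red, blue */
    Pixel b[2] = {{250, 5, 5}, {10, 10, 250}}; /* near-red, near-blue */
    printf("difference = %ld\n", compare_images(a, b, 2));
    return 0;
}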

2. Regions of Interest:

Some images may have distinctive segments/regions of interest. These regions probably contrast highly with the rest of the image, and are a good item to search for in your other images to find matches. If you have more than 2 regions of interest, you can measure the distances between them.

Q.3

These are some algorithms used in operating systems.

Disk scheduling algorithms:

FCFS Algorithm

SSTF Algorithm

SCAN Algorithm

C-SCAN Algorithm

LOOK Algorithm

C-LOOK Algorithm

For Memory allocation:

Best-fit

Worst-fit

First-fit

Process scheduling algorithms:

FCFS

Round robin

SJF

SRTF

Priority based scheduling algorithms

Q.4

Experience in AOS lab:

Lab hours are really interesting. They motivate me to write and practice more and more code. Because of the lab my logic has developed, and the assignments are also a good way to learn new things.

Q.6

Robot Operating System (ROS) is middleware: a collection of software frameworks and a large set of libraries that are required to develop powerful robot software. Robots are made to respond quickly by the algorithms used in Robot Operating System.

Q.7

For desktops, present operating systems are Windows 7, Windows 8, Windows 10, Ubuntu, ChaletOS, SteamOS, macOS, etc.

For mobile phones, present operating systems are Android Pie (version 9.0), iOS, Symbian, etc.

Roll No.: 19mcmb19

Name: Mohd. Shehwaz

Assignment-2

Q.1 Write a C program to read a file from the current directory of the user's working system and copy all the text into another file named sampletext.txt. List the number of lines of text and the list of words in output1.txt, the sorted list of the words in output2.txt, the repeated words list in output3.txt and the unique word list in output4.txt.

#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *fptr1, *fptr2;
    char filename[20];
    int c;
    int count = 0, wcount = 0;

    printf("Enter file name to read: ");
    scanf("%s", filename);
    fptr1 = fopen(filename, "r");

    printf("Enter file name to write: ");
    scanf("%s", filename);
    fptr2 = fopen(filename, "w");

    c = fgetc(fptr1);
    while (c != EOF)
    {
        fputc(c, fptr2);
        c = fgetc(fptr1);
    }
    printf("Contents copied to %s\n", filename);

    fclose(fptr2);
    fptr2 = fopen(filename, "r");   /* reopen the copy in read mode for counting */
    for (c = getc(fptr2); c != EOF; c = getc(fptr2))
    {
        if (c == '\n')
            count = count + 1;
    }
    printf("Number of lines in file = %d\n", count);

    rewind(fptr2);                  /* back to the start of the copy for the word count */
    for (c = getc(fptr2); c != EOF; c = getc(fptr2))
    {
        if (c == ' ' || c == '\n')
            wcount = wcount + 1;
    }
    printf("Number of words in file = %d\n", wcount);

    fclose(fptr1);
    fclose(fptr2);
    return 0;
}

Q.2 Write a C program to display all the files in the user's current directory and separate all executable files, program files and object files into three different directories.

#include <stdlib.h>

int main()
{
    system("ls");
    system("mkdir Text-File");
    system("mkdir C-Programs");
    system("mkdir Images");
    system("mv *.txt Text-File");
    system("mv *.c C-Programs");
    system("mv *.jpg Images");
    return 0;
}

Q.3 Write all your classmates' names into a file along with their Reg. numbers, and arrange them in sorted alphabetical order of name in one output file and in sorted order of the last two digits of the Reg. number in another output file, using (a) Linux commands with a shell script and (b) a C program.

#include <stdio.h>
#include <string.h>

int main()
{
    char name[10][20], temp[20];
    int i, j, n;

    printf("Enter the number of students: ");
    scanf("%d", &n);
    printf("Enter the names of the students: ");
    for(i = 0; i < n; i++)
    {
        scanf("%s", name[i]);
    }
    for(i = 0; i < n - 1; i++)
    {
        for(j = i + 1; j < n; j++)
        {
            if(strcmp(name[i], name[j]) > 0)
            {
                strcpy(temp, name[i]);
                strcpy(name[i], name[j]);
                strcpy(name[j], temp);
            }
        }
    }
    printf("Sorted names are:\n");
    for(i = 0; i < n; i++)
        printf("%s\n", name[i]);
    return 0;
}

Roll No.: 19mcmb19

Name:

Class Assignment

Fill in the following table: What is your understanding level in the context of C programming? Where do you use the information from these header files in your programs? Give a few examples.

assert.h: Assertions are statements used to test assumptions made by the programmer. For example, we may use an assertion to check whether the pointer returned by malloc() is NULL or not.

locale.h: The locale.h header defines location-specific settings, such as date formats and currency symbols.

stddef.h: stddef.h is a header file in the standard library of the C programming language that defines the macros NULL and offsetof as well as the types ptrdiff_t, wchar_t, and size_t.

ctype.h: The ctype.h header file of the C Standard Library declares several functions that are useful for testing and mapping characters.

math.h: C programming allows us to perform mathematical operations through the functions defined in the math.h header file, which contains various functions for performing mathematical operations such as sqrt(), pow(), ceil(), floor(), etc.

stdio.h: The standard input/output header, which contains many standard library functions for file input and output, such as printf() and scanf().

errno.h: The errno.h header file defines several macros used to report runtime errors. Many of the C library functions assign a value to the errno variable if an error occurs during function execution.

setjmp.h: setjmp.h is a header defined in the C standard library to provide "non-local jumps": control flow that deviates from the usual subroutine call and return sequence. setjmp() uses a buffer to remember the current position and returns 0.

stdlib.h: stdlib.h is the header of the general-purpose standard library of the C programming language, which includes functions involving memory allocation, process control, conversions and others.

float.h: The float.h header file of the C Standard Library contains a set of constants related to floating-point values.

signal.h: signal.h is a header file defined in the C Standard Library to specify how a program handles signals while it executes.

string.h: The string.h header defines various functions for manipulating arrays of characters.

limits.h: The macros defined in the limits.h header give the limits of the values of various variable types like char, int and long. These limits specify that a variable cannot store any value beyond them; for example, an unsigned char can store up to a maximum value of 255.

stdarg.h: stdarg.h can be used to get the arguments in a function when the number of arguments is not known, i.e. a variable number of arguments.

time.h: The time.h library provides the functions required to retrieve the system time, perform time calculations, and output formatted strings that allow the time to be displayed in a variety of common formats.

complex.h: complex.h defines several macros and functions to use with the three complex arithmetic types float _Complex, double _Complex, and long double _Complex.

fenv.h: fenv.h is a header file containing various functions and macros for manipulating the floating-point environment.
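
For instance, a small sketch touching a few of these headers (assert.h, string.h, math.h and stdio.h); the names used in it are only illustrative:

#include <assert.h>
#include <string.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    char name[16];
    strcpy(name, "aos");                              /* string.h: copy a string */
    assert(strlen(name) == 3);                        /* assert.h: abort if the assumption fails */
    printf("sqrt(2) = %f, %s\n", sqrt(2.0), name);    /* math.h and stdio.h */
    return 0;
}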

Write the meaning of the following with your understanding, using your own words:

int execl(const char *path, const char *argv0, ...);

int execvpe(const char *path, char * const *argv, char * const *envp);

int execle(const char *path, const char *argv0, ... /*, (char *)0, char * const envp[] */);

int execv(const char *path, char * const *argv);

These functions are used to replace the currently running process image with a new program. Using them after fork(), the created child process does not have to run the same program as the parent process does.

All of these functions serve the same purpose, but their syntax differs slightly in how the arguments and the environment are passed and in whether the PATH is searched.
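
A minimal sketch of how these calls are typically used (an illustration, not part of the original assignment): the child created by fork() replaces itself with "ls -l" via execv(), while the parent waits for it.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        char *args[] = {"ls", "-l", NULL};
        execv("/bin/ls", args);      /* replaces the child's image with ls */
        perror("execv failed");      /* reached only if execv fails */
        exit(1);
    } else {
        wait(NULL);                  /* parent waits for the child to finish */
        printf("child finished\n");
    }
    return 0;
}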

Write your experience on the execution of the following programs.

Program 1:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int c;

int main(void)

{

int pid;

extern int c;

c = 10;

if ((pid = fork()) == 0) {

execlp("ls", "ls", "-l", NULL);

printf("Child - c: %d\n", c);

c *= 3;

printf("Exec failed\n");

} else {

c += 3;

printf("Parent - c: %d\n", c);

}

exit(0);

}

Output:

Parent - c: 13

Explanation:

In the parent process, fork() returns the child's PID, so the if condition fails and control goes to the else part, where c becomes 13 and is printed. In the child, fork() returns 0, so execlp() replaces the child with ls -l (whose listing also appears in the output); the statements after execlp() run only if the exec fails, so "Child - c:" is never printed. Since the child receives its own copy of c at fork(), the parent's change does not affect it.

Program 2:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)

{

int pid[7], status[7];

pid[0] = getpid(); /* PID of Process P0*/

if ((pid[1] = fork()) == 0) { /* P0 creates P1 */

if ((pid[3] = fork()) == 0) { /* P1 creates P3 */

if ((pid[6] = fork()) == 0) { /* P3 creates P6 */

printf("Process P6\n"); /* P6 can run first */

} else {

waitpid(pid[6], &status[6], 0); /* P3 waits for P6 */

printf("Process P3\n");

}

} else {

waitpid(pid[3], &status[3], 0); /* P1 waits for P3 */

printf("Process P1\n");

if ((pid[4] = fork()) == 0) { /* P1 creates P4 */

printf("Process P4\n");

}

}

} else {

waitpid(pid[1], &status[1], 0); /* P0 waits for P1 */

printf("Process P0\n");

if ((pid[2] = fork()) == 0) { /* P0 creates P2 */

if ((pid[5] = fork()) == 0) { /* P2 creates P5 */

printf("Process P5\n");

} else {

waitpid(pid[5], &status[5], 0); /* P2 waits for P5 */

printf("Process P2\n");

}

} else {

waitpid(pid[2], &status[2], 0); /* P0 waits for P2 */

}

}

exit(0);

}

Output:

Process P6

Process P3

Process P1

Process P0

Process P4

Process P5

Process P2

_____________________________________________________________________________________

Roll No.: 19mcmb19

Name:

Course:

Assignment-3

Q.1 Write the details of and practice the following commands:

(a) ps (b) kill (c) nice (d) df (e) free

(a) ps command: The ps (process status) command is used to provide information about the currently running processes, including their process identification numbers (PIDs).

(b) kill command: To terminate processes without having to log out or reboot the computer.

(c) nice command: nice is used to invoke a utility or shell script with a particular CPU priority, thus giving the process more or less CPU time than other processes. It ranges from minus 20 to plus 19 and can take only integer values. A value of minus 20 represents the highest priority level, whereas 19 represents the lowest. The default niceness for processes is inherited from its parent process and is usually 0.

(d) df command: df command displays the amount of disk space available on the file system containing each file name argument.

(e) free command: The free command displays the total amount of free memory available, along with the amount of memory used, the swap memory in the system, and also the buffers used by the kernel.

(i) fg jobname: This command brings the mentioned job running in the background to the foreground.

(ii) top: top command is used to show the Linux processes. It provides a dynamic real-time view of the running system. Usually, this command shows the summary information of the system and the list of processes or threads which are currently managed by the Linux Kernel.

(iii) ps ux, ps PID difference: ps ux gives full information (USER, PID, %CPU, %MEM, VSZ, RSS, TTY, STAT, START, TIME, COMMAND) for all the processes running in the system, while ps PID gives information about PID, TTY, STAT, TIME and COMMAND for only the specified PID.

(iv) kill PID: Terminates the process without having to log out or reboot the computer.

(v) pidof processname: Gives PID of mentioned processname.

Example:

~$ pidof bash

Output:

1693

(vi) nice -n nicevalue processname: Starts the given process with the specified nice value (priority).

(vii) renice nicevalue -p PID: Updates the nice value of the process with the given PID.

Example:

~$ renice 1 -p 1693

Output:

1693: old priority 0, new priority 1

(viii) df -h: Displays the disk space usage of each mounted file system in human-readable units (K, M, G).

(ix) free -m: Displays the memory and swap usage in megabytes.

(x) free -g: Displays the memory and swap usage in gigabytes.

____________________________________________________________________________________

Roll No.: 19mcmb19

Name:

Course:

Assignment-4

Gcc Compilation process:

Step 1 - Pre-processor:

The C preprocessor or cpp is the macro preprocessor for the C and C++ computer programming languages. The preprocessor provides the ability for the inclusion of header files, macro expansions, conditional compilation, and line control.

1). Removing comments

2). Expanding macros

3). Expanding included files

Step 2 - Compiler:

It takes the output of the preprocessor and generates assembly language, an intermediate human readable language, specific to the target processor.

Step 3 - Assembler:

Assembly is the third step of compilation. The assembler will convert the assembly code into pure binary code or machine code (zeros and ones). This code is also known as object code.

Step 4 - Linker:

Linking is the final step of compilation. The linker merges all the object code from multiple modules into a single one. If we are using a function from libraries, linker will link our code with that library function code.
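
These stages can be observed one at a time with gcc itself (a sketch; hello.c is an assumed example file):

gcc -E hello.c -o hello.i   (preprocessed source)
gcc -S hello.i -o hello.s   (assembly from the compiler)
gcc -c hello.s -o hello.o   (object code from the assembler)
gcc hello.o -o hello        (linked executable)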

LAB PRACTICE:

1. Explain the difference between mv filea fileb, cp filea fileb and ln filea fileb

mv file-a file-b:

Moves a file or directory from one place to another. Multiple files can be moved together. This command can overwrite the contents at the destination.

Parameters: -b (backup), -i (prompt before overwriting)

cp file-a file-b:

Copies a file from one directory to another. Multiple files can be copied from one directory to another. It can forcibly copy the contents to the destination.

Parameters: -i (interactive), -b (backup), -f (force)

ln file-a file-b:

Used to create links between two files. It can create symlinks to directories. You can specify the directory in which the link is created.

Parameters: -L (logical), -t (create links in the given target directory), -r (relative)

2. Explain the 'ls -l' command in detail.

In computing, ls is a command to list computer files in Unix and Unix-like operating systems. The ls command is used for viewing the types of files, the contents of a directory, the date and time of file creation and much more; with the -l option it prints a long listing that shows the file type and permissions, link count, owner, group, size and modification time of each entry.

Write a program in C that produces the same output as the ls -l command without using ls.

#include <stdio.h>
#include <dirent.h>

int main(void)

{

struct dirent *de; // Pointer for directory entry

// opendir() returns a pointer of DIR type.

DIR *dr = opendir(".");

if (dr == NULL) // opendir returns NULL if couldn't open directory

{

printf("Could not open current directory" );

return 0;

}

// Refer http://pubs.opengroup.org/onlinepubs/7990989775/xsh/readdir.html

// for readdir()

while ((de = readdir(dr)) != NULL)

printf("%s\n", de->d_name);

closedir(dr);

return 0;

}

3. Explain the following code and write the output of the program.

#include <stdio.h>
#include <signal.h>
#include <setjmp.h>

jmp_buf sjbuf;

int main(){

int onintr();

if(signal(SIGINT, SIG_IGN)!=SIG_IGN)

signal(SIGINT,onintr);

setjmp(sjbuf);

for(;;){

}

return 0;

}

onintr(){

signal(SIGINT,onintr);

printf("\nInterrupt\n");

//longjmp(sjbuf,0);

}

Explanation:

The signal.h header file is used for handling signals sent to the program or any interrupt made during program execution. The setjmp.h header provides non-local jumps that deviate from the usual subroutine call and return sequence: setjmp() saves the current position in a buffer for a later jump, returning 0 when called directly and a non-zero value when the return happens because of a call to longjmp. SIGINT is the signal generated by a keyboard interrupt, and SIG_IGN is the handler value used to check for (or set) the "signal ignored" status. The function onintr() is installed as the SIGINT handler, so whenever the user presses Ctrl+C it re-installs itself and prints the message "Interrupt", while the main program stays in its infinite loop.

4. List the signals used in a C program to communicate with the operating system.

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

char *tempfile = "temp.XXXXXX";

main(){

extern onintr();

if(signal(SIGINT,SIG_IGN)!=SIG_IGN)

signal(SIGINT,onintr);

printf("Making file \n");

mktemp(tempfile);

exit(0);

}

onintr(){

unlink(tempfile);

exit(1);

}

The terms used to communicate with operating systems are:

1. onintr()

2. SIGINT

3. SIG_IGN

4. mktemp()

5. unlink()

5. Explain and execute the following program:

#include <signal.h>

system(s)    /* run command line s */
char *s;
{
    int status, pid, w;
    int (*istat)(), (*qstat)();

    if((pid = fork()) == 0){
        execlp("sh", "sh", "-c", s, (char *)0);
        exit(127);
    }
    istat = signal(SIGINT, SIG_IGN);
    qstat = signal(SIGQUIT, SIG_IGN);
    while((w = wait(&status)) != pid && w != -1)
        ;
    if(w == -1)
        status = -1;
    signal(SIGINT, istat);
    signal(SIGQUIT, qstat);
    return status;
}

Explanation:

This is a simple implementation of the system() library routine. The child created by fork() runs the command line s through the shell (sh -c), while the parent ignores SIGINT and SIGQUIT so that keyboard interrupts go to the command, waits until that particular child terminates, restores the original signal handlers and returns the command's exit status.

6. Explain and execute the following program:

#include <stdio.h>
#include <signal.h>

int pid;

char *progname;

main(argc,argv)

int argc;

char *argv[];

{

int sec=10, status,onalarm();

progname=argv[0];

if(argc>1 && argv[1][0]=='-'){

sec=atoi(&argv[1][1]);

argc--;

argv++;

}

if(argc<2)

error("Usage: %s[-10] commnad",progname);

if((pid=fork())==0){

execvp(argv[1],&argv[1]);

error("Couldnt start %s", argv[1]);

}

signal(SIGALRM,onalarm);

alarm(sec);

if(wait(&status)==-1 || (status & 0177)!=0)

error(" %s killed",argv[1]);

exit((status>>8)&0377);

}

onalarm(){

kill(pid, SIGKILL);

}

7. Can you infer how sleep is implemented? Under what circumstances, if any, could sleep and alarm interfere with each other? Explain with a good example program.

Pseudo-code for my implementation of sleep():

// Signal handler for SIGALRM

void sig_alrm(int sig)

{

/* do some stuff */

}

// My_sleep()

unsigned int my_sleep(unsigned int seconds)

{

// Set signal handler for SIGALRM

...

...

alarm(seconds);

...

pause();

...

...

}

#include <setjmp.h>
#include <signal.h>
#include <unistd.h>

static jmp_buf env;

static void sig_alarm_handler(int sig)

{

longjmp(env, 1);

}

unsigned int Mysleep2(unsigned int seconds)

{

if (signal(SIGALRM, sig_alarm_handler) == SIG_ERR)

return(seconds);

if (setjmp(env) == 0) {

alarm(seconds); /* start the timer */

pause(); /* suspend the process until a signal is caught */

}

return(alarm(0)); /* turn off timer, return un-slept time */

}

If the alarm goes off before pause() is called, the longjmp ensures that the process does not hang forever. Suppose we call Mysleep2() for n seconds and, during those n seconds, another signal (say SIGINT, generated by the user through CTRL+C) occurs and is being handled by its own handler. Then, as soon as the n-second alarm expires, the call to longjmp() from the alarm handler will cause the handler of SIGINT to be aborted abruptly.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>

volatile int breakflag = 3;

void handle(int sig) {

printf("Hello\n");

--breakflag;

alarm(1);

}

int main() {

signal(SIGALRM, handle);

alarm(1);

while(breakflag) { sleep(1); }

printf("done\n");

return 0;

}

OUTPUT:

Hello
Hello
Hello
done

8. What kind of modification is needed in the following program to execute correctly? What output do you get? Explain how and why in detail.

$ cat init.c

#include "hoc.h"
#include "y.tab.h"
#include <math.h>

extern double Log(), Log10(), Exp(), Sqrt(), integer();

static struct{

char *name;

double cval;

} consts[]={

"PI", 3.1415,

"E", 2.71828,

"GAMMA", 0.57721,

"DEG", 57.29577,

"PHI", 1.6180,

0, 0

};

static struct{

char *name;

double (*func)();

}builtins[]={

"sin", sin,

"cos", cos,

"atan", atan,

"log", Log,

"log10",Log10,

"exp", Exp,

"sqrt", Sqrt,

"int", integer,

"abs", fabs,

0,0

};

init(){

int i;

Symbol *s;

for(i=0;consts[i].name;i++)

install(consts[i].name, s,consts[i].cval);

for(i=0;builtins[i].name;i++){

s=install(builtins[i].name,BLTIN,0.0);

s->u.ptr=builtins[i].func;

}

}

}

9. Explain the following piece of code, how do you execute the following code

Proc fib(){

a=0;

b=1;

while(b< $1){

print b;

c=b;

b=a+b;

a=c;

}

print "\n"

}

Explanation:

The above code prints the Fibonacci series up to the limit passed in as $1. Starting with a = 0 and b = 1, each iteration prints b and then replaces b with the sum of the two previous terms.

Example (for a limit of 20): 1 1 2 3 5 8 13
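
A C translation of the procedure above may make it easier to run; as an assumption for illustration, the limit plays the role of $1 and is taken from the command line:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int limit = (argc > 1) ? atoi(argv[1]) : 100;   /* $1, defaulting to 100 */
    int a = 0, b = 1, c;

    while (b < limit) {
        printf("%d ", b);    /* print the current term */
        c = b;
        b = a + b;           /* next term is the sum of the previous two */
        a = c;
    }
    printf("\n");
    return 0;
}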

10.

#include <signal.h>
#include <errno.h>

/*

* <signal.h> usually defines NSIG to include signal number 0.

*/

#define SIGBAD(signo) ((signo) <= 0 || (signo) >= NSIG)

int sigaddset(sigset_t *set, int signo)

{

if (SIGBAD(signo)) {

errno = EINVAL;

return(-1);

}

*set |= 1 << (signo - 1); /* turn bit on */

return(0);

}

int

sigdelset(sigset_t *set, int signo)

{

if (SIGBAD(signo)) {

errno = EINVAL;

return(-1);

}

*set &= ~(1 << (signo - 1)); /* turn bit off */

return(0);

}

int sigismember(const sigset_t *set, int signo)

{

if (SIGBAD(signo)) {

errno = EINVAL;

return(-1);

}

return((*set & (1 << (signo - 1))) != 0);

}

#include "apue.h"
#include <pthread.h>

pthread_t ntid;

void printids(const char *s)

{

pid_t pid;

pthread_t tid;

pid = getpid();

tid = pthread_self();

printf("%s pid %lu tid %lu (0x%lx)\n", s, (unsigned long)pid,

(unsigned long)tid, (unsigned long)tid);

}

void * thr_fn(void *arg)

{

printids("new thread: ");

return((void *)0);

}

int main(void)

{

int err;

err = pthread_create(&ntid, NULL, thr_fn, NULL);

if (err != 0)

err_exit(err, "can't create thread");

printids("main thread:");

sleep(1);

exit(0);

}

____________________________________________________________________________________


Roll No.: 19mcmb19

Name:

Course:

Q.1 What is a system call? Write programs using the following system calls of the UNIX operating system: fork, exec, getpid, exit, wait, close, stat, opendir, readdir.

Program to implement fork()

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

void main(){
    int i;
    pid_t pid;
    fork();              /* creating child process */
    pid = getpid();
    printf("%d", pid);
}

Synopsis:

A process runs as a single unit if it has not created any sub-process; when a program is executed, it initially runs as a single process. The fork() function is used to create another sub-process, called the child process. The child process follows the execution of the parent process and continues executing from the point of the fork. Child processes are always created with the fork function, and the resulting processes form a tree. The above program declares a pid variable and creates a child process. fork() returns 0 in the child process and the child's PID in the parent, but since the program prints getpid(), each process prints its own process id; the process id is therefore printed two times, once by the child process and once by the parent process.

Basic program to implement a child parent program

#include <stdio.h>
#include <unistd.h>

int main()
{
    int pid;
    pid = fork();
    fork();
    printf("Hello");
    printf("Parent\n");
    printf("Child\n");
    return 0;
}

Synopsis:

The above program uses fork() to create child processes. Since fork() is called twice, four processes run in total (the original parent and three descendants). The output of the above program is

"Hello Parent Child"

printed four times, because every one of those processes executes the same printf statements after the fork() calls; the exact interleaving of the lines depends on scheduling.

Program to print parent and child process on the basis of process id.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_COUNT 10

#define BUF_SIZE 100

void main(){
    int i;
    pid_t pid;
    char buf[BUF_SIZE];

    fork();                       /* creating child process */
    pid = getpid();
    for(i = 1; i <= MAX_COUNT; i++){
        sprintf(buf, "This line is from pid %d, value=%d\n", pid, i);
        write(1, buf, strlen(buf));
    }
}

Synopsis:

First of all, I have taken a variable of type pid_t, which is used to hold the process id obtained from the operating system. I have used fork() so that the program splits and executes in both the child and the parent. The loop continues until the iterator reaches MAX_COUNT. In each iteration, the process formats its process id and the current value into the buffer with sprintf(), and write() sends the buffer to standard output. The parent process therefore prints its process id along with the values, and the child process prints the child's process id and its values.

Implementing child and parent process using different functions

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>
#define MAX 20

void ChildProcess(void);
void ParentProcess(void);

void main()
{
    pid_t pid;
    pid = fork();
    if(pid == 0)
        ChildProcess();
    else
        ParentProcess();
}

void ChildProcess(void){
    int i;
    for(i = 1; i <= MAX; i++){
        printf("This line is from child, value=%d\n", i);
    }
    printf("*** Child Process is done ***\n");
}

void ParentProcess(void){
    int i;
    for(i = 1; i <= MAX; i++){
        printf("This line is from parent, value=%d\n", i);
    }
    printf("*** Parent Process is done ***\n");
}

Synopsis:

The above program again demonstrates child and parent processes. There are two functions, one executed by the child process and one by the parent process: the function for the child process is ChildProcess(void) and the function for the parent process is ParentProcess(void). A variable of type pid_t is declared to hold the return value of fork. When fork() is called, a new child process is created in the system. A condition is then applied: when pid == 0, the child process path executes, and when pid != 0, the parent process path executes. In this way the child process prints its values, while in the parent fork() returns a non-zero value, which leads to the parent function being executed with its associated values.

Q.2 What is the purpose of I/O system calls? Write programs using the I/O system calls of the UNIX operating system (open, read, write, etc.).

Open():

Used to open an existing file for reading/writing or to create a new file. Returns a file descriptor whose value is negative on error. The mandatory flags are O_RDONLY, O_WRONLY and O_RDWR; optional flags include O_APPEND, O_CREAT, O_TRUNC, etc. The flags are ORed together. The mode specifies the permissions for the file.

Creat():

Used to create a new file and open it for writing. It is equivalent to open() with the flags O_WRONLY | O_CREAT | O_TRUNC.

Read():

Reads a number of bytes from the file or from the terminal. If the read is successful, it returns the number of bytes read and the file offset is incremented by that amount. If end-of-file is encountered, it returns 0.

Write():

Writes a number of bytes to the file. After a successful write, the file's offset is incremented by the number of bytes written. If there is an error, for example due to insufficient storage space, the write fails.

Close():

Closes an opened file. When a process terminates, the files associated with the process are automatically closed.

Program

/* File creation - fcreate.c */

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

main(int argc, char *argv[])
{
    int fd, n;
    char buf[100];

    if (argc != 2)
    {
        printf("Usage: ./a.out <filename>\n");
        exit(-1);
    }
    fd = open(argv[1], O_WRONLY|O_CREAT|O_TRUNC, 0644);
    if (fd < 0)
    {
        printf("File creation problem\n");
        exit(-1);
    }
    printf("Press Ctrl+D at end in a new line:\n");
    while ((n = read(0, buf, sizeof(buf))) > 0)
    {
        write(fd, buf, n);
    }
    close(fd);
}

Output

$ gcc fcreate.c

$ ./a.out hello

File I/O

Q.3 How many processes need to run to execute the grep, awk and ls commands? Write C programs to simulate UNIX commands like ls, grep, etc.

Program

/* ls command simulation - list.c */

#include <stdio.h>
#include <dirent.h>
#include <unistd.h>

main()
{
    struct dirent **namelist;
    int n, i;
    char pathname[100];

    getcwd(pathname, sizeof(pathname));
    n = scandir(pathname, &namelist, 0, alphasort);
    if(n < 0)
        printf("Error\n");
    else
        for(i = 0; i < n; i++)
            if(namelist[i]->d_name[0] != '.')
                printf("%-20s", namelist[i]->d_name);
}

Output:

$ gcc list.c -o list

$ ./list

cmdpipe.c consumer.c

a.out

dirlist.c ex6a.c ex6b.c

ex6c.c ex6d.c exec.c

fappend.c fcfs.c fcreate.c

fork.c fread.c hello

list list.c pri.c

producer.c rr.c simls.c

sjf.c stat.c wait.c

grep command

To simulate grep command using UNIX system call.

Algorithm

1. Get filename and search string as command-line argument.

2. Open the file in read-only mode using open system call.

3. If file does not exist, then stop.

4. Let length of the search string be n.

5. Read line-by-line until end-of-file

a. Check to find out the occurrence of the search string in a line by

examining characters in the range 1–n, 2–n+1, etc.

b. If search string exists, then print the line.

6. Close the file using close system call.

7. Stop.

Result

Thus the program simulates grep command by listing lines containing the search text.

Program

/* grep command simulation - mygrep.c */

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

main(int argc, char *argv[])
{
    FILE *fd;
    char str[100];
    char temp[30];
    int c;
    int i, m, k;

    if(argc != 3)
    {
        printf("Usage: gcc mygrep.c -o mygrep\n");
        printf("Usage: ./mygrep <search_text> <filename>\n");
        exit(-1);
    }
    fd = fopen(argv[2], "r");
    if(fd == NULL)
    {
        printf("%s does not exist\n", argv[2]);
        exit(-1);
    }
    while(!feof(fd))
    {
        i = 0;
        while(1)
        {
            c = fgetc(fd);
            if(feof(fd))
            {
                str[i++] = '\0';
                break;
            }
            if(c == '\n')
            {
                str[i++] = '\0';
                break;
            }
            str[i++] = c;
        }
        if(strlen(str) >= strlen(argv[1]))
            for(k = 0; k <= strlen(str) - strlen(argv[1]); k++)
            {
                for(m = 0; m < strlen(argv[1]); m++)
                    temp[m] = str[k+m];
                temp[m] = '\0';
                if(strcmp(temp, argv[1]) == 0)
                {
                    printf("%s\n", str);
                    break;
                }
            }
    }
}

Output:

$ gcc mygrep.c -o mygrep

$ ./mygrep printf dirlist.c

printf("Usage: ./a.out \n");

printf("%s\n", dptr->d _name);

Q.4 What is a scheduler? How many types of schedulers are available? Given the list of processes, their CPU burst times and arrival times, display/print the Gantt chart for FCFS and SJF. For each of the scheduling policies, compute and print the average waiting time and average turnaround time.

PROCESS SCHEDULING:

CPU scheduling is used in multiprogrammed operating systems.

By switching the CPU among processes, the efficiency of the system can be improved.

Some scheduling algorithms are FCFS, SJF, Priority, Round-Robin, etc.

A Gantt chart provides a way of visualizing CPU scheduling and enables better understanding.

First Come First Serve (FCFS):

The process that comes first is processed first.

FCFS scheduling is non-preemptive.

It is not efficient, as it results in a long average waiting time.

It can result in starvation if processes at the beginning of the queue have long bursts.

Shortest Job First (SJF):

The process that requires the smallest burst time is processed first.

SJF can be preemptive or non-preemptive.

When two processes require the same amount of CPU time, FCFS is used to break the tie.

SJF is generally efficient, as it results in a minimal average waiting time.

It can result in starvation, since long critical processes may not be processed.

Burst time for process P1 (in ms) : 10

Burst time for process P2 (in ms) : 4

Burst time for process P3 (in ms) : 11

Burst time for process P4 (in ms) : 6

GANTT Chart

| P1 | P2 | P3 | P4 |

-----------------------

0 10 14 25 31

Average waiting time : 12.25ms

Average turn around time : 20.00ms

FCFS Scheduling

Process  B-Time  T-Time  W-Time
P1       10      10      0
P2       4       14      10
P3       11      25      14
P4       6       31      25

GANTT Chart

| P1 | P2 | P3 | P4 |

-----------------------

0 10 14 25 31

Average waiting time : 12.25ms

Average turn around time : 20.00ms
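
As a quick check of the figures above, here is a minimal FCFS sketch (assuming all processes arrive at time 0 and using the burst times from the example); sorting the burst array first would give the SJF figures in the same way.

#include <stdio.h>

int main(void)
{
    int burst[] = {10, 4, 11, 6};          /* P1..P4 */
    int n = 4, i, wait = 0, turn, twait = 0, tturn = 0;

    for (i = 0; i < n; i++) {
        turn = wait + burst[i];            /* turnaround = waiting + burst */
        twait += wait;
        tturn += turn;
        printf("P%d  burst=%2d  wait=%2d  turnaround=%2d\n",
               i + 1, burst[i], wait, turn);
        wait += burst[i];                  /* next process starts when this one ends */
    }
    printf("Average waiting time    : %.2f ms\n", (float)twait / n);
    printf("Average turnaround time : %.2f ms\n", (float)tturn / n);
    return 0;
}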

Q.5 Given the list of processes, their CPU burst times and arrival times. Display/print the Gantt chart for Priority and Round robin. For each of the scheduling policies, compute and print the average waiting time and average turnaround time.

Priority:

The process that has the higher priority is processed first.

Priority scheduling can be preemptive or non-preemptive.

When two processes have the same priority, FCFS is used to break the tie.

It can result in starvation, since low-priority processes may not be processed.

Round Robin:

All processes are processed one by one in the order they arrived, but in rounds. Each process cannot take more than the time slice per round.

Round robin is a fair preemptive scheduling algorithm.

A process that is yet to complete in a round is preempted after the time slice and put at the end of the queue.

When a process is completely processed, it is removed from the queue.

Burst time for process P1 (in ms) : 10

Priority for process P1 : 3

Burst time for process P2 (in ms) : 7

Priority for process P2 : 1

Burst time for process P3 (in ms) : 6

Priority for process P3 : 3

Burst time for process P4 (in ms) : 13

Priority for process P4 : 4

Burst time for process P5 (in ms) : 5

Priority for process P5 : 2

Priority Scheduling:

Process  B-Time  Priority  T-Time  W-Time
P2       7       1         7       0
P5       5       2         12      7
P1       10      3         22      12
P3       6       3         28      22
P4       13      4         41      28

GANTT Chart

| P2 | P5 | P1 | P3 | P4 |

----------------------------

0 7 12 22 28 41

Average waiting time : 13.80ms

Average turn around time : 22.00ms

Round-robin:

Burst time for process P1 : 10
Burst time for process P2 : 29
Burst time for process P3 : 3
Burst time for process P4 : 7
Burst time for process P5 : 12

Enter the time slice (in ms) : 10

Round Robin Scheduling

P1 | P2 | P3 | P4 | P5 | P2 | P5 | P2 |

------------------------------------------

0 10 20 23 30 40 50 52 61

Process  Burst  Trnd  Wait
P1       10     10    0
P2       29     61    32
P3       3      23    20
P4       7      30    23
P5       12     52    40

Average waiting time : 23.00 ms

Average turn around time : 35.20 ms

Q.6 Develop application using Inter-Process Communication (using shared memory, pipes or message queues).

Inter-Process Communication (IPC) is the mechanism whereby one process can communicate with another process, i.e. exchange data.

IPC in linux can be implemented using pipe, shared memory, message queue, semaphore, signal or sockets.

Pipe:

Pipes are unidirectional byte streams which connect the standard output from one process into the standard input of another process. A pipe is created using the system call pipe that returns a pair of file descriptors. The descriptor pfd[0] is used for reading and pfd[1] is used for writing. Can be used only between parent and child processes.
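
A minimal pipe sketch (an illustration of the description above; the message text is arbitrary): the parent writes into pfd[1] and the child reads from pfd[0].

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int pfd[2];
    char buf[32];
    const char *msg = "hello";

    pipe(pfd);                           /* pfd[0] = read end, pfd[1] = write end */
    if (fork() == 0) {                   /* child: reads from the pipe */
        close(pfd[1]);
        read(pfd[0], buf, sizeof(buf));
        printf("Child read: %s\n", buf);
    } else {                             /* parent: writes into the pipe */
        close(pfd[0]);
        write(pfd[1], msg, strlen(msg) + 1);
    }
    return 0;
}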

Shared memory:

Two or more processes share a single chunk of memory to communicate. Semaphores are generally used to avoid race conditions amongst the processes. It is the fastest amongst all IPCs, as the data exchange itself does not require any system call once the memory is attached, and it avoids copying data unnecessarily.

It avoids copying data unnecessarily.
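
A minimal System V shared memory sketch (the key 1234 and the sleep-based ordering are only illustrative; a semaphore would normally provide the synchronization):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    int shmid = shmget(1234, 64, IPC_CREAT | 0666);   /* create a 64-byte segment */
    char *mem = (char *)shmat(shmid, NULL, 0);        /* attach it before forking */

    if (fork() == 0) {                 /* child: reads the segment */
        sleep(1);                      /* crude wait so the parent writes first */
        printf("Child read: %s\n", mem);
        shmdt(mem);
    } else {                           /* parent: writes into the segment */
        strcpy(mem, "hello via shared memory");
        wait(NULL);
        shmdt(mem);
        shmctl(shmid, IPC_RMID, NULL); /* remove the segment */
    }
    return 0;
}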

Message Queue:

A message queue is a linked list of messages stored within the kernel. A message queue is identified by a unique identifier. Every message has a positive long integer type field, a non-negative length, and the actual data bytes.

The messages need not be fetched on FCFS basis. It could be based on type field.

Semaphores: A semaphore is a counter used to synchronize access to shared data amongst multiple processes. To obtain a shared resource, the process should:

Test the semaphore that controls the resource.

If value is positive, it gains access and decrements value of semaphore.

If value is zero, the process goes to sleep and awakes when value is > 0.

When a process relinquishes resource, it increments the value of semaphore by 1.

Producer-Consumer problem:

A producer process produces information to be consumed by a consumer process A producer can produce one item while the consumer is consuming another one. With bounded-buffer size, consumer must wait if buffer is empty, whereas

producer must wait if buffer is full. The buffer can be implemented using any IPC facility.

Program to exchange messages between a server and a client using a message queue.

/* Server chat process - srvmsg.c */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct mesgq
{
    long type;
    char text[200];
} mq;

main()

{

int msqid, len;

key_t key = 2013;

if((msqid = msgget(key, 0644|IPC_CREAT)) == -1)

{

perror("msgget");

exit(1);

}

printf("Enter text, ^D to

quit:\n"); mq.type = 1;

while(fgets(mq.text, sizeof(mq.text), stdin) != NULL)

{

len = strlen(mq.text);

if (mq.text[len-1] == '\n')

mq.text[len-1] = '\0';

msgsnd(msqid, &mq, len+1, 0);

msgrcv(msqid, &mq, sizeof(mq.text), 0, 0);

printf("From Client: \"%s\"\n", mq.text);

}

msgctl(msqid, IPC_RMID, NULL);

}

Client

/* Client chat process - climsg.c */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct mesgq
{
    long type;
    char text[200];
} mq;

main()

{

int msqid, len;

key_t key = 2013;

if ((msqid = msgget(key, 0644)) == -1)

{

printf("Server not

active\n"); exit(1);

}

printf("Client ready :\n");

while (msgrcv(msqid, &mq, sizeof(mq.text), 0, 0) != -1)

{

printf("From Server: \"%s\"\n", mq.text);

fgets(mq.text, sizeof(mq.text), stdin);
len = strlen(mq.text);

if (mq.text[len-1] == '\n')

mq.text[len-1] = '\0';

msgsnd(msqid, &mq, len+1, 0);

}

printf("Server Disconnected\n");

}

Output:

Server

$ gcc srvmsg.c -o srvmsg

$ ./srvmsg

Enter text, ^D to quit:
hi
From Client: "hello"
Where r u?
From Client: "I'm where I am"
bye
From Client: "ok"

Client

$ gcc climsg.c -o climsg

$ ./climsg

Client ready:

From Server: "hi"
hello
From Server: "Where r u?"
I'm where i am
From Server: "bye"
ok

Server Disconnected

7. Implement the Producer-Consumer problem using semaphores (using UNIX system calls)

Program:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/shm.h>

#define N 5

#define BUFSIZE 1

#define PERMS 0666

int *buffer;

int nextp = 0, nextc = 0;

/* semaphore variables */

int mutex, full, empty;

void producer()

{

int data;

if(nextp == N)

nextp = 0;

printf("Enter data for producer to produce :");

scanf("%d",(buffer + nextp));

nextp++;

}

void consumer()

{

int g;

if(nextc == N)

nextc = 0;

g = *(buffer + nextc++);

printf("\nConsumer consumes data %d", g);

}

void sem_op(int id, int value)

{

struct sembuf op;
int v;

op.sem_num = 0;

op.sem_op = value;

op.sem_flg = SEM_UNDO;

if((v = semop(id, &op, 1)) < 0)

printf("\nError executing semop instruction");

}

void sem_create(int semid, int initval)

{

int semval;

union semun

{

int val;

struct semid_ds *buf;

unsigned short *array;

} s;

s.val = initval;

if((semval = semctl(semid, 0, SETVAL, s)) < 0)

printf("\nError in executing semctl");

}

void sem_wait(int id)

{

int value = -1;

sem_op(id, value);

}

void sem_signal(int id)

{

int value = 1;

sem_op(id, value);

}

main()

{

int shmid, i;

pid_t pid;

if((shmid = shmget(1000, BUFSIZE, IPC_CREAT|PERMS)) < 0)

{

printf("\nUnable to create shared

memory"); return;

}

if((buffer = (int*)shmat(shmid, (char*)0, 0)) == (int*)-1)

{

printf("\nShared memory allocation

error\n"); exit(1);

}

if((mutex = semget(IPC_PRIVATE, 1, PERMS|IPC_CREAT)) == -1)

{

printf("\nCan't create mutex

semaphore"); exit(1);

}

if((empty = semget(IPC_PRIVATE, 1, PERMS|IPC_CREAT)) == -1)

{

printf("\nCan't create empty

semaphore"); exit(1);

}

if((full = semget(IPC_PRIVATE, 1, PERMS|IPC_CREAT)) == -1)

{

printf("\nCan't create full

semaphore"); exit(1);

}

sem_create(mutex, 1);

sem_create(empty, N);

sem_create(full, 0);

if((pid = fork()) < 0)

{

printf("\nError in process

creation"); exit(1);

}

else if(pid > 0)

{

for(i=0; i<N; i++)

{

sem_wait(empty);

sem_wait(mutex);

producer();

sem_signal(mutex);

sem_signal(full);

}

}

else if(pid == 0)

{

for(i=0; i<N; i++)

{

sem_wait(full);

sem_wait(mutex);

consumer();

sem_signal(mutex);

sem_signal(empty);

}

printf("\n");

}

}

Output:

Enter data for producer to produce : 5

Enter data for producer to produce : 8

Consumer consumes data 5

Enter data for producer to produce : 4

Consumer consumes data 8

Enter data for producer to produce : 2

Consumer consumes data 4

Enter data for producer to produce : 9

Consumer consumes data 2

Consumer consumes data 9

Roll No.: 19mcmb19

Name:

Course:

Assignment-6

1. Describe the general strategy to define priority structure in a RTOS.

A real time operating system (RTOS) is a system that allows tasks to be completed within predictable timing constraints. An RTOS has several characteristics such as multitasking, priorities, predictable task behaviour, synchronization, priority inheritance and known behaviour. The major problem in an RTOS is the critical section problem: it is difficult to make resources available to all processes within a deterministic, predefined time constraint (deadline) according to their priorities. Priority-based scheduling in an RTOS is therefore usually studied together with semaphore-based solutions to the critical section problem and semaphore-based approaches for task synchronization. A real time operating system is an operating system intended to serve real-time application requests; it supports applications that must meet deadlines in addition to providing logically correct results. The main property of a real-time system is feasibility: the guarantee that tasks always meet their deadlines when scheduled according to the chosen policy.

2. How does one determine “schedulability” in RTOS? In which context it is required?

The term scheduling analysis in real-time computing includes the analysis and testing of the scheduler system and the algorithms used in real-time applications. In computer science, real-time scheduling analysis is the evaluation, testing and verification of the scheduling system and the algorithms used in real-time operations; for critical operations, a real-time system must be tested and verified for performance (this testing and verification is also known as model checking). A real-time scheduling system is composed of the scheduler, the clock and the processing hardware elements. In a real-time system, a process or task has schedulability if it is accepted by the real-time system and completed as specified by the task deadline, depending on the characteristics of the scheduling algorithm.[1] Modeling and evaluation of a real-time scheduling system is concerned with analysing the algorithm's capability to meet a process deadline. A deadline is defined as the time by which a task must be processed; for example, in a real-time scheduling algorithm a deadline could be set to five nano-seconds, and in a critical operation the task must be processed within that time. A task in a real-time system must be completed "neither too early nor too late".[2] A system is said to be unschedulable when tasks cannot meet the specified deadlines.[3] A task can be classified as either a periodic or an aperiodic process.

3. List all the files in the current directory with names not exceeding two characters.
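
One possible answer, as a sketch using shell wildcards (? matches exactly one character in a file name):

ls -d ? ??

Here ? matches one-character names and ?? matches two-character names, so together they list files whose names do not exceed two characters; -d keeps ls from descending into any matching directories.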

4. What is a kernel? Describe the steps involved in booting.

A kernel is the core component of an operating system. Using interprocess communication and system calls, it acts as a bridge between applications and the data processing performed at the hardware level. When an operating system is loaded into memory, the kernel loads first and remains in memory until the operating system is shut down again. The kernel is responsible for low-level tasks such as disk management, task management and memory management.

A computer kernel interfaces between the three major computer hardware components, providing services between the application/ user interface and the CPU, memory and other hardware I/O devices. The kernel provides and manages computer resources allowing other programs to run and use these resources. The kernel also sets up memory address space for applications, loads files with application code into memory, sets up the execution stack for programs and branches out to particular locations inside programs for execution.

There are five types of kernels:

1. Monolithic Kernels

2. Microkernels

3. Hybrid Kernels

4. Nano Kernels

5. Exo Kernels

Steps involved in booting:

1. BIOS

a. POST

b. Boot sequence

2. MBR

a. Components of MBR

b. Partition Table

3. Hard disk boot

a. Volume Boot record

b. Extended partition and Logical Partition.

c. Boot Loader

4. Operating System

5. Describe how the user and kernel space is divided and used in UNIX operating system.

The user space is the set of locations where normal user processes run (i.e. everything other than the kernel). The role of the kernel is to keep the applications running in this space from messing with each other, and with the machine.

The kernel space, which is the location where the code of the kernel is stored, and executes under.

Processes running in user space have access only to a limited part of memory, whereas the kernel has access to all of the memory. Processes running in user space also don't have access to the kernel space. User space processes can only access a small part of the kernel via an interface exposed by the kernel: the system calls. If a process performs a system call, a software interrupt is sent to the kernel, which then dispatches the appropriate interrupt handler and continues its work after the handler has finished. Kernel space code runs in "kernel mode", which (on your typical desktop x86 computer) is code that executes under ring 0. Typically, the x86 architecture has 4 protection rings: ring 0 (kernel mode), ring 1 (may be used by virtual machine hypervisors or drivers), ring 2 (may be used by drivers), and ring 3, which is what typical applications run under. Ring 3 is the least privileged ring, and applications running in it have access to only a subset of the processor's instructions. Ring 0 (kernel space) is the most privileged ring and has access to all of the machine's instructions. As an example, a "plain" application (like a browser) cannot use the x86 assembly instruction lgdt to load the global descriptor table, or hlt to halt the processor.

6. What are system calls? Describe any two system calls and the actions taken when the calls are made.

A system call is best explained in terms of the kernel mode and user mode of a CPU. Every modern operating system supports these two modes.

Kernel Mode

When CPU is in kernel mode, the code being executed can access any memory address and any hardware resource.

Hence kernel mode is a very privileged and powerful mode.

If a program crashes in kernel mode, the entire system will be halted.

User Mode

When CPU is in user mode, the programs don't have direct access to memory and hardware resources.

In user mode, if any program crashes, only that particular program is halted.

That means the system will be in a safe state even if a program in user mode crashes.

Hence, most programs in an OS run in user mode.

When a program in user mode requires access to RAM or a hardware resource, it must ask the kernel to provide access to that resource. This is done via something called a system call.

When a program makes a system call, the mode is switched from user mode to kernel mode. This is called a context switch. Then the kernel provides the resource which the program requested. After that, another context switch happens which results in change of mode from kernel mode back to user mode.

Generally, system calls are made by the user level programs in the following situations:

Creating, opening, closing and deleting files in the file system.

Creating and managing new processes.

Creating a connection in the network, sending and receiving packets.

Requesting access to a hardware device, like a mouse or a printer.

Some of the system calls are:

Fork ( ):

The fork() system call is used to create processes. When a process (a program in execution) makes a fork() call, an exact copy of the process is created. Now there are two processes, one being the parent process and the other being the child process.

The process which called the fork() call is the parent process and the process which is created newly is called the child process. The child process will be exactly the same as the parent. Note that the process state of the parent i.e., the address space, variables, open files etc. is copied into the child process. This means that the parent and child processes have identical but physically different address spaces. The change of values in parent process doesn't affect the child and vice versa is true too.

Exec ( ):

The exec() system call is also used to create processes. But there is one big difference between fork() and exec() calls. The exec() call creates a new process while preserving the parent process. But, an exec() call replaces the address space, text segment, data segment etc. of the current process with the new process.

It means, after an exec() call, only the new process exists. The process which made the system call, wouldn't exist.

7. How is the system calls used in application programs?

The most common way to invoke system calls is via the standard-library wrappers for them. So, for example, read(fd, buf, BUFLEN) compiles to the assembly instruction call read, and in amd64 object code that would be e8 00 00 00 00 (where the zeroes are covered by a symbol-table entry for read). One can build a table of all the syscalls used by the standard library, read through the object code looking for calls into the standard library, and use that table to look up the possible syscalls.

However, a program could also use syscall directly. The first argument is a syscall-number. It could read that number from a file, a command-line argument, standard input, a network socket, or do some lengthy computation (attempt to factor a prime number, sequentially search for input that produces a certain SHA2 hash, run a universal Turing machine until it halts, etc.) to generate it.
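
As a small, Linux-specific sketch of the two styles just described, the same system call can be reached through its standard-library wrapper or directly through syscall() with an explicit syscall number:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    pid_t a = getpid();              /* standard-library wrapper */
    long  b = syscall(SYS_getpid);   /* direct system call by number */
    printf("getpid() = %d, syscall(SYS_getpid) = %ld\n", (int)a, b);
    return 0;
}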

A program could also use dlopen/ dlsym to call a standard-library function, and it could get the name to call from a file, a command-line argument, standard input, a network socket, or after some lengthy computation. In fact, the standard-library function it might look up could be syscall itself. These are the system calls used in various application programs.

1. Process control

create process (for example, fork on Unix-like systems, or NtCreateProcess in the Windows NT Native API)

terminate process

load, execute

get/set process attributes

wait for time, wait event, signal event

allocate and free memory

2. File management

create file, delete file

open, close

read, write, reposition

get/set file attributes

3. Device management

request device, release device

read, write, reposition

get/set device attributes

logically attach or detach devices

4. Information maintenance

get/set time or date

get/set system data

get/set process, file, or device attributes

5. Communication

create, delete communication connection

send, receive messages

transfer status information

attach or detach remote devices

6. Protection

get/set file permissions

8. Describe the role of a scheduler.

Schedulers in an operating system are the components that decide which task or process should be run by the system resources and at what time. Scheduling is required to maintain the multitasking capabilities of a computer and to keep its performance at the highest level by scheduling processes according to their preferences and needs. The schedulers in an operating system are the algorithms which help in system optimisation for maximum performance. The roles of the schedulers are described below.

Long term scheduler: The long term scheduler is responsible for transferring a process to the ready queue and making it ready for CPU assignment. Since processes are not created rapidly, the long term scheduler operates less frequently.

Medium term scheduler: Sometimes, when a process is waiting for CPU assignment and a process of higher priority arrives, the process in the ready queue is swapped out to a backing store and the higher priority process is swapped in and assigned the CPU immediately.

Short term scheduler:

The short term scheduler selects a process from the ready queue according to the type of scheduling implemented by the operating system. After selecting a process from the queue, it assigns it to the CPU. Since the CPU rapidly switches from one process to another, the short term scheduler operates more frequently.

9. Explain user interface and process management in UNIX.

Unix has user interfaces called shells that present a user interface more flexible and powerful than the standard operating system text-based interface. Programs such as the Korn Shell and the C Shell are text-based interfaces that add important utilities, but their main purpose is to make it easier for the user to manipulate the functions of the operating system. There are also graphical user interfaces, such as X-Windows and Gnome, that make Unix and Linux more like Windows and Macintosh computers from the user's point of view.

It's important to remember that in all of these examples, the user interface is a program or set of programs that sits as a layer above the operating system itself. The same thing is true, with somewhat different mechanisms, of both Windows and Macintosh operating systems. The core operating-system functions -- the management of the computer system -- lie in the kernel of the operating system. The display manager is separate, though it may be tied tightly to the kernel beneath. The ties between the operating-system kernel and the user interface, utilities and other software define many of the differences in operating systems today, and will further define them in the future.

Process Management in Unix:

UNIX uses two categories of processes: system processes and user processes. System processes run in kernel mode and execute operating system code to perform administrative and housekeeping functions, such as allocation of memory and process swapping. User processes operate in user mode to execute user programs and utilities and in kernel mode to execute instructions that belong to the kernel. A user process enters kernel mode by issuing a system call, when an exception (fault) is generated, or when an interrupt occurs.

Process creation in UNIX is done by means of the kernel system call fork(); the parent typically uses the resulting child process to carry out specific subtasks. When a process issues a fork request, the operating system performs the following functions (a short sketch follows the list):

It allocates a slot in the process table for the new process.

It assigns a unique process ID to the child process.

It makes a copy of the process image of the parent, with the exception of any shared memory.

It increments counters for any files owned by the parent, to reflect that an additional process now also owns those files.

It assigns the child process to the Ready to Run state.

It returns the ID number of the child to the parent process, and a 0 value to the child process.

All of this work is accomplished in kernel mode in the parent process. When the kernel has completed these functions it can do one of the following, as part of the dispatcher routine:

Stay in the parent process. Control returns to user mode at the point of the fork call of the parent.

Transfer control to the child process. The child process begins executing at the same point in the code as the parent, namely at the return from the fork call.

Transfer control to another process. Both parent and child are left in the Ready to Run state.
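The return-value behaviour described in the steps above can be observed directly from C. This is only a small user-level sketch (POSIX assumed), not the kernel-internal mechanism itself:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* both processes continue from here */
    if (pid == 0) {
        /* the child sees fork() return 0 */
        printf("child:  fork() returned 0, my pid is %d\n", (int)getpid());
    } else if (pid > 0) {
        /* the parent sees fork() return the child's process ID */
        printf("parent: fork() returned child id %d\n", (int)pid);
        wait(NULL);                     /* wait for the child to terminate */
    } else {
        perror("fork");
    }
    return 0;
}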

10. What are the Kernel's responsibilities to facilitate I/O transfer?

The idea of a kernel in which I/O devices are handled uniformly with other processes, as parallel co-operating processes, was first proposed and implemented by Brinch Hansen. In Hansen's description, the "common" processes are called internal processes, while the I/O devices are called external processes.

As with physical memory, allowing applications direct access to controller ports and registers can cause the controller to malfunction or the system to crash. In addition, depending on the complexity of the device, some devices can be surprisingly complex to program and may use several different controllers. Because of this, providing a more abstract interface to manage the device is important; this interface is normally provided by a device driver or a hardware abstraction layer. Applications frequently require access to these devices, so the kernel must maintain a list of them by querying the system in some way, for example through the BIOS or through one of the various system buses (such as PCI/PCIe or USB). When an application requests an operation on a device (such as displaying a character), the kernel sends the request to the currently active driver, which in turn carries it out. This is an example of inter-process communication (IPC).

1. I/O Scheduling

I/O scheduling determines the order in which pending I/O requests are serviced; this order is decided by the kernel of the operating system. Good scheduling improves overall system performance, shares device access fairly among processes, and reduces the average waiting and response time for I/O to complete.

2. Buffering:

A buffer is a memory area that stores data being transferred between two devices or between a device and an application. Buffering is done for three main reasons: to cope with a speed mismatch between the producer and consumer of a data stream, to adapt between devices that have different data-transfer sizes, and to support copy semantics for application I/O. The buffer allows each device or process to operate without being held up by the other.

3. Caching:

A cache is a region of fast memory that holds a copy of data. Access to the cached copy is much faster than access to the original. For example, the instructions of the currently running process are stored on disk, cached in physical memory, and copied again into the CPU's secondary and primary caches. The main difference between a buffer and a cache is that a buffer may hold the only existing copy of a data item, while a cache holds, on faster storage, a copy of an item that also resides elsewhere in the system.

4. Error Handling:

The operating system is responsible for handling many hardware and software errors that could otherwise cause application processes, or the whole system, to crash. It can guard against many kinds of hardware and application errors, so that a complete system failure is not the usual result of every minor glitch. Devices and I/O transfers can fail in many ways, either for transient reasons (for example, when a network becomes overloaded) or for permanent ones.

5. I/O Protection:

To ensure I/O protection, the OS must ensure that the following cases cannot occur:

View I/O of other process

Terminate I/O of another process

Give priority to a particular process I/O

If an application process wants to access any I/O device, it must do so through a system call, so that the OS can monitor the operation.

For example, in C the write() and read() system calls are used to write to and read from a file. Instructions execute in one of two modes: user mode and kernel mode.

Week assignment:

1. C-SCAN disk scheduling algorithm

Circular-SCAN Algorithm is an improved version of the Scan Algorithm.

The head starts from one end of the disk and moves towards the other end, servicing all the requests in between.

After reaching the other end, the head reverses its direction.

It then returns to the starting end without servicing any requests in between.

The same process repeats.
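A small sketch of the C-SCAN idea in C is given below. The request queue, the initial head position (50) and the disk size (cylinders 0-199) are made-up example values, and whether the return jump is counted as head movement varies between textbooks; here it is counted:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) { return *(const int *)a - *(const int *)b; }

int main(void)
{
    int requests[] = {82, 170, 43, 140, 24, 16, 190};   /* example request queue */
    int n = sizeof requests / sizeof requests[0];
    int head = 50, max_cylinder = 199;                   /* assumed values */

    qsort(requests, n, sizeof(int), cmp);

    int movement = 0, pos = head;
    /* first pass: service requests at or above the head, moving toward the end */
    for (int i = 0; i < n; i++)
        if (requests[i] >= head) { movement += requests[i] - pos; pos = requests[i]; }

    movement += max_cylinder - pos;      /* continue to the last cylinder */
    movement += max_cylinder;            /* jump back to cylinder 0 without servicing */
    pos = 0;
    /* second pass: service the remaining (lower) requests in the same direction */
    for (int i = 0; i < n; i++)
        if (requests[i] < head) { movement += requests[i] - pos; pos = requests[i]; }

    printf("total head movement (C-SCAN): %d cylinders\n", movement);
    return 0;
}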

2. Write a C program to simulate page replacement algorithms a) FIFO b) LRU c) LFU

FIFO:

#include <stdio.h>

int main()
{
    int i, j, n, a[50], frame[10], no, k, avail, count = 0;

    printf("\n ENTER THE NUMBER OF PAGES:\n");
    scanf("%d", &n);
    printf("\n ENTER THE PAGE NUMBER :\n");
    for (i = 1; i <= n; i++)
        scanf("%d", &a[i]);
    printf("\n ENTER THE NUMBER OF FRAMES :");
    scanf("%d", &no);

    for (i = 0; i < no; i++)          /* all frames start empty */
        frame[i] = -1;
    j = 0;                            /* index of the oldest (first-in) frame */

    printf("\tref string\t page frames\n");
    for (i = 1; i <= n; i++)
    {
        printf("%d\t\t", a[i]);
        avail = 0;
        for (k = 0; k < no; k++)      /* is the page already in a frame? */
            if (frame[k] == a[i])
                avail = 1;
        if (avail == 0)               /* page fault: replace the oldest frame */
        {
            frame[j] = a[i];
            j = (j + 1) % no;
            count++;
            for (k = 0; k < no; k++)
                printf("%d\t", frame[k]);
        }
        printf("\n");
    }
    printf("Page Fault Is %d", count);
    return 0;
}

LRU:

#include <stdio.h>

int main()
{
    int frames[10], temp[10], pages[50];
    int total_pages, m, n, position, k, l, total_frames;
    int a = 0, b = 0, page_fault = 0;

    printf("\nEnter Total Number of Frames:\t");
    scanf("%d", &total_frames);
    for (m = 0; m < total_frames; m++)
        frames[m] = -1;                       /* all frames start empty */

    printf("Enter Total Number of Pages:\t");
    scanf("%d", &total_pages);
    printf("Enter Values for Reference String:\n");
    for (m = 0; m < total_pages; m++)
    {
        printf("Value No.[%d]:\t", m + 1);
        scanf("%d", &pages[m]);
    }

    for (n = 0; n < total_pages; n++)
    {
        a = 0;
        b = 0;
        for (m = 0; m < total_frames; m++)    /* a = 1 if the page is already resident */
        {
            if (frames[m] == pages[n])
            {
                a = 1;
                b = 1;
                break;
            }
        }
        if (a == 0)                           /* page fault: count it and try a free frame first */
        {
            page_fault++;
            for (m = 0; m < total_frames; m++)
            {
                if (frames[m] == -1)
                {
                    frames[m] = pages[n];
                    b = 1;
                    break;
                }
            }
        }
        if (b == 0)                           /* no free frame: evict the least recently used page */
        {
            for (m = 0; m < total_frames; m++)
                temp[m] = 0;
            /* mark every frame whose page appears among the last (total_frames - 1) references */
            for (k = n - 1, l = 1; l <= total_frames - 1; l++, k--)
            {
                for (m = 0; m < total_frames; m++)
                {
                    if (frames[m] == pages[k])
                        temp[m] = 1;
                }
            }
            for (m = 0; m < total_frames; m++)  /* the unmarked frame holds the LRU page */
            {
                if (temp[m] == 0)
                    position = m;
            }
            frames[position] = pages[n];
        }
        printf("\n");
        for (m = 0; m < total_frames; m++)
            printf("%d\t", frames[m]);
    }
    printf("\nTotal Number of Page Faults:\t%d\n", page_fault);
    return 0;
}

LFU:

#include <stdio.h>

int main()
{
    int q[20], p[50], c = 0, c1, f, i, j, k = 0, n, r, t, b[20], c2[20];

    printf("Enter no of pages:");
    scanf("%d", &n);
    printf("Enter the reference string:");
    for (i = 0; i < n; i++)
        scanf("%d", &p[i]);
    printf("Enter no of frames:");
    scanf("%d", &f);

    for (j = 0; j < f; j++)               /* all frames start empty */
        q[j] = -1;

    q[k] = p[k];                          /* the first reference is always a fault */
    printf("\n\t%d\n", q[k]);
    c++;
    k++;

    for (i = 1; i < n; i++)
    {
        c1 = 0;
        for (j = 0; j < f; j++)           /* count frames that do NOT hold this page */
        {
            if (p[i] != q[j])
                c1++;
        }
        if (c1 == f)                      /* page fault: the page is in no frame */
        {
            c++;
            if (k < f)                    /* a free frame is still available */
            {
                q[k] = p[i];
                k++;
                for (j = 0; j < k; j++)
                    printf("\t%d", q[j]);
                printf("\n");
            }
            else                          /* all frames full: pick a victim */
            {
                /* c2[r] = how many references back page q[r] was last used */
                for (r = 0; r < f; r++)
                {
                    c2[r] = 0;
                    for (j = i - 1; j >= 0; j--)
                    {
                        if (q[r] != p[j])
                            c2[r]++;
                        else
                            break;
                    }
                }
                for (r = 0; r < f; r++)
                    b[r] = c2[r];
                for (r = 0; r < f; r++)   /* sort the counts in descending order */
                {
                    for (j = r; j < f; j++)
                    {
                        if (b[r] < b[j])
                        {
                            t = b[r];
                            b[r] = b[j];
                            b[j] = t;
                        }
                    }
                }
                for (r = 0; r < f; r++)   /* replace the page with the largest count */
                {
                    if (c2[r] == b[0])
                        q[r] = p[i];
                    printf("\t%d", q[r]);
                }
                printf("\n");
            }
        }
    }
    printf("\nThe no of page faults is %d", c);
    return 0;
}

_____________________________________________________________________________________

Name: Mohd. Shehwaz

Course: MTECH (AI)

Roll No: 19mcmb19

Assignment - 7

1. What is a thread? Differentiate between user level threads and kernel level threads.

A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.

A thread shares information such as the code segment, data segment and open files with its peer threads. When one thread alters a data item in shared memory, all other threads see the change.

A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism; they represent a software approach to improving operating-system performance by reducing the overhead of creating and switching between full processes.

Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been used successfully in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors.
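As a small illustration of threads sharing one address space, here is a hedged sketch using POSIX threads (compile with -pthread; the variable names are made up for this example):

#include <pthread.h>
#include <stdio.h>

static int shared_value = 0;             /* data segment shared by all threads */

static void *set_value(void *arg)
{
    shared_value = *(int *)arg;          /* the change is visible to every thread */
    return NULL;
}

int main(void)
{
    int v = 42;
    pthread_t t;
    pthread_create(&t, NULL, set_value, &v);
    pthread_join(t, NULL);
    printf("main thread sees shared_value = %d\n", shared_value);
    return 0;
}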

The differences between a process and a thread are summarized below.

1. Process: heavyweight and resource-intensive. Thread: lightweight, taking fewer resources than a process.

2. Process: switching requires interaction with the operating system. Thread: switching between threads of the same process does not need operating-system interaction.

3. Process: in multiprocessing environments, each process executes the same code but has its own memory and file resources. Thread: all threads of a process can share the same set of open files and child processes.

4. Process: if a process blocks, the task it performs makes no progress until it is unblocked. Thread: while one thread is blocked and waiting, a second thread in the same task can run.

5. Process: multiple processes without threads use more resources. Thread: a multithreaded process uses fewer resources.

6. Process: each process operates independently of the others. Thread: one thread can read, write or change another thread's data.

2. What is the difference between a thread and a process? Discuss the merits/demerits of threads over processes.

1. Process: creation and management involve system calls. Thread: user-level thread operations do not require system calls.

2. Process: a full context switch through the kernel is required. Thread: switching between threads of the same process is much cheaper; for user-level threads no kernel context switch is needed.

3. Process: different processes have different copies of the code and data. Thread: threads of one process can share the same copy of code and data.

4. Process: the operating system treats each process separately. Thread: all user-level threads of a process are treated as a single task by the operating system.

5. Process: if one process is blocked, the remaining processes continue their work. Thread: if a user-level thread blocks, all other threads of that process block as well, since the OS sees them as a single task. (Note: this can be avoided with kernel-level threads.)

6. Process: processes are independent. Thread: threads exist as subsets of a process and are dependent on it.

7. Process: runs in a separate memory space. Thread: runs in the shared memory space of the process it belongs to.

8. Process: has its own program counter (PC), register set, and stack space. Thread: shares the code section, data section, and address space with other threads, while keeping its own PC, registers, and stack.

9. Process: communication between processes takes relatively more time. Thread: communication between threads of the same process takes less time.

10. Process: does not share its memory with any other process. Thread: shares memory with the other threads of the same process.

11. Process: has higher creation and management overhead. Thread: has much lower overhead.

3. What is a race condition? Explain using a suitable example.

A race condition is an undesirable situation that occurs when a device or system attempts to perform two or more operations at the same time, but because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.

A simple example of a race condition is a light switch. In some homes there are multiple light switches connected to a common ceiling light. When these types of circuits are used, the switch position becomes irrelevant. If the light is on, moving either switch from its current position turns the light off. Similarly, if the light is off, then moving either switch from its current position turns the light on. With that in mind, imagine what might happen if two people tried to turn on the light using two different switches at exactly the same time. One instruction might cancel the other or the two actions might trip the circuit breaker.

Race conditions are most commonly associated with computer science. In computer memory or storage, a race condition may occur if commands to read and write a large amount of data are received at almost the same instant, and the machine attempts to overwrite some or all of the old data while that old data is still being read. The result may be one or more of the following: a computer crash, an "illegal operation," notification and shutdown of the program, errors reading the old data or errors writing the new data. A race condition can also occur if instructions are processed in the incorrect order.

Suppose for a moment that two processes need to perform a bit flip at a specific memory location. Under normal circumstances the operation should work like this:

In this example, Process 1 performs a bit flip, changing the memory value from 0 to 1. Process 2 then performs a bit flip and changes the memory value from 1 to 0.

Process 1       Process 2       Memory Value
Read value      -               0
Flip value      -               1
-               Read value      1
-               Flip value      0

If a race condition occurred causing these two processes to overlap, the sequence could potentially look more like this:

Process 1       Process 2       Memory Value
Read value      -               0
-               Read value      0
Flip value      -               1
-               Flip value      1

In this example, the bit has an ending value of 1 when its value should be 0. This occurs because Process 2 is unaware that Process 1 is performing a simultaneous bit flip.
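The same lost-update effect can be reproduced in code. Below is a hedged sketch using POSIX threads (compile with -pthread); the counter and the iteration count are arbitrary example values:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared and unprotected */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;                       /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected 200000, got %ld\n", counter);   /* usually less: updates are lost */
    return 0;
}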

4. What do you understand by critical section? What are the characteristic properties of it? Explain.

Critical Section is the part of a program which tries to access shared resources. That resource may be any resource in a computer like a memory location, Data structure, CPU or any IO device.

Because the critical section must not be executed by more than one process at the same time, the operating system faces difficulties in deciding when to allow and when to disallow processes from entering it.

The critical-section problem is to design a set of protocols that ensures a race condition among the processes can never arise.

In order to synchronize the cooperative processes, our main task is to solve the critical section problem. We need to provide a solution in such a way that the following conditions can be satisfied.

Requirements of Synchronization mechanisms

Primary

1. Mutual Exclusion

Our solution must provide mutual exclusion: if one process is executing inside the critical section, no other process may enter it at the same time.

2. Progress

Progress means that a process that does not wish to enter the critical section must not prevent other processes from entering it; the decision of which process enters next cannot be postponed indefinitely.

Secondary

1. Bounded Waiting

We should be able to bound the waiting time of every process trying to enter the critical section. No process should wait endlessly to get into the critical section.

2. Architectural Neutrality

Our mechanism must be architecturally neutral: if the solution works on one architecture, it should also work on others.

5. What is mutual exclusion? Discuss the different approaches to solve the problem of mutual exclusion.

A mutual exclusion object (mutex) is a program object that prevents simultaneous access to a shared resource. The concept is used in concurrent programming together with a critical section, a piece of code in which processes or threads access a shared resource. Only one thread can own the mutex at a time. When a thread needs the resource, it locks the mutex to keep other threads out; upon releasing the resource, the thread unlocks the mutex.
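A minimal sketch of this lock/unlock protocol with a POSIX mutex is shown below (compile with -pthread). It reuses the shared-counter idea from the race-condition example, so the final value is now always correct:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);       /* entry section: acquire the mutex */
        counter++;                       /* critical section */
        pthread_mutex_unlock(&lock);     /* exit section: release the mutex */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("got %ld (always 200000 with the mutex)\n", counter);
    return 0;
}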

Mutual exclusion in single computer system Vs. distributed system:

In single computer system, memory and other resources are shared between different processes. The status of shared resources and the status of users is easily available in the shared memory so with the help of shared variable (For example: Semaphores) mutual exclusion problem can be easily solved.

In distributed systems, we have neither shared memory nor a common physical clock, and therefore we cannot solve the mutual exclusion problem using shared variables. To solve the mutual exclusion problem in a distributed system, an approach based on message passing is used.

A site in a distributed system does not have complete information about the state of the system, due to the lack of shared memory and a common physical clock.

Requirements of Mutual exclusion Algorithm:

No Deadlock:

Two or more sites should not endlessly wait for messages that will never arrive.

No Starvation:

Every site that wants to execute the critical section should get an opportunity to do so in finite time. No site should wait indefinitely while other sites repeatedly execute the critical section.

Fairness:

Each site should get a fair chance to execute the critical section. Requests must be serviced in the order in which they are made, i.e., in the order of their arrival in the system.

Fault Tolerance:

In case of a failure, the algorithm should be able to recognize it by itself and continue functioning without disruption.

Solution to distributed mutual exclusion:

As noted above, shared variables or a local kernel cannot be used to implement mutual exclusion in distributed systems; message passing is used instead. Below are the three message-passing approaches to implementing mutual exclusion in distributed systems:

Token Based Algorithm:

A unique token is shared among all the sites.

If a site possesses the unique token, it is allowed to enter its critical section

This approach uses sequence numbers to order requests for the critical section.

Each request for the critical section carries a sequence number, which is used to distinguish old requests from current ones.

This approach ensures mutual exclusion because the token is unique.

Example:

Suzuki-Kasami’s Broadcast Algorithm

Non-token based approach:

A site communicates with other sites in order to determine which site should execute the critical section next. This requires the exchange of two or more successive rounds of messages among sites.

This approach uses timestamps instead of sequence numbers to order requests for the critical section.

Whenever a site makes a request for the critical section, it gets a timestamp. Timestamps are also used to resolve conflicts between critical-section requests.

All algorithms that follow the non-token-based approach maintain a logical clock, which is updated according to Lamport's scheme.

Example:

Lamport's algorithm, Ricart–Agrawala algorithm

Quorum based approach:

Instead of requesting permission to execute the critical section from all other sites, each site requests permission only from a subset of sites, called a quorum.

Any two quorums contain at least one common site.

This common site is responsible for ensuring mutual exclusion.

Example:

Maekawa’s Algorithm

6. What do you understand by semaphores? Does it satisfy the bounded wait condition? Explain.

Definition - What does Semaphore mean?

A semaphore is a synchronization object that controls access by multiple processes to a common resource in a parallel programming environment. Semaphores are widely used to control access to files and shared memory. The three basic operations on a semaphore are: set it, check it, and wait until it clears before setting it again.

Semaphores are used to solve classic synchronization problems.

The concept of semaphore was put forth by the Dutch computer scientist Edsger Dijkstra.

Semaphores are non-negative integer values that support the operations semaphore->P () and semaphore->V (). P is an atomic operation that waits for a semaphore to be positive and then decrements it by one, while V is an atomic operation that increments a semaphore by one, which implies it wakes up a waiting P. Test and set associated with semaphore are routines implemented in hardware to coordinate lower-level critical sections.

It may break bounded waiting condition theoretically as you'll see below. Practically, it depends heavily on which scheduling algorithm is used.

The classic implementation of the wait() and signal() primitives is as follows:

//primitive
wait(semaphore* S)
{
    S->value--;
    if (S->value < 0)
    {
        add this process to S->list;
        block();
    }
}

//primitive
signal(semaphore* S)
{
    S->value++;
    if (S->value <= 0)
    {
        remove a process P from S->list;
        wakeup(P);
    }
}

When a process calls wait() and the semaphore value becomes negative (the "if" test succeeds), it puts itself on the waiting list. If more than one process is blocked on the same semaphore, they are all put into this list (or linked together in some way). When another process leaves the critical section and calls signal(), one process from the waiting list is chosen to wake up, ready to compete for the CPU again. However, it is the scheduler that decides which process to pick from the waiting list. If the selection is implemented in a LIFO (last in, first out) manner, for instance, it is possible that some processes are starved.
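For comparison with the primitives above, here is a hedged sketch using a POSIX unnamed semaphore (compile with -pthread). The sleep() call is only there to make it likely that the waiter blocks before main posts; it is not part of the synchronization itself:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t ready;

static void *waiter(void *arg)
{
    (void)arg;
    printf("waiter: calling sem_wait(), will block until signalled\n");
    sem_wait(&ready);                    /* P: the value is 0, so this thread blocks */
    printf("waiter: woken up\n");
    return NULL;
}

int main(void)
{
    sem_init(&ready, 0, 0);              /* initial value 0: the first wait blocks */
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);

    sleep(1);                            /* illustrative only: let the waiter block first */
    printf("main: calling sem_post() to wake the waiter\n");
    sem_post(&ready);                    /* V: wakes the blocked thread */

    pthread_join(t, NULL);
    sem_destroy(&ready);
    return 0;
}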

Example

T1: thread 1 calls wait(), enters critical section

T2: thread 2 calls wait(),