CSCI 4210 — Operating Systems


Simulation Project Part II (document version 1.0)

Processes and CPU Scheduling

Overview

•  This assignment is due in Submitty by 11:59PM EDT on Thursday, August 15, 2024

•  This project is to be completed either individually or in a team of at most three students; as with Project Part I, form your team within the Submitty gradeable, but do not submit any code until we announce that auto-grading is available

•  NEW: If you worked on a team for Part I, feel free to change your team for Part II; all code is reusable from Part I even if you change teams

•  Beyond your team (or yourself if working alone), do not share your code; however, feel free to discuss the project content and your findings with one another on our Discussion Forum

•  To appease Submitty, you must use one of the following programming languages:  C, C++, or Python (be sure you choose only one language for your entire implementation)

• You will have five penalty-free submissions on Submitty, after which points will slowly be deducted, e.g., -1 on submission #6, etc.

• You can use at most three late days on this assignment; in such cases, each team member must use a late day

• You will have at least three days before the due date to submit your code to Submitty; if the auto-grading is not available three days before the due date, the due date will be 11:59PM EDT three days after auto-grading becomes available

•  NEW: Given that your simulation results might not entirely match the expected output on Submitty, we will cap your auto-graded grade at 50  points even though there will be more than 50 auto-graded points per language available in Submitty

• All submitted code must successfully compile and run on Submitty, which currently uses Ubuntu v22.04.4 LTS

• If you use C or C++, your program must successfully compile via gcc or g++ with no warning messages when the -Wall (i.e., warn all) compiler option is used; we will also use -Werror, which will treat all warnings as critical errors; the -lm flag will also be included; the gcc/g++ compiler is currently version 11.4.0 (Ubuntu 11.4.0-1ubuntu1~22.04)

•  For source file naming conventions, be sure to use *.c for C and *.cpp for C++; in either case, you can also include *.h files

• For Python, you must use python3, which is currently Python 3.10.12; be sure to name your main Python file project.py; also be sure no warning messages or extraneous output occur during interpretation

•  Please “flatten” all directory structures to a single directory of source files

•  Note that you can use square brackets in your code

Project specifications

For Part II of our simulation project, given the set of processes pseudo-randomly generated in Part I, you will implement a series of simulations of a running operating system. The overall focus will again be on processes, assumed to be resident in memory, waiting to use the CPU. Memory and the I/O subsystem will not be covered in depth in either part of this project.

Conceptual design  (from Part I)

A process is defined as a program in execution. For this assignment, processes are in one of the following three states, corresponding to the picture shown further below.

•  RUNNING: actively using the CPU and executing instructions

•  READY: ready to use the CPU, i.e., ready to execute a CPU burst

• WAITING: blocked on I/O or some other event

    RUNNING                  READY                  WAITING (on I/O)
     STATE                   STATE                       STATE

  +---------+       +-------------------+       +-----------------+
  |         |       |                   |       |                 |
  |   CPU   |  <==  |  <<< queue <<<<<  |       |  I/O Subsystem  |
  |         |       |                   |       |                 |
  +---------+       +-------------------+       +-----------------+

Processes in the READY  state reside in a queue called the ready queue.  This queue is ordered based on a configurable CPU scheduling algorithm.  You will implement specific CPU scheduling algorithms in Part II of this project.

All implemented algorithms (in Part II) will be simulated for the same  set  of processes, which will therefore support a comparative analysis of results. In Part I, the focus is on generating useful sets of processes via pseudo-random number generators.

Back to the conceptual model, when a process is in the READY state and reaches the front of the queue, once the CPU is free to accept the next process, the given process enters the RUNNING state and starts executing its CPU burst.

After each CPU burst is completed, if the process does not terminate, the process enters the WAITING  state, waiting for an I/O operation to complete (e.g., waiting for data to be read in from a file).  When the I/O operation completes, depending on the scheduling algorithm, the process either (1) returns to the READY  state and is added to the ready queue or (2) preempts the currently running process and switches into the RUNNING state.

Note that preemptions occur only for certain algorithms.

Algorithms — (Part II)

The four algorithms that you must simulate are first-come-first-served (FCFS); shortest job first (SJF); shortest remaining time (SRT); and round robin (RR). When you run your program, all four algorithms are to be simulated in succession with the same initial set of processes.

Each algorithm is summarized below.

First-come-first-served  (FCFS)

The FCFS algorithm is a non-preemptive algorithm in which processes simply line up in the ready queue, waiting to use the CPU. This is your baseline algorithm.

Shortest job first  (SJF)

In SJF, processes are stored in the ready queue in order of priority based on their anticipated CPU burst times.  More specifically, the process with the shortest predicted CPU burst time will be selected as the next process executed by the CPU. SJF is non-preemptive.

Shortest remaining time  (SRT)

The SRT algorithm is a preemptive version of the SJF algorithm. In SRT, when a process arrives, if it has a predicted CPU burst time that is less than the remaining predicted time of the currently running process, a preemption occurs.  When such a preemption occurs, the currently running process is added to the ready queue based on priority, i.e., based on its remaining predicted CPU burst time.
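Both SJF and SRT order the ready queue by predicted CPU burst time. As a minimal sketch (the (pid, tau) tuple representation and the function name are illustrative, not mandated by the spec), a sort key that also applies the project's process-ID tie-breaking rule might look like:

```python
def sjf_key(process):
    """Ready-queue priority for SJF/SRT: smallest predicted (remaining)
    CPU burst time first; ties are broken by process ID."""
    pid, tau = process          # process modeled as a (pid, tau) pair
    return (tau, pid)

ready = [("B2", 90), ("A1", 90), ("C0", 50)]
ready.sort(key=sjf_key)        # -> [("C0", 50), ("A1", 90), ("B2", 90)]
```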

Round robin  (RR)

The RR algorithm is essentially the FCFS algorithm with a time slice, t_slice. Each process is given t_slice milliseconds of time to complete its CPU burst. If the time slice expires, the process is preempted and added to the end of the ready queue.

If a process completes its CPU burst before a time slice expiration, the next process on the ready queue is context-switched in to use the CPU.

For your simulation, if a preemption occurs and there are no other processes on the ready queue, do not perform a context switch. For example, given process G is using the CPU and the ready queue is empty, if process G is preempted by a time slice expiration, do not context-switch process G back to the empty queue; instead, keep process G running with the CPU and do not count this as a context switch. In other words, when the time slice expires, check the queue to determine if a context switch should occur.
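The slice-expiration check described above can be sketched as follows. Function and variable names are illustrative, and a full simulator would also have to account for the context switch time when a switch does occur; this only shows the queue-empty special case:

```python
from collections import deque

def on_slice_expiry(ready_queue, running):
    """Handle a RR time-slice expiration: preempt only if another
    process is waiting in the ready queue; otherwise keep the current
    process on the CPU and do not count a context switch."""
    if not ready_queue:
        return running, False           # empty queue: no preemption
    ready_queue.append(running)         # preempted process goes to the back
    return ready_queue.popleft(), True  # switch in the next process
```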

 

Simulation configuration  (extended from Part I)

The key to designing a useful simulation is to provide a number of configurable parameters. This allows you to simulate and tune for a variety of scenarios, e.g., a large number of CPU-bound processes, differing average process interarrival times, multiple CPUs, etc.

Define the simulation parameters shown below as tunable constants within your code, all of which will be given as command-line arguments. For Part II, two additional parameters, *(argv+7) and *(argv+8), have been added.

•  *(argv+1): Define n as the number of processes to simulate. Process IDs are assigned a two-character code consisting of an uppercase letter from A to Z followed by a number from 0 to 9. Processes are assigned in order A0, A1, A2, . . ., A9, B0, B1, . . ., Z9.

•  *(argv+2): Define n_cpu as the number of processes that are CPU-bound. For this project, we will classify processes as I/O-bound or CPU-bound. The n_cpu CPU-bound processes, when generated, will have CPU burst times that are longer by a factor of 4 and I/O burst times that are shorter by a factor of 8.

•  *(argv+3): We will use a pseudo-random number generator to determine the interarrival times of CPU bursts. This command-line argument, i.e., seed, serves as the seed for the pseudo-random number sequence. To ensure predictability and repeatability, use srand48() with this given seed before simulating each scheduling algorithm and drand48() to obtain the next value in the range [0.0, 1.0). Since Python does not have these functions, implement an equivalent 48-bit linear congruential generator, as described in the man page for these functions in C.

•  *(argv+4): To determine interarrival times, we will use an exponential distribution, as illustrated in the exp-random.c example. This command-line argument is parameter λ; remember that 1/λ will be the average random value generated, e.g., if λ = 0.01, then the average should be approximately 100.

In the exp-random.c example, use the formula shown in the code, i.e., x = −ln(r)/λ, where r is a uniform random value obtained via drand48().
•  *(argv+5):  For the exponential distribution, this command-line argument represents the upper bound for valid pseudo-random numbers.  This threshold is used to avoid values far down the long tail of the exponential distribution.  As an example, if this is set to 3000, all generated values above 3000 should be skipped. For cases in which this value is used in the ceiling function (see the next page), be sure the ceiling is still valid according to this upper bound.

•  *(argv+6): Define t_cs as the time, in milliseconds, that it takes to perform a context switch. Specifically, the first half of the context switch time (i.e., t_cs/2) is the time required to remove the given process from the CPU; the second half of the context switch time is the time required to bring the next process in to use the CPU. Therefore, require t_cs to be a positive even integer.

 

•  *(argv+7): For the SJF and SRT algorithms, since we do not know the actual CPU burst times beforehand, we will rely on estimates determined via exponential averaging. As such, this command-line argument is the constant α, which must be a numeric floating-point value in the range [0, 1].

Note that the initial guess for each process is τ0 = 1/λ.

Also, when calculating τ values, use the “ceiling” function for all calculations.

•  *(argv+8): For the RR algorithm, define the time slice value, t_slice, measured in milliseconds. Require t_slice to be a positive integer.

Pseudo-random numbers and predictability  (from Part I)

A key aspect of this assignment is to compare the results of each of the simulated algorithms with one another given the same initial conditions, i.e., the same initial set of processes.

To ensure each CPU scheduling algorithm runs with the same set of processes, carefully follow the algorithm below to create the set of processes.

For each of the n processes, in order A0 through Z9, perform the steps below, with CPU-bound processes generated first. Note that all generated values are integers.

Define your exponential distribution pseudo-random number generation function as next_exp() (or another similar name).

1. Identify the initial process arrival time as the “floor” of the next random number in the sequence given by next_exp(); note that you could therefore have a zero arrival time

2. Identify the number of CPU bursts for the given process as the “ceiling” of the next random number generated from the uniform distribution obtained via drand48() multiplied by **; this should obtain a random integer in the inclusive range [1, **]

3. For each  of these CPU bursts, identify the CPU burst time and the I/O burst time as the “ceiling” of the next two random numbers in the sequence given by next_exp(); multiply the I/O burst time by 8 such that I/O burst time is close to an order of magnitude longer than CPU burst time; as noted above, for CPU-bound processes, multiply the CPU burst time by 4 and divide the I/O burst time by 8 (i.e., do not bother multiplying the original I/O burst time by 8 in this case); for the last CPU burst, do not generate an I/O burst time (since each process ends with a final CPU burst)
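The three steps above can be sketched as a single loop. Here next_exp and drand48 are passed in as functions, max_bursts stands in for the burst-count upper bound that appears garbled ("**") in this copy of the spec, and the (pid, arrival, bursts) tuple layout is just one possible representation:

```python
import math

def generate_processes(n, n_cpu, next_exp, drand48, max_bursts):
    """Generate the shared process set used by every algorithm.
    max_bursts is a placeholder for the spec's burst-count upper bound."""
    ids = [f"{chr(c)}{d}" for c in range(ord('A'), ord('Z') + 1)
           for d in range(10)]
    processes = []
    for i in range(n):
        cpu_bound = i < n_cpu                           # CPU-bound generated first
        arrival = math.floor(next_exp())                # step 1: arrival time
        num_bursts = math.ceil(drand48() * max_bursts)  # step 2: burst count
        bursts = []
        for b in range(num_bursts):                     # step 3: per-burst times
            cpu = math.ceil(next_exp())
            io = None
            if b < num_bursts - 1:                      # last burst has no I/O
                io = math.ceil(next_exp())
                if not cpu_bound:
                    io *= 8        # I/O-bound: I/O bursts ~8x longer
                # CPU-bound: leave io as-is (do not multiply by 8)
            if cpu_bound:
                cpu *= 4           # CPU-bound: CPU bursts 4x longer
            bursts.append((cpu, io))
        processes.append((ids[i], arrival, bursts))
    return processes
```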

Simulation specifics  (Part II)

Your simulator keeps track of elapsed time t (measured in milliseconds), which is initially zero for each scheduling algorithm.  As your simulation proceeds, t  advances to each “interesting” event that occurs, displaying a specific line of output that describes each event.

The “interesting” events are:

•  Start of simulation for a specific algorithm

•  Process arrival (i.e., initially and at each I/O completion)

•  Process starts using the CPU

•  Process finishes using the CPU (i.e., completes a CPU burst)

•  Process has its τ value recalculated (i.e., after a CPU burst completion)

•  Process preemption (SRT and RR only)

•  Process starts an I/O burst

•  Process finishes an I/O burst

•  Process terminates by finishing its last CPU burst

• End of simulation for a specific algorithm

Note that the “process arrival” event occurs each time a process arrives, which includes both the initial arrival time and when a process completes an I/O burst. In other words, processes “arrive” within the subsystem that consists only of the CPU and the ready queue.

The “process preemption” event occurs each time a process is preempted.  When a preemption occurs, a context switch occurs, except when the ready queue is empty for the RR algorithm.

After you simulate each scheduling algorithm, you must reset your simulation back to the initial set of processes and set your elapsed time back to zero.

Note that there may be times during your simulation in which the simulated CPU is idle because no processes have arrived yet or all processes are busy performing I/O. Also, your simulation ends when all processes terminate.

If different types of events occur at the same time, simulate these events in the following order: (a) CPU burst completion; (b) process starts using the CPU; (c) I/O burst completions; and (d) new process arrivals.

Further, any “ties” that occur within  one of these categories are to be broken using process ID order.  As an example, if processes G1  and S9 happen to both complete I/O bursts at the same time, process G1 wins this “tie” (because G1 is lexicographically before S9) and is therefore added to the ready queue before process S9.
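One common way to honor both the category order and the process-ID tie-breaker is a composite sort key; the event-category names below are illustrative, not part of the spec:

```python
# Tie-breaking ranks for simultaneous events, per the order (a)-(d) above
EVENT_ORDER = {"burst_done": 0, "cpu_start": 1, "io_done": 2, "arrival": 3}

def event_key(event):
    """Sort key for simultaneous events: time first, then the category
    order given in the spec, then process ID (lexicographic)."""
    time, kind, pid = event
    return (time, EVENT_ORDER[kind], pid)
```

For example, with G1 and S9 finishing I/O at t = 50, sorting by this key places G1 before S9, and a CPU burst completion at the same time before both.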

Be sure you do not implement any additional logic for the I/O subsystem.  In other words, there are no specific I/O queues to implement.

Measurements  (from Part I)

There are a number of measurements you will want to track in your simulation. For each algorithm, you will count the number of preemptions and the number of context switches that occur. Further, you will measure CPU utilization by tracking CPU usage and CPU idle time.

Specifically, for each  CPU  burst, you will track CPU burst time (given), turnaround time, and wait time.

CPU burst time

CPU burst times are randomly generated for each process that you simulate via the above algorithm. CPU burst time is defined as the amount of time a process is actually using the CPU. Therefore, this measure does not include context switch times.

Turnaround time

Turnaround times are to be measured for each process that you simulate.  Turnaround time is defined as the end-to-end time a process spends in executing a single  CPU  burst.

More specifically, this is measured from process arrival time through to when the CPU burst is completed and the process is switched out of the CPU. Therefore, this measure includes the second half of the initial context switch in and the first half of the final context switch out, as well as any other context switches that occur while the CPU burst is being completed (i.e., due to preemptions).

Wait time

Wait times are to be measured for each CPU burst. Wait time is defined as the amount of time a process spends waiting to use the CPU, which equates to the amount of time the given process is actually in the ready queue. Therefore, this measure does not include context switch times that the given process experiences, i.e., only measure the time the given process is actually in the ready queue.

CPU utilization

Calculate CPU utilization by tracking how much time the CPU is actively running CPU bursts versus total elapsed simulation time.
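As a minimal sketch (how the reported value should be rounded or formatted is not specified in this excerpt):

```python
def cpu_utilization(busy_time, elapsed_time):
    """CPU utilization as a percentage of elapsed simulation time spent
    actually running CPU bursts; context-switch and idle time count
    only toward the elapsed total."""
    return 100.0 * busy_time / elapsed_time
```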

 
