COMP9414 24T2
Artificial Intelligence
Assignment 2 - Reinforcement Learning
Due: Week 9, Wednesday, 24 July 2024, 11:55 PM.
1 Problem context
Taxi Navigation with Reinforcement Learning: In this assignment,
you are asked to implement Q-learning and SARSA methods for a taxi nav-
igation problem. To run your experiments and test your code, you should
make use of the Gym library1, an open-source Python library for developing
and comparing reinforcement learning algorithms. You can install Gym on
your computer simply by using the following command in your command
prompt:
pip install gym
In the taxi navigation problem, there are four designated locations in the
grid world indicated by R(ed), G(reen), Y(ellow), and B(lue). When the
episode starts, one taxi starts off at a random square and the passenger is
at a random location (one of the four specified locations). The taxi drives
to the passenger’s location, picks up the passenger, drives to the passenger’s
destination (another one of the four specified locations), and then drops off
the passenger. Once the passenger is dropped off, the episode ends. To show
the taxi grid world environment, you can use the following code:
1 https://www.gymlibrary.dev/environments/toy_text/taxi/
import gym

env = gym.make("Taxi-v3", render_mode="ansi").env
state = env.reset()
rendered_env = env.render()
print(rendered_env)
In order to render the environment, there are three modes: “human”,
“rgb_array”, and “ansi”. The “human” mode visualizes the environment
in a way suitable for human viewing, and the output is a graphical
window that displays the current state of the environment (see Fig. 1).
The “rgb_array” mode provides the environment’s state as an RGB image, and
the output is a numpy array representing the RGB image of the environment.
The “ansi” mode provides a text-based representation of the environment’s
state, and the output is a string that represents the current state of the
environment using ASCII characters (see Fig. 2).
Figure 1: “human” mode presentation for the taxi navigation problem in
Gym library.
You are free to choose the presentation mode between “human” and
“ansi”, but for simplicity, we recommend “ansi” mode. Based on the given
description, there are six discrete deterministic actions that are presented in
Table 1.
For this assignment, you need to implement the Q-learning and SARSA
algorithms for the taxi navigation environment. The main objective for this
assignment is for the agent (taxi) to learn how to navigate the grid-world
and drive the passenger with the minimum possible steps. To accomplish
the learning task, you should empirically determine hyperparameters, e.g.,
the learning rate α, exploration parameters (such as ε or T), and discount
factor γ for your algorithm. Your agent should be penalized -1 per step it
takes, receive a +20 reward for delivering the passenger, and incur a -10
penalty for executing “pickup” and “drop-off” actions illegally. You should
try different exploration parameters to find the best balance between
exploration and exploitation.

Figure 2: “ansi” mode presentation for the taxi navigation problem in Gym
library. Gold represents the taxi location, blue is the pickup location, and
purple is the drop-off location.

Table 1: Six possible actions in the taxi navigation environment.
Action               Action number
Move South           0
Move North           1
Move East            2
Move West            3
Pickup Passenger     4
Drop off Passenger   5
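As a reference for the update step, the two tabular update rules and ε-greedy action selection can be sketched as below. This is a minimal sketch: the function names and the dict-of-lists Q-table layout are illustrative assumptions, not a required interface.

```python
import random

def epsilon_greedy(q_table, state, n_actions, epsilon):
    """With probability epsilon explore a random action, otherwise exploit."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q_table[state][a])

def q_learning_update(q_table, s, a, r, s_next, alpha, gamma):
    """Off-policy update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + gamma * max(q_table[s_next])
    q_table[s][a] += alpha * (target - q_table[s][a])

def sarsa_update(q_table, s, a, r, s_next, a_next, alpha, gamma):
    """On-policy update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))."""
    target = r + gamma * q_table[s_next][a_next]
    q_table[s][a] += alpha * (target - q_table[s][a])
```

Note that SARSA differs from Q-learning only in bootstrapping from the action actually taken next, rather than the greedy one.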
As an outcome, you should plot the accumulated reward per episode and
the number of steps taken by the agent in each episode for at least 1000
learning episodes for both the Q-learning and SARSA algorithms. Examples
of these two plots are shown in Figures 3–6. Please note that the provided
plots are just examples and, therefore, your plots will not be exactly like the
provided ones, as the learning parameters will differ for your algorithm.
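One way to produce the two required per-algorithm plots is sketched below, assuming matplotlib is available; the function name and figure layout are illustrative, not prescribed.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so the script also runs headless
import matplotlib.pyplot as plt

def plot_training_curves(episode_rewards, episode_steps, algo_name, out_file):
    """Plot accumulated reward and steps per episode side by side and save to disk."""
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(episode_rewards)
    ax1.set(xlabel="Episode", ylabel="Accumulated reward", title=f"{algo_name} reward")
    ax2.plot(episode_steps)
    ax2.set(xlabel="Episode", ylabel="Steps", title=f"{algo_name} steps")
    fig.tight_layout()
    fig.savefig(out_file)
    plt.close(fig)
```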
After training your algorithm, you should save your Q-values. Based on
your saved Q-table, your algorithms will be tested on at least 100 random
grid-world scenarios with the same characteristics as the taxi environment for
both the Q-learning and SARSA algorithms using the greedy action selection
Figure 3: Q-learning reward.
Figure 4: Q-learning steps.
Figure 5: SARSA reward.
Figure 6: SARSA steps.
method. Therefore, your Q-table will not be updated during testing for the
new steps.
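The test phase can be sketched as follows. This is a sketch written against the generic Gym reset/step interface (it tolerates both the 4-tuple and 5-tuple `step()` return values); the function name is an assumption, and the key point is that the Q-table is read but never updated.

```python
def evaluate_greedy(env, q_table, n_episodes=100, max_steps=100):
    """Run test episodes with purely greedy action selection; no Q-table updates."""
    rewards, steps = [], []
    for _ in range(n_episodes):
        state = env.reset()
        if isinstance(state, tuple):      # newer Gym: reset() -> (obs, info)
            state = state[0]
        ep_reward, ep_steps = 0.0, 0
        for _ in range(max_steps):
            action = max(range(len(q_table[state])), key=lambda a: q_table[state][a])
            out = env.step(action)        # 4-tuple (old API) or 5-tuple (new API)
            state, reward, done = out[0], out[1], out[2]
            ep_reward += reward
            ep_steps += 1
            if done:
                break
        rewards.append(ep_reward)
        steps.append(ep_steps)
    return sum(rewards) / n_episodes, sum(steps) / n_episodes
```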
Your code should be able to visualize the trained agent for both the Q-
learning and SARSA algorithms. This means you should render the “Taxi-
v3” environment (you can use the “ansi” mode) and run your trained agent
from a random position. You should present the steps your agent is taking
and how the reward changes from one state to another. An example of the
visualized agent is shown in Fig. 7, where only the first six steps of the taxi
are displayed.
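A rollout of this kind can be sketched as below, assuming the “ansi” render mode (where `render()` returns a printable string); the function name and returned trace format are illustrative assumptions.

```python
def visualize_agent(env, q_table, max_steps=6):
    """Greedily roll out a trained agent, printing each rendered frame and reward."""
    state = env.reset()
    if isinstance(state, tuple):   # newer Gym: reset() -> (obs, info)
        state = state[0]
    trace = []
    for step in range(1, max_steps + 1):
        action = max(range(len(q_table[state])), key=lambda a: q_table[state][a])
        out = env.step(action)     # 4-tuple (old API) or 5-tuple (new API)
        state, reward, done = out[0], out[1], out[2]
        print(f"Step {step}: action={action}, reward={reward}")
        print(env.render())        # "ansi" mode returns a printable string
        trace.append((step, action, reward))
        if done:
            break
    return trace
```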
2 Testing and discussing your code
As part of the assignment evaluation, your code will be tested by tutors
along with you in a discussion carried out in the tutorial session in week 10.
The assignment has a total of 25 marks. The discussion is mandatory and,
therefore, we will not mark any assignment not discussed with tutors.
Before your discussion session, you should prepare the necessary code for
this purpose by loading your Q-table and the “Taxi-v3” environment. You
should be able to calculate the average number of steps per episode and the
average accumulated reward (for a maximum of 100 steps for each episode)
for the test episodes (using the greedy action selection method).

Figure 7: The first six steps of a trained agent (taxi) based on the Q-learning
algorithm.
You are expected to propose and build your algorithms for the taxi nav-
igation task. You will receive marks for each of these subsections as shown
in Table 2. Beyond what has been mentioned in the previous section, you are
welcome to include any other outcomes that highlight particular aspects
when testing and discussing your code with your tutor.
For both Q-learning and SARSA algorithms, your tutor will consider the
average accumulated reward and the average number of steps taken for the
test episodes in the environment, for a maximum of 100 steps per episode.
For your Q-learning algorithm, the agent should perform at most 14 steps
per episode on average and achieve an average accumulated reward of at
least 7. Worse numbers will result in 0 marks for that specific section.
For your SARSA algorithm, the agent should perform at most 15 steps per
episode on average and achieve an average accumulated reward of at least 5.
Worse numbers will result in 0 marks for that specific section.
Finally, you will receive 1 mark for code readability for each task, and
your tutor will also give you a maximum of 5 marks for each task depending
on the level of code understanding as follows: 5. Outstanding, 4. Great,
3. Fair, 2. Low, 1. Deficient, 0. No answer.
Table 2: Marks for each task.

Results obtained from agent learning
  Accumulated rewards and steps per episode plots for Q-learning algorithm: 2 marks
  Accumulated rewards and steps per episode plots for SARSA algorithm: 2 marks
Results obtained from testing the trained agent
  Average accumulated rewards and average steps per episode for Q-learning algorithm: 2.5 marks
  Average accumulated rewards and average steps per episode for SARSA algorithm: 2.5 marks
  Visualizing the trained agent for Q-learning algorithm: 2 marks
  Visualizing the trained agent for SARSA algorithm: 2 marks
Code understanding and discussion
  Code readability for Q-learning algorithm: 1 mark
  Code readability for SARSA algorithm: 1 mark
  Code understanding and discussion for Q-learning algorithm: 5 marks
  Code understanding and discussion for SARSA algorithm: 5 marks
Total marks: 25 marks
3 Submitting your assignment
The assignment must be done individually. You must submit your assignment
solution via Moodle. This will consist of a single .zip file containing three
files: the .ipynb Jupyter code and your saved Q-tables for Q-learning and
SARSA (you can choose the format for the Q-tables). Remember that your
Q-table files will be loaded during your discussion session to run the test
episodes; therefore, you should also include a script in your Python code
to perform these tests. Additionally, your code should
include short text descriptions to help markers better understand your code.
Please be mindful that providing clean and easy-to-read code is a part of
your assignment.
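Since the format of the saved Q-tables is left open, one simple option (a sketch using Python's standard `pickle` module; the function names and file naming are assumptions) is:

```python
import pickle

def save_q_table(q_table, path):
    """Serialize the learned Q-table so it can be reloaded at the discussion session."""
    with open(path, "wb") as f:
        pickle.dump(q_table, f)

def load_q_table(path):
    """Load a previously saved Q-table for the test episodes."""
    with open(path, "rb") as f:
        return pickle.load(f)
```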
Please indicate your full name and your zID at the top of the file as a
comment. You can submit as many times as you like before the deadline –
later submissions overwrite earlier ones. After submitting your file, it is
good practice to take a screenshot of it for future reference.
Late submission penalty: UNSW has a standard late submission
penalty of 5% of your mark per day, capped at five days from the assessment
deadline; after that, students cannot submit the assignment.
4 Deadline and questions
Deadline: Week 9, Wednesday 24 of July 2024, 11:55pm. Please use the
forum on Moodle to ask questions related to the project. We will prioritise
questions asked in the forum. However, do not share your code in the forum,
to avoid making it public and enabling plagiarism; in that case, use the
course email cs9414@cse.unsw.edu.au as an alternative.
Although we try to answer questions as quickly as possible, we might take
up to 1–2 business days to reply; therefore, last-minute questions might not
be answered in time.
For any questions regarding the discussion sessions, please contact your
tutor directly. You can find your tutor's email address in Table 3.
5 Plagiarism policy
Your program must be entirely your own work. Plagiarism detection software
might be used to compare submissions pairwise (including submissions for
any similar projects from previous years) and serious penalties will be applied,
particularly in the case of repeat offences.
Do not copy from others. Do not allow anyone to see your code.
Please refer to the UNSW Policy on Academic Honesty and Plagiarism if you
require further clarification on this matter.

請加QQ:99515681  郵箱:99515681@qq.com   WX:codinghelp





 

掃一掃在手機打開當前頁
  • 上一篇:COMP9021代做、代寫python設計程序
  • 下一篇:COMP6008代做、代寫C/C++,Java程序語言
  • 無相關信息
    合肥生活資訊

    合肥圖文信息
    流體仿真外包多少錢_專業CFD分析代做_友商科技CAE仿真
    流體仿真外包多少錢_專業CFD分析代做_友商科
    CAE仿真分析代做公司 CFD流體仿真服務 管路流場仿真外包
    CAE仿真分析代做公司 CFD流體仿真服務 管路
    流體CFD仿真分析_代做咨詢服務_Fluent 仿真技術服務
    流體CFD仿真分析_代做咨詢服務_Fluent 仿真
    結構仿真分析服務_CAE代做咨詢外包_剛強度疲勞振動
    結構仿真分析服務_CAE代做咨詢外包_剛強度疲
    流體cfd仿真分析服務 7類仿真分析代做服務40個行業
    流體cfd仿真分析服務 7類仿真分析代做服務4
    超全面的拼多多電商運營技巧,多多開團助手,多多出評軟件徽y1698861
    超全面的拼多多電商運營技巧,多多開團助手
    CAE有限元仿真分析團隊,2026仿真代做咨詢服務平臺
    CAE有限元仿真分析團隊,2026仿真代做咨詢服
    釘釘簽到打卡位置修改神器,2026怎么修改定位在范圍內
    釘釘簽到打卡位置修改神器,2026怎么修改定
  • 短信驗證碼 寵物飼養 十大衛浴品牌排行 suno 豆包網頁版入口 wps 目錄網 排行網

    關于我們 | 打賞支持 | 廣告服務 | 聯系我們 | 網站地圖 | 免責聲明 | 幫助中心 | 友情鏈接 |

    Copyright © 2025 hfw.cc Inc. All Rights Reserved. 合肥網 版權所有
    ICP備06013414號-3 公安備 42010502001045

    国产人妻人伦精品_欧美一区二区三区图_亚洲欧洲久久_日韩美女av在线免费观看
    亚州av一区二区| 91精品成人久久| 亚洲欧洲一区二区在线观看| 最新av在线免费观看| 欧美精品久久久久久久久久| 精品国产乱码久久久久久88av | 欧美激情亚洲综合一区| 精品国产乱码久久久久久88av| 精品不卡在线| 亚洲色成人一区二区三区小说| 亚洲精品国产精品久久| 日韩不卡一二区| 精品日本一区二区三区| 国模精品一区二区三区 | 青青在线免费观看| 日本精品一区二区三区在线 | 国产无限制自拍| av动漫在线免费观看| 国产黄色特级片| 久久精品国产清自在天天线 | 亚洲欧美国产一区二区| 亚洲欧美影院| 欧美一区二区三区……| 欧美在线精品免播放器视频| 国产自产在线视频| 91国产丝袜在线放| 国产精品女人网站| 亚洲精品中文字幕乱码三区不卡| 日本久久久a级免费| 美女亚洲精品| 久久免费精品视频| 国产精品久久久久久av福利| 亚洲欧美日韩精品在线| 欧美亚洲成人网| 97国产精品视频| 久久精品人人做人人爽| 中国丰满熟妇xxxx性| 热久久这里只有| aaa免费在线观看| 国产精品免费在线播放| 午夜精品短视频| 国产在线一区二区三区播放| 久久免费99精品久久久久久| 久久亚洲电影天堂| 日本一区二区在线| 不卡视频一区二区三区| 国产精品日本一区二区| 视频一区二区三区在线观看| 麻豆精品视频| 日韩在线视频线视频免费网站| 一区二区精品免费视频| 国内一区在线| 麻豆av一区二区三区久久| 国产不卡精品视男人的天堂| 欧美激情一级欧美精品| 国自在线精品视频| 日韩在线中文字幕| 日本新janpanese乱熟| 国产精品一色哟哟| 国产精品久久久久久久久借妻| 日韩a在线播放| 免费在线精品视频| 久久久久久久久爱| 亚洲成人精品电影在线观看| 国产伦精品一区二区三区免费视频| 国产成人啪精品视频免费网| 亚洲精品成人a8198a| 精品视频在线观看一区二区| 久久精品国产亚洲精品2020| 色综合久久av| 91av成人在线| 亚洲乱码一区二区三区| 国产免费黄色一级片| 国产精品久久久久免费a∨大胸| 日韩免费毛片| 久久精品无码中文字幕| 日韩av色在线| 国产精品99久久久久久久久久久久 | 久久人人爽亚洲精品天堂| 亚洲成人一区二区三区| 国产男女猛烈无遮挡91| 欧美成aaa人片免费看| 激情小说综合区| 国产精品日日摸夜夜添夜夜av| 奇米888一区二区三区| 国产a级片免费看| 日本国产在线播放| 久操网在线观看| 欧美在线视频导航| 久久精品成人动漫| 欧美二区三区| 欧美伦理91i| 国产精品一区二区性色av| 精品免费久久久久久久| 国产一区喷水| 国产精品观看在线亚洲人成网| 蜜桃精品久久久久久久免费影院 | 久久久久久久久久av| 欧美亚洲国产日本| 国产精品海角社区在线观看| 国产真实乱子伦| 一区二区视频在线免费| 91精品国产自产在线老师啪| 手机看片日韩国产| 色天天综合狠狠色| 精品日本一区二区| 欧美激情视频一区| 国产男人精品视频| 亚洲综合日韩在线| 久久久性生活视频| 欧美性受xxxx黑人猛交88| 国产精品第2页| 91久久久久久久一区二区| 欧美一区二区大胆人体摄影专业网站| 久久久久久a亚洲欧洲aⅴ| 欧洲亚洲免费视频| 精品久久久久久中文字幕动漫| 国产精品一区二区在线| 日韩av播放器| 国产精品福利网| 91av一区二区三区| 黄色网页免费在线观看| 伦理中文字幕亚洲| av一区二区在线看| 欧美在线性视频| 一卡二卡三卡视频| 久久人人爽亚洲精品天堂| av一区二区三区免费观看| 欧美专区一二三| 亚洲在线一区二区| 久久久精品一区二区| 国产精品综合久久久久久| 日韩欧美99| 一区二区三区欧美在线| 国产成人精品视频免费看| www.九色.com| 国内精品小视频在线观看| 亚洲va韩国va欧美va精四季| 国产精品无码av在线播放| 91精品国产高清久久久久久91 | 欧美激情精品久久久久久蜜臀| 91免费版看片| 欧美凹凸一区二区三区视频| 亚洲一区二区三区在线观看视频| 精品国产美女在线| 91免费看蜜桃| 免费观看美女裸体网站| 日韩在线一级片| 国产99午夜精品一区二区三区| 国产成人91久久精品| 粉嫩精品一区二区三区在线观看 | 国产免费亚洲高清| 精品欧美一区二区三区久久久| 婷婷久久伊人| 中文字幕一区二区三区四区五区人| 久久视频国产精品免费视频在线| 久久久视频免费观看| 国产精品自拍合集| 加勒比成人在线| 欧美在线精品免播放器视频| 日韩av色综合| 五月天婷亚洲天综合网鲁鲁鲁| 国产精品观看在线亚洲人成网| 久久狠狠久久综合桃花| 91精品视频播放| 国产乱码精品一区二区三区中文| 
国语自产精品视频在线看 | 久久精品2019中文字幕| 国产成人一区三区| 成人av电影免费| 国产日韩换脸av一区在线观看| 欧美人与性禽动交精品| 日韩精品视频一区二区在线观看| 无码人妻精品一区二区蜜桃百度 | 91精品国产91久久久久麻豆 主演 91精品国产91久久久久青草 | 狠狠干 狠狠操| 免费在线观看毛片网站| 欧美日韩视频免费在线观看| 日本电影一区二区三区| 日韩亚洲在线视频| 日本黄网免费一区二区精品| 午夜精品久久久久久久久久久久 | 91av一区二区三区| 97国产在线播放| 91精品久久久久久久久久 | 久久艳妇乳肉豪妇荡乳av| 91精品国产高清自在线看超| av免费观看网| 99亚洲国产精品| 69av在线播放| 久久av二区| 久久精品夜夜夜夜夜久久| 国产精品视频网站| 久久中文字幕国产| 国产999精品视频| 亚洲成人精品电影在线观看| 日本一区视频在线观看免费| 日本精品视频在线观看| 欧美 国产 综合| 国产日韩综合一区二区性色av|