Coursework 2 – Tic-Tac-Toe: Markov Decision Processes & Reinforcement Learning (worth 25% of your final mark)
Deadline: Thursday, 28th November 2024
How to Submit: To be submitted to GitLab (via git commit & push). Commits are
timestamped: all commits after the deadline will be considered late.
Introduction
Coursework 2 is an individual assignment in which you will implement Value
Iteration, Policy Iteration, and Q-Learning agents that plan/learn to play the 3x3
Tic-Tac-Toe game. You will test your agents against the rule-based agents that are
provided. You can also play against all of the agents, including your own, to test them.
The Starter Code for this project is commented extensively to guide you, and includes
Javadoc under the src/main/javadoc/ folder in the main project folder – you should read
these carefully to learn how to use the classes. The project comprises the files below.
You should get the Starter Code from GitLab. Follow the step-by-step instructions in
the document I have put together for you:
Open Canvas -> F29AI -> Modules -> GitLab (and Git) Learning Materials (Videos and
Crib Sheets) -> Introduction to Eclipse, Git & GitLab.
If you are unfamiliar with git and/or GitLab, I strongly suggest watching Rob
Stewart's instructive videos on Canvas under the same module.
Files you will edit & submit
ValueIterationAgent.java – A Value Iteration agent for solving the Tic-Tac-Toe game with an assumed MDP model.
PolicyIterationAgent.java – A Policy Iteration agent for solving the Tic-Tac-Toe game with an assumed MDP model.
QLearningAgent.java – A Q-learning, Reinforcement Learning agent for the Tic-Tac-Toe game.
Files you should read & use but shouldn't need to edit
Game.java – The 3x3 Tic-Tac-Toe game implementation.
TTTMDP.java – Defines the Tic-Tac-Toe MDP model.
TTTEnvironment.java – Defines the Tic-Tac-Toe Reinforcement Learning environment.
Agent.java – Abstract class defining a general agent, which other agents subclass.
HumanAgent.java – Defines a human agent that uses the command line to ask the user for the next move.
RandomAgent.java – A Tic-Tac-Toe agent that plays randomly according to a RandomPolicy.
Move.java – Defines a Tic-Tac-Toe game move.
Outcome.java – A transition outcome tuple (s, a, r, s').
Policy.java – An abstract class defining a policy; you should subclass this to define your own policies.
TransitionProb.java – A tuple containing an Outcome object and the probability of that Outcome occurring.
RandomPolicy.java – A subclass of Policy; a random policy used by a RandomAgent instance.
What to submit: You will fill in portions of ValueIterationAgent.java,
PolicyIterationAgent.java and QLearningAgent.java during the assignment.
Commit & push your changes to your fork of the repository; do this frequently so
nothing is lost. Automatic unit tests for this project will be added soon, which
means that you'll be able to see whether your code passes the tests, both locally and on
GitLab. I will send an announcement once I've uploaded the tests.
PLEASE DO NOT UPLOAD YOUR SOLUTIONS TO A PUBLIC REPOSITORY. We have
spent a great deal of time writing the code & designing the coursework and want to be
able to reuse this coursework in the coming years.
Evaluation: Your code will be tested on GitLab for correctness using Maven & the Java
unit-testing framework. Please do not change the names of any provided functions or
classes within the code, or you will break the tests.
Mistakes in the code: If you are sure you have found a mistake in the current code, let
me or the lab helpers know and we will fix it.
Plagiarism: While you are welcome to discuss the problem together in the labs, we will
be checking your code against other submissions in the class for logical redundancy. If
you copy someone else's code and submit it with minor changes, we will know. These
cheat detectors are quite hard to fool, so please don't try. We trust you all to submit
your own work only; please don't let us down. If you do, we will pursue the strongest
consequences with the school that are available to us.
Getting Help: You are not alone! If you find yourself stuck on something, ask in the
labs. You can ask for help on GitLab too – but it means you will need to commit & push
your code first: don’t worry, you won’t be judged until the deadline. It’s good practice to
commit & push your code frequently to the repository, even if it doesn’t work.
We want this coursework to be intellectually rewarding and fun.
MDPs & Reinforcement Learning
To get started, run Game.java without any parameters and you'll be able to play against
the RandomAgent using the command line. From within the top-level, main project folder:
java -cp target/classes/ ticTacToe.Game
You should be able to win or draw easily against this agent – not a very good agent!
You can control many aspects of the Game, but mainly which agents will play each
other. A full list of options is available by running:
java -cp target/classes/ ticTacToe.Game -h
Use the -x & -o options to specify the agents that you want to play the game. Your own
agents, namely the Value Iteration, Policy Iteration, and Q-Learning agents, are denoted
vi, pi & ql respectively, and can only play X in the game. This sidesteps the problem of
dealing with isomorphic state spaces (mapping X's to O's and O's to X's in this case). For
example, if you want two RandomAgents to play out the game, you do it like this:
java -cp target/classes/ ticTacToe.Game -x random -o random
Look at the console output that accompanies playing the game. You will be told about
the rewards that the 'X' agent receives. The 'O' agent is always assumed to be part of
the environment.
Question 1 (6 points): Write a value iteration agent in ValueIterationAgent.java,
which has been partially specified for you. Here you need to implement the iterate() &
extractPolicy() methods. The former should perform value iteration for a number of
steps (k steps – this is one of the fields of the class) and the latter should extract the
policy from the computed values.
Your value iteration agent is an offline planner, not a reinforcement learning agent, so
the relevant training option is the number of iterations of value iteration it should run
in its initial planning phase – you can change this in ValueIterationAgent.java.
ValueIterationAgent constructs a TTTMDP object when it is created – you do not need to
change this class, but use it in your value iteration implementation to generate the set of
next game states (the sPrimes), and their associated probabilities & rewards, when executing
a move from a particular game state (a Game object). You can do this using the provided
generateTransitions method in the TTTMDP class, which effectively gives you a
probability distribution over Outcomes.
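To make the update concrete, here is a minimal sketch of one sweep of the Bellman backup inside iterate(). It assumes names such as valueFunction (the HashMap described below), mdp (the TTTMDP field), discount, and g.getPossibleMoves(), plus TransitionProb fields prob and outcome and Outcome fields localReward and sPrime – these are guesses at the Starter Code's API, so check the Javadoc for the real names:

    for (Game g : valueFunction.keySet()) {
        // Terminal values stay at 0 (see "Value of Terminal States" below)
        if (g.isTerminal()) {
            valueFunction.put(g, 0.0);
            continue;
        }
        double best = Double.NEGATIVE_INFINITY;
        for (Move m : g.getPossibleMoves()) {
            double q = 0.0;
            // generateTransitions gives a distribution over Outcomes (s,a,r,s')
            for (TransitionProb tp : mdp.generateTransitions(g, m)) {
                // the reward depends on the target state sPrime (see the Note below)
                q += tp.prob * (tp.outcome.localReward
                        + discount * valueFunction.get(tp.outcome.sPrime));
            }
            best = Math.max(best, q);
        }
        valueFunction.put(g, best);
    }

Running this sweep k times yields the k-step estimates Vk. Whether you update in place or write each sweep into a fresh HashMap is up to you – both variants converge here.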
Value iteration computes k-step estimates of the optimal values, Vk. You will see that
the value function Vk is stored as a Java HashMap from Game objects (states) to
double values. The corresponding hashCode function for Game objects has been
implemented, so you can safely use whole Game objects as keys in the HashMap.
Note: You may assume that 50 iterations is enough for convergence in this question.
Note: Unlike the MDPs seen in class, in the CW2 implementation your agent receives a
reward when entering a state – the reward simply depends on the target state, rather
than on the source state, action, and target state. This means that there is no imagined
terminal state outside the game as in the lectures. Don't worry – all the methods you
have learned are compatible with this setting.
Note: The O agent is modelled as part of the environment, so once your agent
(X) takes an action, the next observed state will already include O's move. Your agent
need NOT care about the intermediate game state where only it has played and the
opponent has not yet responded.
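Once the values have been computed, extractPolicy() is one final greedy step over the same quantities. A minimal sketch under the same naming assumptions as before, additionally assuming the extracted policy can be represented as a map from Game states to Moves:

    HashMap<Game, Move> policy = new HashMap<>();
    for (Game g : valueFunction.keySet()) {
        if (g.isTerminal())
            continue;                  // no move is needed in a terminal state
        Move bestMove = null;
        double best = Double.NEGATIVE_INFINITY;
        for (Move m : g.getPossibleMoves()) {
            double q = 0.0;
            for (TransitionProb tp : mdp.generateTransitions(g, m)) {
                q += tp.prob * (tp.outcome.localReward
                        + discount * valueFunction.get(tp.outcome.sPrime));
            }
            if (q > best) {            // keep the argmax move
                best = q;
                bestMove = m;
            }
        }
        policy.put(g, bestMove);
    }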
The following command loads your ValueIterationAgent, which will compute a policy
and execute it 10 times against the other agent which you specify, e.g. random or
aggressive. The -s option specifies which agent goes first (X or O); by default, the X
agent goes first.
java -cp target/classes/ ticTacToe.Game -x vi -o random -s x
Question 2 (1 point): Test your Value Iteration Agent against each of the provided
agents 50 times and report on the results – how many games it won, lost & drew
against each of the other rule-based agents. The rule-based agents are: random,
aggressive, defensive.
This should take the form of a very short .pdf report named: vi-agent-report.pdf.
Commit this together with your code, and push to your fork.
Question 3 (6 points): Write a Policy Iteration agent in PolicyIterationAgent.java by
implementing the initRandomPolicy(), evaluatePolicy(), improvePolicy() &
train() methods. The evaluatePolicy() method should evaluate the current policy
(see your lecture notes), specified in the curPolicy field (which your
initRandomPolicy() initializes). The values for the current policy should be
stored in the provided policyValues map. The improvePolicy() method performs the
policy improvement step, and updates curPolicy.
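A minimal sketch of how these methods might fit together in train(), assuming curPolicy behaves like a Game-to-Move map and that improvePolicy() returns whether the policy changed (that return value is purely illustrative – the Starter Code may structure this differently):

    public void train() {
        initRandomPolicy();                  // arbitrary starting policy
        boolean policyChanged = true;
        while (policyChanged) {
            evaluatePolicy();                // fills policyValues for curPolicy
            policyChanged = improvePolicy(); // greedy step; false once stable
        }
    }

    // Inside evaluatePolicy(): repeat this sweep until no value changes by
    // more than some small threshold. Unlike value iteration there is no
    // max over moves – the policy fixes the move taken in each state.
    for (Game g : policyValues.keySet()) {
        if (g.isTerminal()) {
            policyValues.put(g, 0.0);
            continue;
        }
        Move m = curPolicy.get(g);           // assumed Game -> Move lookup
        double v = 0.0;
        for (TransitionProb tp : mdp.generateTransitions(g, m)) {
            v += tp.prob * (tp.outcome.localReward
                    + discount * policyValues.get(tp.outcome.sPrime));
        }
        policyValues.put(g, v);
    }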
Question 4 (1 point): As in Question 2, this time test your Policy Iteration Agent
against each of the provided agents 50 times and report on the results – how many
games it won, lost & drew. The other agents are: random, aggressive, defensive.
This should take the form of a very short .pdf report named: pi-agent-report.pdf.
Commit this together with your code, and push to your fork.
Questions 5 & 6 are on Reinforcement Learning:
Question 5 (5 points): Write a Q-Learning agent in QLearningAgent.java by
implementing the train() & extractPolicy() methods. Your agent should follow an
epsilon-greedy policy during training (and only during training – during testing it should
follow the extracted policy). Your agent will need to train for many episodes before the
Q-values converge. Although default values have been set/given in the code, you are
strongly encouraged to play around with the hyperparameters of Q-learning: the learning
rate (alpha), the number of episodes to train, as well as the epsilon in the epsilon-greedy
policy followed during training.
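A minimal sketch of a single training step follows. Almost every name here is an assumption for illustration: env for the TTTEnvironment, executeMove(...) returning an Outcome with fields s, move, localReward and sPrime, helpers getQ/setQ/maxQ/greedyMove over your Q-value table, a java.util.Random instance called random, and hyperparameters alpha, gamma and epsilon – the real API is in the Javadoc:

    Game s = env.getCurrentGameState();          // hypothetical accessor
    List<Move> moves = s.getPossibleMoves();

    // epsilon-greedy: explore with probability epsilon, otherwise exploit
    Move m = (random.nextDouble() < epsilon)
            ? moves.get(random.nextInt(moves.size())) // random exploration
            : greedyMove(s);                          // your argmax_a Q(s,a) helper

    // the environment applies your move AND O's reply, then hands back (s,a,r,s')
    Outcome o = env.executeMove(m);

    // Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (r + gamma * max_a' Q(s',a'));
    // a terminal successor contributes 0, consistent with the note on
    // terminal states below
    double target = o.localReward
            + gamma * (o.sPrime.isTerminal() ? 0.0 : maxQ(o.sPrime));
    setQ(o.s, o.move, (1 - alpha) * getQ(o.s, o.move) + alpha * target);

train() wraps steps like this in a loop over many episodes, resetting the environment between episodes; extractPolicy() then simply returns greedyMove(s) for every state with recorded Q-values.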
Question 6 (1 point): Like the previous questions, test your Q-Learning Agent against
each of the provided agents 50 times and report on the results – how many games it
won, lost & drew. The other agents are: random, aggressive, defensive.
This should take the form of a very short .pdf report named: ql-agent-report.pdf.
Commit this together with your code, and push to your fork.
Javadoc: There are extensive comments in the code, both Javadoc (under the folder doc/ in
the project folder) and inline. You should read these carefully to understand what is
going on, and which methods to call/use. They might also contain hints in the right
direction.
Value of Terminal States: you need to be careful about the values of terminal states –
terminal states are states where X has won, states where O has won, and states where
the game is a draw. The value of these game states – V(g) – should under all
circumstances and in all iterations be set to 0. Here's why: to find the optimal value
of a state you will be looping over all possible actions from that state. For terminal states
this set is empty, which might, depending on how you implement finding the
maximum, leave the value of the terminal state at a very low sentinel value (e.g.
Double.NEGATIVE_INFINITY – note that Java's Double.MIN_VALUE is the smallest
positive double, so it is not a safe sentinel either). To avoid this, for every game
state g whose optimal value you are calculating, CHECK IF IT
IS A TERMINAL STATE (using g.isTerminal()); if it is, set its value to 0, and
move to the next game state (you can use the 'continue;' statement inside your
loop). Note that your agent will already have received its reward when
transitioning INTO that state, not out of it.
Testing your agent: If everything is working well and you have the right parameters
(e.g. reward function), your agents should never lose.
You can play around with the reward values in the TTTMDP class – especially try
increasing or decreasing the negative losing reward. Making this reward more
negative encourages your agent to prefer defensive moves over attacking moves. This
will change the agents' behavior (for both Policy & Value Iteration) and
should encourage your agent to never lose the game. Machine Learning isn't like
Mathematics with complete certainty – you almost always have to experiment to get the
parameters of your model right!
