COMP9417 - Machine Learning
Homework 3: MLEs and Kernels
Introduction In this homework we first continue our exploration of the bias, variance and MSE of estimators. We will show that MLE estimators are not necessarily unbiased, which may affect their performance in small samples. We then delve into kernel methods: first by kernelizing a popular algorithm used in unsupervised learning, known as K-means, and then by looking at kernel SVMs and comparing them to linear SVMs fitted on feature-transformed data.
Points Allocation There are a total of 28 marks.
• Question 1 a): 2 marks
• Question 1 b): 2 marks
• Question 1 c): 4 marks
• Question 2 a): 1 mark
• Question 2 b): 1 mark
• Question 2 c): 2 marks
• Question 2 d): 2 marks
• Question 2 e): 2 marks
• Question 2 f): 3 marks
• Question 2 g): 2 marks
• Question 3 a): 1 mark
• Question 3 b): 1 mark
• Question 3 c): 1 mark
• Question 3 d): 1 mark
• Question 3 e): 3 marks
What to Submit
• A single PDF file which contains solutions to each question. For each question, provide your solution
in the form of text and requested plots. For some questions you will be requested to provide screen
shots of code used to generate your answer — only include these when they are explicitly asked for.
• .py file(s) containing all code you used for the project, which should be provided in a separate .zip file. This code must match the code provided in the report.
• You may be deducted points for not following these instructions.
• You may be deducted points for poorly presented/formatted work. Please be neat and make your
solutions clear. Start each question on a new page if necessary.
• You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from
developing your code in a notebook and then copying it into a .py file though, or using a tool such as
nbconvert or similar.
• We will set up a Moodle forum for questions about this homework. Please read the existing questions
before posting new questions. Please do some basic research online before posting questions. Please
only post clarification questions. Any questions deemed to be fishing for answers will be ignored
and/or deleted.
• Please check Moodle announcements for updates to this spec. It is your responsibility to check for
announcements about the spec.
• Please complete your homework on your own, do not discuss your solution with other people in the
course. General discussion of the problems is fine, but you must write out your own solution and
acknowledge if you discussed any of the problems in your submission (including their name(s) and
zID).
• As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
• You may not use SymPy or any other symbolic programming toolkits to answer the derivation questions.
 This will result in an automatic grade of zero for the relevant question. You must do the
derivations manually.
When and Where to Submit
• Due date: Week 8, Monday July 15th, 2024 by 5pm. Please note that the forum will not be actively
monitored on weekends.
• Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example,
 if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be
80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
• Submission must be made on Moodle, no exceptions.
Question 1. Maximum Likelihood Estimators and their Bias
Let $X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} N(\mu, \sigma^2)$. Recall that in Tutorial 2 we showed that the MLE estimators of $\mu$ and $\sigma^2$ are $\hat{\mu}_{\text{MLE}}$ and $\hat{\sigma}^2_{\text{MLE}}$, where
$$
\hat{\mu}_{\text{MLE}} = \bar{X}, \qquad \hat{\sigma}^2_{\text{MLE}} = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2.
$$
In this question, we will explore these estimators in more depth.
(a) Find the bias and variance of both $\hat{\mu}_{\text{MLE}}$ and $\hat{\sigma}^2_{\text{MLE}}$. Hint: you may use without proof the fact that
$$
\mathrm{var}\!\left(\frac{1}{\sigma^2}\sum_{i=1}^{n}(X_i - \bar{X})^2\right) = 2(n-1).
$$
What to submit: the bias and variance of the estimators, along with your working.
(b) Your friend tells you that they have a much better estimator for σ. Discuss whether this estimator is better or worse than the MLE estimator. Be sure to include a detailed analysis of the bias and variance of both estimators, and describe what happens to each of these quantities (for each of the estimators) as the sample size n increases (use plots). For your plots, you can assume that σ = 1.
What to submit: the bias and variance of the new estimator, a plot comparing the bias of both estimators as a function of the sample size n, and a plot comparing the variance of both estimators as a function of the sample size n; use labels/legends in your plots. A copy of the code used here in solutions.py.
(c) Compute and then plot the MSE of the two estimators considered in the previous part. For your plots, you can assume that σ = 1. Provide some discussion as to which estimator is better (according to their MSE), and what happens as the sample size n gets bigger.
What to submit: the MSEs of the two variance estimators, a plot comparing the MSEs of the estimators as a function of the sample size n, and some commentary. Use labels/legends in your plots. A copy of the code used here in solutions.py.
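Once the bias and variance of $\hat{\sigma}^2_{\text{MLE}}$ have been derived analytically, a quick Monte Carlo simulation is a useful sanity check. The sketch below is an editor's illustration (not part of the required submission): it compares empirical estimates against the closed-form values bias $= -\sigma^2/n$ and var $= 2(n-1)\sigma^4/n^2$, which follow from the hint in part (a).

```python
import numpy as np

# Monte Carlo sanity check for the MLE variance estimator
# sigma2_mle = (1/n) * sum((X_i - Xbar)^2) on N(mu, sigma^2) data.
# Theory: bias = -sigma^2 / n, var = 2 (n - 1) sigma^4 / n^2.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 0.0, 1.0, 10, 200_000

X = rng.normal(mu, sigma, size=(reps, n))
sigma2_mle = X.var(axis=1)  # ddof=0 gives the 1/n (MLE) version

emp_bias = sigma2_mle.mean() - sigma**2
emp_var = sigma2_mle.var()

print(f"empirical bias = {emp_bias:.4f}  (theory: {-sigma**2 / n:.4f})")
print(f"empirical var  = {emp_var:.4f}  (theory: {2 * (n - 1) * sigma**4 / n**2:.4f})")
```

With n = 10 and σ = 1 the theoretical bias is −0.1 and the theoretical variance is 0.18, and the empirical values land very close to these.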
Question 2. A look at clustering algorithms
Note: Using an existing/online implementation of the algorithms described in this question will
result in a grade of zero. You may use code from the course with reference.
The K-means algorithm is the simplest and most intuitive clustering algorithm available. The algorithm
takes two inputs: the (unlabeled) data X1, . . . , Xn and a desired number of clusters K. The goal is to
identify K groupings (which we refer to as clusters), with each group containing a subset of the original
data points. Points that are deemed similar/close to each other will be assigned to the same grouping.
Algorithmically, given a set number of iterations T, we do the following:
1. Initialization: start with an initial set of K means (cluster centers): $\mu_1^{(0)}, \mu_2^{(0)}, \dots, \mu_K^{(0)}$.
2. For t = 1, 2, . . . , T:
• For i = 1, 2, . . . , n: assign $X_i$ to its nearest mean, i.e. solve
$$
k_i = \arg\min_{k \in \{1,\dots,K\}} \lVert X_i - \mu_k^{(t-1)} \rVert_2^2. \tag{1}
$$
• For k = 1, . . . , K: set $C_k^{(t)} = \{X_i : k_i = k\}$ and update each mean to be the average of its cluster:
$$
\mu_k^{(t)} = \frac{1}{\lvert C_k^{(t)} \rvert} \sum_{X_i \in C_k^{(t)}} X_i.
$$
(a) Consider the following data-set of n = 5 points in R². Run the K-means algorithm on it by hand. Be sure to show your working.
What to submit: your cluster centers and any working, either typed or handwritten.
(b) Your friend tells you that they are working on a clustering problem at work. You ask for more
details and they tell you they have an unlabelled dataset with p = 10000 features and they ran
K-means clustering using Euclidean distance. They identified 52 clusters and managed to define
labellings for these clusters based on their expert domain knowledge. What do you think about the
usage of K-means here? Do you have any criticisms?
What to submit: some commentary.
(c) Consider the data and random clustering generated using the following code snippet:

import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets

X, y = datasets.make_circles(n_samples=200, factor=0.4, noise=0.04, random_state=13)
colors = np.array(['orange', 'blue'])

np.random.seed(123)
random_labeling = np.random.choice([0, 1], size=X.shape[0])
plt.scatter(X[:, 0], X[:, 1], s=20, color=colors[random_labeling])
plt.title("Randomly Labelled Points")
plt.savefig("Randomly_Labeled.png")
plt.show()

The random clustering plot is displayed here:
¹Recall that for a set S, |S| denotes its cardinality. For example, if S = {4, 9, 1} then |S| = 3.
²The notation in the summation here means we are summing over all points belonging to the k-th cluster at iteration t, i.e. $C_k^{(t)}$.
Implement K-means clustering from scratch on this dataset. Initialize the following two cluster centers:
and run for 10 iterations. In your answer, provide a plot of your final clustering (after 10 iterations), similar to the randomly labelled plot but with your computed labels in place of the random labelling. Do you think K-means does a good job on this data? Provide some discussion on what you observe.
What to submit: some commentary, a single plot, a screen shot of your code and a copy of your code in your .py file.
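For reference, the from-scratch clustering loop asked for above can be sketched as follows. This is only an illustrative sketch: the spec's two initial centers are not reproduced in this copy, so the centers below are the editor's assumption, and the two-ring data is generated directly with NumPy (mimicking make_circles) to keep the snippet self-contained.

```python
import numpy as np

# Minimal K-means sketch (editor's illustration; the spec's initial
# centers are not reproduced here, so these are assumed values).
def kmeans(X, centers, n_iters=10):
    centers = centers.copy()
    for _ in range(n_iters):
        # nearest-mean step: assign each point to its closest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # update step: recompute each center as the mean of its cluster
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

# Two concentric rings, similar in shape to sklearn's make_circles output.
rng = np.random.default_rng(13)
theta = rng.uniform(0, 2 * np.pi, 200)
radius = np.where(np.arange(200) < 100, 1.0, 0.4)
X = np.c_[radius * np.cos(theta), radius * np.sin(theta)]
X += rng.normal(scale=0.04, size=X.shape)

init = np.array([[-1.0, 0.0], [1.0, 0.0]])  # assumed initial centers
labels, centers = kmeans(X, init)
print(np.bincount(labels))  # cluster sizes after 10 iterations
```

Because Euclidean K-means with K = 2 always splits the plane along a straight (Voronoi) boundary, it cannot recover two concentric rings; that limitation is the behaviour the question asks you to discuss.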
(d) You decide to extend your implementation by considering a feature transformation φ which maps 2-dimensional points (x1, x2) into 3-dimensional points of the form:
Run your K-means algorithm (for 10 iterations) on the transformed data with cluster centers:
Note for reference that the nearest mean step of the algorithm is now
$$
k_i = \arg\min_{k \in \{1,\dots,K\}} \lVert \phi(X_i) - \mu_k^{(t-1)} \rVert_2^2.
$$
In your answer, provide a plot of your final clustering using the code provided in (c) as a template. Provide some discussion on what you observe.
What to submit: a single plot, a screen shot of your code and a copy of your code in your .py file, some commentary.
(e) You recall (from lectures perhaps) that directly applying a feature transformation to the data can be computationally intractable, and can be avoided if we instead write the algorithm in terms of a function h that satisfies $h(x, x') = \langle \phi(x), \phi(x') \rangle$. Show that the nearest mean step in (1) can be re-written as
$$
k_i = \arg\min_{k \in \{1,\dots,K\}} \left\{ h(X_i, X_i) + T_1 + T_2 \right\},
$$
where $T_1$ and $T_2$ are two separate terms that may depend on $C_k^{(t-1)}$, $h(X_i, X_j)$ and $h(X_j, X_\ell)$ for $X_j, X_\ell \in C_k^{(t-1)}$. The expressions should not depend on φ.
What to submit: your full working.
(f) With your answer to the previous part, you design a new algorithm: given data X1, . . . , Xn, the number of clusters K, and the number of iterations T:
1. Initialization: start with an initial set of K clusters: $C_1^{(0)}, C_2^{(0)}, \dots, C_K^{(0)}$.
2. For t = 1, 2, 3, . . . , T:
• For i = 1, 2, . . . , n: solve
$$
k_i = \arg\min_{k \in \{1,\dots,K\}} \left\{ h(X_i, X_i) + T_1 + T_2 \right\}.
$$
• For k = 1, . . . , K, set $C_k^{(t)} = \{X_i : k_i = k\}$.
The goal of this question is to implement this new algorithm from scratch using the same data generated in part (c). In your implementation, you will run the algorithm two times: first with the function
$$
h_1(x, x') = 1 + \langle x, x' \rangle,
$$
and then with the function
$$
h_2(x, x') = (1 + \langle x, x' \rangle)^2.
$$
For your initialization (both times), use the provided initial clusters, which can be loaded in by running initial_clusters = np.load('init_clusters.npy'). Run your code for at most 10 iterations, and provide two plots, one for h1 and another for h2. Discuss your results for the two functions.
What to submit: two plots, your discussion, a screen shot of your code and a copy of your code in your .py file.
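When implementing this algorithm, it is convenient to evaluate h1 and h2 once as full Gram matrices, so that the T1 and T2 terms can later be computed by indexing rows and columns rather than re-evaluating h. The sketch below is an editor's illustration of just that precomputation (the toy data is a random stand-in, not the part (c) data):

```python
import numpy as np

# Editor's sketch: the two functions named in part (f), evaluated as
# full Gram matrices so that G[i, j] = h(X_i, X_j) for all pairs.
def gram_h1(X):
    return 1.0 + X @ X.T             # h1(x, x') = 1 + <x, x'>

def gram_h2(X):
    return (1.0 + X @ X.T) ** 2      # h2(x, x') = (1 + <x, x'>)^2

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))          # toy stand-in for the part (c) data

for G in (gram_h1(X), gram_h2(X)):
    # Valid kernels produce symmetric positive semi-definite Gram matrices.
    assert np.allclose(G, G.T)
    assert np.linalg.eigvalsh(G).min() > -1e-9
```

Precomputing G once means the per-iteration assignment step never touches φ, which is the whole point of the kernelized rewrite in part (e).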
(g) The initializations of the algorithms above were chosen very specifically, both in part (d) and part (f). Investigate different choices of initializations for your implemented algorithms. Do your results look similar, better or worse? Comment on the pros/cons of your algorithm relative to K-means, and more generally as a clustering algorithm. For full credit, you need to provide justification in the form of a rigorous mathematical argument and/or empirical demonstration.
What to submit: your commentary.
Question 3. Kernel Power
Consider the following 2-dimensional data-set, where y denotes the class of each point.

index   x1   x2    y
  1      1    0   -1
  2      0    1   -1
  3      0   -1   -1
  4     -1    0   +1
  5      0    2   +1
  6      0   -2   +1
  7     -2    0   +1

Throughout this question, you may use any desired packages to answer the questions.
(a) Use the transformation x = (x1, x2) ↦ (φ1(x), φ2(x)), where $\phi_1(x) = 2x_2^2 - 4x_1 + 1$ and $\phi_2(x) = x_1^2 - 2x_2 - 3$. What is the equation of the best separating hyperplane in the new feature space? Provide a plot with the data set and hyperplane clearly shown.
What to submit: a single plot, the equation of the separating hyperplane, a screen shot of your code, a copy
of your code in your .py file for this question.
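Applying the part (a) transformation to the seven points in the table is straightforward; the following sketch computes the transformed coordinates (the points and formulas are exactly those given above):

```python
import numpy as np

# The seven points from the table, columns (x1, x2), and their labels.
X = np.array([[1, 0], [0, 1], [0, -1], [-1, 0], [0, 2], [0, -2], [-2, 0]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1, 1])

# Feature map from part (a): phi1 = 2*x2^2 - 4*x1 + 1, phi2 = x1^2 - 2*x2 - 3.
def phi(X):
    phi1 = 2 * X[:, 1] ** 2 - 4 * X[:, 0] + 1
    phi2 = X[:, 0] ** 2 - 2 * X[:, 1] - 3
    return np.c_[phi1, phi2]

Z = phi(X)
print(Z)  # rows: (-3,-2), (3,-5), (3,-1), (5,-2), (9,-7), (9,1), (9,1)
```

Note that in the transformed space the two classes become linearly separable: every y = -1 point has φ1 ≤ 3 while every y = +1 point has φ1 ≥ 5.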
(b) You wish to fit a hard margin SVM using the SVC class in sklearn. However, the SVC class only
fits soft margin SVMs. Explain how one may still effectively fit a hard margin SVM using the SVC
class. What to submit: some commentary.
(c) Fit a hard margin linear SVM to the transformed data-set in part (a). What are the estimated values of (α1, . . . , α7)? Based on this, which points are the support vectors? What error does your computed SVM achieve?
What to submit: the indices of your identified support vectors, the train error of your SVM, the computed
α’s (rounded to 3 d.p.), a screen shot of your code, a copy of your code in your .py file for this question.
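One standard way to emulate a hard-margin SVM with a soft-margin solver such as SVC is to set the penalty parameter C to a very large value, which makes margin violations effectively unaffordable; on separable data the solution then coincides with the hard-margin one. A sketch of part (c) along these lines (the transformed points come from the part (a) feature map; treat the large-C value as an assumed choice, not the only valid one):

```python
import numpy as np
from sklearn.svm import SVC

# Points and labels from the table, then the part (a) feature map:
# phi1 = 2*x2^2 - 4*x1 + 1, phi2 = x1^2 - 2*x2 - 3.
X = np.array([[1, 0], [0, 1], [0, -1], [-1, 0], [0, 2], [0, -2], [-2, 0]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1, 1])
Z = np.c_[2 * X[:, 1] ** 2 - 4 * X[:, 0] + 1, X[:, 0] ** 2 - 2 * X[:, 1] - 3]

# Very large C: slack is essentially forbidden, approximating a hard margin.
clf = SVC(kernel="linear", C=1e10)
clf.fit(Z, y)

print("train accuracy:", clf.score(Z, y))
# clf.support_ holds the (0-based) indices of the support vectors, and
# clf.dual_coef_ holds y_i * alpha_i for those support vectors.
print("support vector indices:", clf.support_)
print("y_i * alpha_i:", clf.dual_coef_.round(3))
```

Since the transformed data is linearly separable, the large-C fit achieves zero training error.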
(d) Consider now the kernel $k(x, z) = (2 + x^\top z)^2$. Run a hard-margin kernel SVM on the original (untransformed) data given in the table at the start of the question. What are the estimated values of (α1, . . . , α7)? Based on this, which points are the support vectors? What error does your computed SVM achieve?
What to submit: the indices of your identified support vectors, the train error of your SVM, the computed
α’s (rounded to 3 d.p.), a screen shot of your code, a copy of your code in your .py file for this question.
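For part (d), note that scikit-learn's polynomial kernel has the form (gamma * ⟨x, z⟩ + coef0)^degree, so k(x, z) = (2 + x⊤z)² can be reproduced with gamma=1, coef0=2, degree=2. A sketch, again using a large C as an assumed stand-in for the hard margin:

```python
import numpy as np
from sklearn.svm import SVC

# Original (untransformed) points and labels from the table.
X = np.array([[1, 0], [0, 1], [0, -1], [-1, 0], [0, 2], [0, -2], [-2, 0]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1, 1])

# sklearn's poly kernel is (gamma * <x, z> + coef0) ** degree, so these
# parameters reproduce k(x, z) = (2 + x^T z)^2 exactly.
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=2.0, C=1e10)
clf.fit(X, y)

print("train accuracy:", clf.score(X, y))
print("support vector indices:", clf.support_)
print("y_i * alpha_i:", clf.dual_coef_.round(3))
```

The feature space of this kernel contains all polynomials of degree at most 2 in (x1, x2), which includes the part (a) features φ1 and φ2, so a separating function exists and the kernel SVM also achieves zero training error.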
(e) Provide a detailed argument explaining your results in parts (a), (c) and (d). Your argument should explain the similarities and differences in the answers found. In particular, is your answer in (d) worse than in (c)? Why? To get full marks, be as detailed as possible, and use mathematical arguments or extra plots if necessary.
What to submit: some commentary and/or plots. If you use any code here, provide a screen shot of your code,
and a copy of your code in your .py file for this question.