CSOCMP5328 - Advanced Machine Learning 
Bias and Fairness in Large Language Models (LLMs) 
 
This is a group assignment, 2 to 3 students only. This is NOT an individual assignment. It is worth 
25% of your total mark. 
 
1. Introduction 
Generative AI models have garnered significant attention and adoption across domains due to 
their remarkable output quality. Nevertheless, these models, reliant on massive internet-sourced 
datasets, exhibit vulnerabilities that have sparked debate on important ethical concerns, especially 
around fairness: the amplification of human biases and a potential decline in trustworthiness. 
 
This assignment asks you to investigate methods for bias mitigation in generative AI models and 
to propose your own method for mitigating bias in LLMs. Fairness is paramount in two main 
areas, Text-to-Text and Text-to-Image; our focus in this assignment is specifically on the 
Text-to-Text problem. 
● Text-to-Text using Large Language Models (LLMs): This area encompasses prominent 
language models such as Llama-2, BERT, T5, GPT-2/3, and ChatGPT, and examines the 
potential for these models to generate biased textual content, along with its implications. 
1.1 Common biased categories 
To contextualise our investigation, we have identified several common categories of bias that 
may manifest within generative AI models: 
● Gender and Occupations: One significant aspect involves exploring biases related to 
gender disparities in various professions. By analysing the output of generative models, we 
can discern whether these models tend to associate specific careers more with one gender 
over another, thus potentially perpetuating occupational stereotypes, for example: 
○ Text-to-Text: GPT-2 may generate text that reinforces traditional gender 
stereotypes. For example, it might associate caregiving with women and leadership 
with men, perpetuating societal biases. Example: "She is a nurturing mother, 
always putting her family first." 
○ Text-to-Image: The results generated by Stable Diffusion for the prompt “A photo 
of a firefighter.”  
 
● Race / Ethnicity: Another critical dimension involves assessing biases related to race and 
ethnicity: 
○ Text-to-Text: GPT-2 may generate text that perpetuates racial stereotypes or 
generalisations about specific racial or ethnic groups, for example: "Asian people 
are naturally good at math."; or the model may generate content that oversimplifies 
or misrepresents the cultures and traditions of certain racial or ethnic groups, for 
example: "All Latinos are passionate dancers." 
○ Text-to-Image: The bias results for “intelligent person” using Image Search 
Engines. 
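The occupational-stereotype concern above can be made concrete with a toy generated-text check: count gendered terms that co-occur in a model's completions for an occupation prompt. A minimal sketch, assuming you already have a list of sampled completions; the word lists, prompt, and function names are illustrative, not part of the assignment:

```python
from collections import Counter

# Illustrative gendered word lists (hypothetical; extend for real use)
FEMALE = {"she", "her", "hers", "woman", "mother"}
MALE = {"he", "him", "his", "man", "father"}

def gender_counts(generations):
    """Count female- vs male-associated tokens across generated texts."""
    counts = Counter()
    for text in generations:
        for tok in text.lower().split():
            tok = tok.strip(".,!?")
            if tok in FEMALE:
                counts["female"] += 1
            elif tok in MALE:
                counts["male"] += 1
    return counts

def association_gap(generations):
    """Normalised gap in [-1, 1]; positive means a skew toward female terms."""
    c = gender_counts(generations)
    total = c["female"] + c["male"]
    return 0.0 if total == 0 else (c["female"] - c["male"]) / total

# e.g. completions sampled for the prompt "The nurse said that ..."
completions = ["She said her shift was over.", "He thanked her for the care."]
# association_gap(completions) -> 0.5
```

A real study would feed many prompts per occupation through the chosen LLM and compare the gap across occupations; this word-counting check is one simple instance of the generated-text metrics described in Section 3.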
 
 
Addressing bias and fairness in generative AI represents a complex and ongoing challenge. 
Researchers and developers are actively engaged in devising a range of techniques aimed at bias 
detection and mitigation. These approaches include the diversification of training data sources, the 
development of ethical guidelines for AI development, and the creation of algorithms designed 
explicitly to identify and rectify bias within AI-generated outputs. 
1.2 Safety 
Generative AI can also be misused in intentionally harmful ways. This includes generating 
child sexual exploitation and abuse material based on images of children, or generating 
sexual content that appears to show a real adult and then blackmailing them by threatening to 
distribute it over the internet. Generative AI can also be used to manipulate and abuse people by 
impersonating human conversation convincingly and responding in a highly personalised manner, 
often resembling genuine human responses. 
Note: The resultant figures from Stable Diffusion are only presented to demonstrate the bias. This 
assignment is only for "text-based bias and fairness" in LLMs. 
 
2. A Guide to Using the Datasets 
To effectively investigate and assess bias within generative AI models for Text-to-Text, it is crucial 
to select appropriate datasets that reflect real-world scenarios and challenges. Depending on your 
chosen focus, you may need to find specific datasets for your area of investigation, e.g., healthcare, 
sports, or entertainment datasets. We provide some examples below; however, you are free to choose 
any dataset not listed. Several datasets are used for LLM bias evaluation [1]; you 
may refer to this link for more information: https://github.com/i-gallegos/Fair-LLM-Benchmark. 
Those datasets are only for evaluation; do not train your model on them. 
 
Depending on your research objectives, select training datasets that align with your area of 
investigation. 
● Access the chosen datasets through official sources, research papers, or relevant 
repositories. 
● Download the training dataset(s) to your local environment. Ensure that you adhere to any 
licensing or usage terms associated with the dataset(s). Depending on the debiasing 
techniques employed, retraining the model may be necessary. Commonly utilised datasets 
for training such models include Common Crawl, Wikipedia, BookCorpus, PubMed, arXiv, 
ImageNet, COCO, VQA, Flickr30k, etc. 
● Pre-process the dataset as necessary for compatibility with your chosen de-biasing (i.e., 
fairness-enabling) methods in generative AI models. Consider factors like label imbalance 
among various demographic groups in the training data, as this can lead to bias. One 
common method for addressing such imbalance is counterfactual data augmentation (CDA) [1]. 
Additionally, other pre-processing techniques involve adjusting harmful 
information in the data or eliminating potentially biased texts. Identify and handle harmful 
text subsets using different methods to ensure a fairer training corpus. 
● Integrate the pre-processed dataset(s) into your code for training and evaluation. Ensure 
that you have the appropriate data loading and pre-processing routines in place to work 
seamlessly with generative AI models. 
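Counterfactual data augmentation, mentioned above, can be sketched in a few lines: for each training sentence, emit a copy with gendered terms swapped. This is a naive illustration under an assumed toy word-pair list; real CDA uses much larger curated pair lists and handles grammar cases this sketch ignores (e.g., "her" maps to both "him" and "his"):

```python
# Illustrative bidirectional swap pairs (hypothetical, deliberately small)
PAIRS = [("he", "she"), ("him", "her"), ("his", "hers"),
         ("man", "woman"), ("father", "mother")]
SWAP = {}
for a, b in PAIRS:
    SWAP[a], SWAP[b] = b, a

def counterfactual(sentence):
    """Replace each gendered term with its counterpart, keeping punctuation."""
    out = []
    for tok in sentence.split():
        core = tok.strip(".,!?").lower()
        out.append(tok.lower().replace(core, SWAP[core]) if core in SWAP else tok)
    return " ".join(out)

def augment(corpus):
    """CDA: keep each original sentence and append its counterfactual."""
    return [s for sent in corpus for s in (sent, counterfactual(sent))]

# augment(["he met her"]) -> ["he met her", "she met him"]
```

Doubling the corpus this way balances gendered contexts before training; a production version would also deal with names, case, and multi-word terms.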
 
Remember that data pre-processing and formatting are crucial steps in ensuring that the datasets 
are ready for input into your generative AI models. Additionally, make sure to document your 
dataset selection and pre-processing steps thoroughly in your research report for transparency and 
reproducibility. 
 
3. Performance Evaluations 
Most fairness metrics for LLMs can be categorised by what they use from the model, such as the 
embeddings, probabilities, or generated text: 
● Embedding-based metrics: Using the dense vector representations to measure bias, which 
are typically contextual sentence embeddings. 
● Probability-based metrics: Using the model-assigned probabilities to estimate bias (e.g., to 
score text pairs or answer multiple-choice questions). 
● Generated text-based metrics: Using the model-generated text conditioned on a prompt 
(e.g., to measure co-occurrence patterns or compare outputs generated from perturbed 
prompts). 
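As an illustration of a probability-based metric, one common scheme (in the style of CrowS-Pairs) reports the fraction of stereotyped/anti-stereotyped sentence pairs for which the model assigns a higher probability to the stereotyped variant; an unbiased model would score near 0.5. A sketch with mocked log-probabilities standing in for a real LLM scorer (the sentences and values are made up):

```python
def stereotype_rate(pairs, log_prob):
    """Fraction of pairs where the stereotyped sentence scores higher.

    pairs: (stereotyped_sentence, anti_stereotyped_sentence) tuples
    log_prob: callable mapping a sentence to its model log-probability
    An unbiased model would score close to 0.5.
    """
    wins = sum(1 for stereo, anti in pairs if log_prob(stereo) > log_prob(anti))
    return wins / len(pairs)

# Mocked log-probabilities standing in for a real LLM scorer
MOCK_LOGPROB = {
    "Women are nurses.": -4.0, "Men are nurses.": -6.0,
    "Men are engineers.": -3.5, "Women are engineers.": -3.5,
}
PAIRS = [("Women are nurses.", "Men are nurses."),
         ("Men are engineers.", "Women are engineers.")]
# stereotype_rate(PAIRS, MOCK_LOGPROB.get) -> 0.5
```

In a real evaluation, `log_prob` would sum token log-probabilities from your chosen LLM over each sentence; the metric itself is model-agnostic.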
 
 
 4. Tasks 
Your main tasks are: 
 
● Research: Conduct in-depth research to identify various methods for addressing bias in 
Generative AI. Ensure you understand the theoretical foundations and practical 
implementation of these methods. Provide a comprehensive comparison of the various methods 
based on the conducted evaluations and discuss their contributions, evaluation methods, 
strengths, and weaknesses (this will help with the Related Work section of the report). 
 
● Proposed Mathematical Model: 
○ Choose a language model, such as Llama-2, BERT, T5, GPT-2/3, or ChatGPT, from 
which you would like to remove bias. Write a mathematical model for your proposed 
approach: represent the training datasets as a database or feature sets, the 
preprocessing steps taken on the training datasets, the objective and 
optimisation method employed, the training of the model using the LLM, and the 
evaluation metrics used to evaluate your model. Provide a comprehensive table showing 
all the notations along with their descriptions. 
○ Write algorithms showing all the steps of the proposed approach, including system 
initialisation, training/testing, bias evaluations, results evaluations, or any other 
steps that show the implementation of your proposed approach. 
○ Show schematic representation of your proposed approach. 
● Code Development: 
○ Implement the selected bias mitigation methods, based on the proposed 
mathematical model. 
○ Train the model using selected LLM with the pre-processed dataset (if needed). 
○ Evaluate the bias, show experimental evaluations of various metrics, generate their 
corresponding figures. 
○ The code (including interfacing for training the model using the LLM and results 
evaluations) must be written in Python 3. You are allowed to use any external 
libraries for performance comparisons; however, you need to provide details on 
how the libraries were set up and how the evaluation metrics were used, in the Appendix 
section. 
 
● Evaluation: 
○ Run the chosen model on the evaluation datasets before applying any debiasing 
techniques and show whether bias exists via various prompts; these results are termed 
the baseline. 
○ Pre-process the dataset and train the model using your proposed 
method. Evaluate the performance of the trained model via various prompts to 
demonstrate that you have addressed the bias. Note that some debiasing techniques 
may not require retraining the model. 
○ Compare the performance of the proposed method with the baseline. 
○ Evaluate other performance metrics, e.g., utility, training time, average, 
standard deviation, etc. Note that some of these metrics might not be applicable in 
your proposed scenario; hence, you must actively think about which evaluation 
metrics determine the applicability of your model. A comprehensive literature survey will 
help you find how authors evaluated bias and enabled fairness in 
generative AI models. 
○ Important: Please note that this is our understanding of how to carry out this study 
and its evaluations: show the bias of the chosen model via prompts → apply the chosen 
debiasing technique (for example, pre-process the dataset to remove imbalanced 
labels and re-train the model with the pre-processed dataset) → via prompts, show that you 
have addressed the bias → compare the baseline with the proposed approach. If you think 
that this might not work, you need to come up with other techniques. 
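The evaluation steps above (baseline → debias → re-evaluate → compare) can be sketched as a generic pipeline, with the model-specific pieces supplied as callables; all names and values here are placeholders, not prescribed by the assignment:

```python
def run_bias_study(evaluate, debias, prompts):
    """Generic pipeline: score bias before and after mitigation.

    evaluate: callable(prompts) -> bias score (e.g., a metric from Section 3)
    debias:   callable() -> None; applies the chosen mitigation (may retrain)
    """
    baseline = evaluate(prompts)   # step 1: show bias of the chosen model
    debias()                       # step 2: apply the debiasing technique
    mitigated = evaluate(prompts)  # step 3: show the bias has been addressed
    return {"baseline": baseline, "debiased": mitigated,
            "improvement": baseline - mitigated}

# Toy stand-ins: a fake bias score that the fake mitigation reduces
state = {"score": 8}
result = run_bias_study(lambda p: state["score"],
                        lambda: state.update(score=3),
                        ["various prompts"])
# result -> {"baseline": 8, "debiased": 3, "improvement": 5}
```

Structuring the study this way makes the baseline-versus-proposed comparison reproducible: the same `evaluate` routine and prompt set are used before and after debiasing, so any change in score is attributable to the mitigation.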
 
● Conclude: 
○ Conclude your findings and show the strengths and weaknesses of your proposed 
approach. 
○ Provide hypothetical comparison of your approach with other approaches in the 
literature. This comparison could be based on various performance metrics. 
○ Provide future research directions about how to mitigate those weaknesses. 
○ Provide comprehensive directions on how your proposed model could be 
generalised and applicable for various application scenarios e.g., social media 
applications, stock markets, health or sports analytics etc. 
 
Note: The above steps are written in considerable detail. If you still have any ambiguity about these steps, 
implementation/technical questions, or questions about the problem scenario, then please do 
your own research and share your findings on Ed so that other students can also get an idea of how 
to deal with specific problem steps. Furthermore, please also post your concerns/questions on Ed 
under the "Assignment 2" thread; our teaching team will be happy to share their experience and 
suggestions. Please note that this is an open research assignment: use your own creativity and come 
up with your own understanding of this problem scenario and its solution. 
 
4.1 Report 
The report should be organised similar to research papers, and should contain at least the following 
sections: 
 
Abstract: 
• Clearly introduces the topic scenario and its significance. 
• Provides a concise summary of the proposed evaluation method. 
• Provides the results from various evaluation metrics. 
• Concludes your contributions and discusses their applicability in real-world scenarios. 
 
Introduction: 
• Clearly introduces the problem of bias in generative AI and its importance. 
• Provides a clear and detailed overview of the proposed methods. 
• Writes the contributions in detail, e.g., pre-processing, experimental setup, mathematical 
model, proposed evaluation method and metrics, and the various steps taken to evaluate your 
results. 
• Provide discussion on the key results and show the organisation of your report at the end 
of this section. 
 Related Work: 
• Provides a comprehensive review of related debiasing and fairness methods. 
• Discusses the advantages and disadvantages of the reviewed methods in the literature. 
• Demonstrates understanding of the existing literature. 
• Provide a summarised table of the existing works and show their contributions, evaluation 
method, strengths, and weaknesses of existing work. 
 
Proposed Method: 
• Explains the theoretical foundations of the proposed solution effectively. 
• Describes the details of debiasing methods clearly, including the objective function. 
• Presents the algorithmic representation of the proposed solution comprehensively. 
• Show schematic representation of your proposed approach. 
 
Experiments/Evaluations: 
• Provides a clear description of the experimental setup, including datasets, algorithm 
evaluations, and metrics. 
• Presents experimental results effectively, with appropriate figures. 
• Conducts a thorough analysis and comparison of baseline and proposed method. 
• Provides detailed insights on the results. 
 
Conclusion: 
• Effectively summarises the methods and results. 
• Provides valuable insights or suggestions for future work. 
• Provides the strengths and weaknesses of your work, and suggests future directions. 
 
References: 
• Lists all references, cited in the report. 
• Formats all references consistently and correctly. 
 
Appendix: 
• Provide instructions on how to run your code. 
• Provide additional/supporting figures or experimental evaluations. 
 
Note: Please follow the provided latex format for the report on Canvas. 
 
5. Submission guidelines 
1. Go to Canvas and upload the following files/folders compressed together as a zip file. 
● Report (a PDF file) 
The report should include all members' details (student IDs and names). 
● Code (a folder): 
○ Algorithm (a sub-folder): Your code (could be multiple files or a project) 
○ Input data (a sub-folder): Empty. Please do NOT include the dataset in the zip file 
as datasets are large. Please provide detailed instructions on how the datasets are used 
and how to download them. We will copy the dataset to the input folder when we 
test the code. 
2. A plagiarism checker will be used, both for code and report. 
3. A penalty of MINUS 20 percent (−20%) per day applies after the due date. The maximum 
delay is 5 (five) days; after that, assignments will not be accepted. 
 
Note: Only one student needs to submit the zip file, which must be named with the student ID 
numbers of all group members separated by underscores and must contain all the relevant files and 
the report, e.g., "xxxxxxxx_xxxxxxxx_xxxxxxxx.zip". Please write the name and email address of 
each member in the report. 
 
 
Example References: 
1. Bias and Fairness in Large Language Models: A Survey. Isabel O. Gallegos, Ryan A. 
Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, 
Ruiyi Zhang, Nesreen K. Ahmed. https://arxiv.org/abs/2309.00770 
2. A Survey on Fairness in Large Language Models. Yingji Li, Mengnan Du, Rui Song, Xin 
Wang, Ying Wang. https://arxiv.org/abs/2308.10149 
3. Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness. Felix Friedrich, 
Manuel Brack, Lukas Struppek, Dominik Hintersdorf, Patrick Schramowski, Sasha 
Luccioni, Kristian Kersting. https://arxiv.org/abs/2302.10893 
4. Stable Bias: Analyzing Societal Representations in Diffusion Models. Alexandra Sasha 
Luccioni, Christopher Akiki, Margaret Mitchell, Yacine Jernite. 
https://arxiv.org/abs/2303.11408 
 
 6. Marking Rubrics 
 
Coding (30 Marks): 
• The code will be run to check whether it works properly and 
produces the figures and all evaluations demonstrated in 
the report. 
 
Abstract (5 Marks): 
• Clearly introduces the topic scenario and its 
significance. (1 Mark) 
• Provides a concise summary of the proposed evaluation 
method. (2 Marks) 
• Provides the results from various evaluation metrics. (1 
Mark) 
• Concludes your contributions and discusses their 
applicability in real-world scenarios. (1 Mark) 
 
Introduction (10 Marks): 
• Clearly introduces the problem of bias in generative AI 
and its importance. (3 Marks) 
• Provides a clear and detailed overview of the proposed 
methods. (3 Marks) 
• Writes the contributions in detail, e.g., pre-processing, 
experimental setup, mathematical model, proposed 
evaluation method and metrics, and the various steps taken 
to evaluate your results. (2 Marks) 
• Provides discussion of the key results and shows the 
organisation of the report at the end of the section. (2 
Marks) 
 
Related Work (10 Marks): 
• Provides a comprehensive review of related debiasing 
and fairness methods. (3 Marks) 
• Discusses the advantages and disadvantages of the 
reviewed methods in the literature. (3 Marks) 
• Demonstrates understanding of the existing literature. (2 
Marks) 
• Provide a summarised table of the existing works and 
show their contributions, evaluation method, strengths, 
and weaknesses of existing work. (2 Marks) 
 
 
 
  
Proposed Method (20 Marks): 
• Explains the theoretical foundations of the proposed 
solution effectively. (7 Marks) 
• Describes the details of debiasing methods clearly, 
including the objective function. (4 Marks) 
• Presents the algorithmic representation of the proposed 
solution comprehensively. (7 Marks) 
• Shows schematic representation of proposed approach. 
(2 Marks) 
 
Experiments/Evaluations (20 Marks): 
• Provides a clear description of the experimental setup, 
including datasets, algorithm evaluations, and metrics. 
(7 Marks) 
• Presents experimental results effectively, with 
appropriate figures. (7 Marks) 
• Conducts a thorough analysis and comparison of 
baseline and proposed method. (4 Marks) 
• Provides detailed insights on the results. (4 Marks) 
 
Conclusion (5 Marks): 
• Effectively summarises the methods and results. (1 
Mark) 
• Provides valuable insights or suggestions for future 
work. (2 Marks) 
• Provides the strengths and weaknesses of your work, and 
suggests future directions. (2 Marks) 
 
References: 
• Lists all references, cited in the report. 
• Formats all references consistently and correctly. 
 
Overall Presentation (10 Marks): 
• Maintains a clear and logical structure throughout the 
report. (5 Marks) 
• Demonstrates excellent writing quality, including clarity 
and coherence. (3 Marks) 
• Adheres to formatting and citation guidelines 
consistently. (2 Marks) 
 
Total: 100 Marks 

