XJTLU Entrepreneur College (Taicang) Cover Sheet

Module code and Title: DTS304TC Machine Learning
School Title: School of AI and Advanced Computing
Assignment Title: Assessment Task 1
Submission Deadline: 23:59, 24th March (Sunday), 2024 (China Time, GMT+8)
Final Word Count: N/A

If you agree to let the university use your work anonymously for teaching and learning purposes, please type "yes" here.

I certify that I have read and understood the University's Policy for dealing with Plagiarism, Collusion and the Fabrication of Data (available on Learning Mall Online). With reference to this policy I certify that:
• My work does not contain any instances of plagiarism and/or collusion.
• My work does not contain any fabricated data.
By uploading my assignment onto Learning Mall Online, I formally declare that all of the above information is true to the best of my knowledge and belief.
Scoring – For Tutor Use

Student ID: ________
Learning Outcomes Achieved (F/P/M/D): A ___  B ___  C ___  (please modify as appropriate)
Final Score: ________

Stage of Marking / Marker Code:
1st Marker – red pen: ________
Moderation – green pen: IM Initials ________
  The original mark has been accepted by the moderator (please circle as appropriate): Y / N
  Data entry and score calculation have been checked by another tutor (please circle): Y / N
2nd Marker if needed – green pen: ________

For Academic Office Use
Date Received: ________    Days late: ________    Late Penalty: ________
Possible Academic Infringement (please tick as appropriate):
☐ Category A
☐ Category B
☐ Category C
☐ Category D
☐ Category E
Total Academic Infringement Penalty (A, B, C, D, E, please modify where necessary): _____________________
DTS304TC Machine Learning
Coursework – Assessment Task 1
Submission deadline: TBD
Percentage in final mark: 50%
Learning outcomes assessed: 
A. Demonstrate a solid understanding of the theoretical issues related to problems that machine learning 
algorithms try to address.
B. Demonstrate understanding of properties of existing ML algorithms and new ones.
C. Apply ML algorithms for specific problems.
Individual/Group: Individual
Length: The assessment has a total of 4 questions, which together carry 100 marks. The submitted file must be in PDF format.
Late policy: 5% of the total marks available for the assessment shall be deducted from the assessment mark for each working day after the submission date, up to a maximum of five working days.
Risks:
• Please read the coursework instructions and requirements carefully. Not following these instructions and requirements may result in loss of marks.
• The formal procedure for submitting coursework at XJTLU is strictly followed. The submission link on Learning Mall will be provided in due course. The submission timestamp on Learning Mall will be used to check for late submission.
__________________________________________________________________
Question 1: Coding Exercise - Heart Disease Classification with Machine Learning (50 Marks)
In this coding assessment, you are presented with the challenge of analyzing a dataset that contains 
patient demographics and health indicators to predict heart disease classifications. This entails solving a 
multi-class classification problem with five distinct categories, incorporating both categorical and 
numerical attributes.
Your initial task is to demonstrate proficiency in encoding categorical features and imputing missing 
values to prepare the dataset for training a basic classifier. Beyond these foundational techniques, you are 
invited to showcase your advanced skills. This may include hyperparameter tuning using sophisticated 
algorithms like the Asynchronous Successive Halving Algorithm (ASHA). You are also encouraged to 
implement strategies for outlier detection and handling, model ensembling, and addressing class 
imbalance to enhance your model's performance.
Moreover, an external test set without ground truth labels has been provided. Your classifier's 
performance will be evaluated based on this set, underscoring the importance of building a model with 
strong generalization capabilities.
The competencies you develop during this practical project are not only essential for successfully 
completing this assessment but are also highly valuable for your future pursuits in the field of data science.
Throughout this project, you are encouraged to utilize code that was covered during our Lab sessions, as well as other online resources, for assistance. Please ensure that you provide proper citations and links to any external resources you employ in your work. However, the use of Generative AI for content generation (such as ChatGPT) is not permitted for any assessed coursework in this module.
Project Steps:
a) Feature Preprocessing (8 Marks)
• You are required to demonstrate four key preprocessing steps: loading the dataset, encoding categorical features, handling missing values, and dividing the dataset into training, validation, and test sets.
• It is crucial to consistently apply the same feature preprocessing steps (including encoding categorical features, handling missing values, and any other additional preprocessing or custom modifications you implement) across the training, validation, internal testing, and the externally provided testing datasets. For efficient processing, you may want to consider utilizing the sklearn.pipeline and sklearn.preprocessing library functions; a minimal sketch follows below.
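The sketch below shows one way such a pipeline could look. It is an illustration only: the file name, the 'label' column, and the split ratios are assumptions, not part of the brief, and should be replaced with the actual dataset details.

# A minimal preprocessing sketch with sklearn.pipeline / sklearn.compose.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("dts304tc_a1_heart_disease_dataset.csv")  # hypothetical file name
X, y = df.drop(columns=["label"]), df["label"]             # hypothetical label column

categorical = X.select_dtypes(include="object").columns
numerical = X.select_dtypes(exclude="object").columns

preprocess = ColumnTransformer([
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical),
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numerical),
])

# A 60/20/20 train/validation/test split, stratified by class.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)

# Fit on the training split only, then apply the SAME fitted transformer
# to validation, internal test, and the external test features.
X_train_t = preprocess.fit_transform(X_train)
X_val_t = preprocess.transform(X_val)
X_test_t = preprocess.transform(X_test)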
b) Training Classifiers (10 Marks)
• Train a logistic regression classifier with parameters tuned using grid search, and a random forest classifier with parameters tuned using the Asynchronous Successive Halving Algorithm (ASHA) with the ray[tune] library. You should optimize the model's AUC score during the hyperparameter tuning process.
• You should aim to optimize a composite score, which is the average of the classification accuracy and the macro-averaged F1 score. This objective encourages a balance between achieving high overall accuracy and ensuring that the classifier performs well across all classes in a balanced manner, which is especially important in multi-class classification scenarios where class imbalance might be a concern.
To clarify, your optimization goal is to maximize a composite accuracy metric defined as follows:
accuracy = 0.5 * (f1_score(gt, pred, average='macro') + accuracy_score(gt, pred))
In this formula, f1_score and accuracy_score refer to functions provided by the scikit-learn library, with f1_score being calculated with the 'macro' average to treat all classes equally.
• Ensure that you perform model adjustments, including hyperparameter tuning, on the validation set rather than the testing set to promote the best generalization of your model.
• We have included an illustrative example of how to implement ASHA using the ray[tune] library. Please refer to the notebook DTS304TC_ASHA_with_Ray_Tune_Example.ipynb located in our project data folder for details.
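The following sketch shows how the composite metric could be wrapped as a scikit-learn scorer and used in a logistic regression grid search. The parameter grid values are illustrative assumptions, not prescribed settings, and X_train_t/y_train refer to the preprocessing sketch above.

# A sketch of the composite scorer plus grid search (illustrative values).
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, make_scorer
from sklearn.model_selection import GridSearchCV

def composite_score(gt, pred):
    # Average of macro-averaged F1 and plain accuracy, as defined in the brief.
    return 0.5 * (f1_score(gt, pred, average='macro') + accuracy_score(gt, pred))

scorer = make_scorer(composite_score)

grid = GridSearchCV(
    LogisticRegression(max_iter=5000),
    param_grid={"C": [0.01, 0.1, 1, 10]},  # illustrative search space
    scoring=scorer,
    cv=5,
)
grid.fit(X_train_t, y_train)
print(grid.best_params_, grid.best_score_)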
c) Additional Tweaking and External Test Set Benchmark (19 Marks)
• You are encouraged to explore a variety of advanced techniques to improve your model's predictive power:
1. Utilizing different classifiers, for example, XGBoost.
2. Implementing methods for outlier detection and treatment.
3. Creating model ensembles with varied validation splits.
4. Addressing issues of class imbalance.
5. Applying feature engineering strategies, such as creating composite attributes.
6. Implementing alternative validation splitting strategies, like cross-validation or stratified sampling, to enhance model tuning.
7. Additional innovative and valid methods not previously discussed.
You will be awarded 3 marks for successfully applying any one of these methods. Should you incorporate two or more of the aforementioned techniques, a total of 6 marks will be awarded.
Please include code comments that explain how you have implemented these additional techniques. Your code and accompanying commentary should explicitly state the rationale behind the selection of these supplementary strategies, as well as the underlying principles guiding your implementation. Moreover, it should detail any changes in performance, including improvements, if any, resulting from the application of these strategies. An additional 4 marks will be awarded for a clear and comprehensive explanation. To facilitate a streamlined review and grading process, please ensure that your comments and relevant code are placed within a separate code block in your Jupyter notebook, in a manner that is readily accessible for our evaluation. A sketch of one such technique follows below.
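As one hedged illustration of items 4 and 6 from the list above, the sketch below combines class weighting with stratified cross-validation; it reuses the scorer from the earlier sketch and is not the required approach.

# One possible combination of class weighting (item 4) and stratified
# cross-validation (item 6). Illustrative only.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

clf = RandomForestClassifier(class_weight="balanced", random_state=42)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(clf, X_train_t, y_train, scoring=scorer, cv=cv)
print(f"composite score: {scores.mean():.3f} +/- {scores.std():.3f}")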
• Additionally, utilize the entire dataset and the previously determined optimal hyperparameters and classification pipeline to retrain your top-performing classifier. Then, apply this model to the features in 'dts304tc_a1_heart_disease_dataset_external_test.csv', which lacks true labels, to produce a set of predictive probability scores. Save these scores in a table with two columns: the first column for patient IDs and the second for the output classification labels. Export this table to a file named external_test_results_[your_student_id].csv. Submit this file for evaluation. In the external evaluation conducted by us, your scores will be benchmarked against the performance of models developed by your classmates. You will receive four marks for successfully completing the prescribed classifier retraining and submission process. Additionally, your classifier's benchmark ranking, based on its performance relative to models developed by your peers, will be assigned five marks, contingent upon your standing in the ranking. A sketch of the retrain-and-export step follows below.
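A minimal sketch of this step is given below; best_pipeline stands for your tuned preprocessing-plus-classifier pipeline, and the 'id' column name is a hypothetical placeholder for the patient ID column.

# Retrain on the full labelled dataset, then export external-test predictions.
import pandas as pd

best_pipeline.fit(X, y)  # 'best_pipeline' is your tuned pipeline (hypothetical name)

external = pd.read_csv("dts304tc_a1_heart_disease_dataset_external_test.csv")
pred = best_pipeline.predict(external.drop(columns=["id"]))  # 'id' is hypothetical

out = pd.DataFrame({"id": external["id"], "label": pred})
out.to_csv("external_test_results_[your_student_id].csv", index=False)  # use your real ID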
d) Result Analysis (8 Marks)
• For your best-performing model, compute critical performance metrics such as precision, recall, specificity, and the F1 score for each class. Additionally, generate the confusion matrix based on your internal test set. Ensure that the code for calculating these performance metrics, as well as the resulting statistics, is clearly displayed within your Jupyter notebook. For ease of review, position these elements towards the end of your notebook to provide direct access to your outcomes.
• Conduct a feature importance analysis by utilizing the feature importance scoring capabilities provided by your chosen classifier. What are the top three most important features for classifying this medical condition? If your best-performing model does not offer feature importance scoring, you may utilize an alternative model for this analysis. Present the results of the feature importance analysis within your Jupyter notebook using print statements or code comments. Place the relevant code and findings towards the end of the notebook to facilitate easy review of your results. A sketch of these computations follows below.
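A sketch of these computations is shown below. scikit-learn reports per-class precision, recall, and F1 directly; specificity has no dedicated function, so it is derived from the confusion matrix. The pipeline and step names are hypothetical.

# Per-class metrics, confusion matrix, and feature importance (sketch).
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

pred = best_pipeline.predict(X_test)        # internal test set
print(classification_report(y_test, pred))  # per-class precision/recall/F1

cm = confusion_matrix(y_test, pred)
print(cm)

# Per-class specificity = TN / (TN + FP), treating each class one-vs-rest.
for k in range(cm.shape[0]):
    tn = cm.sum() - cm[k, :].sum() - cm[:, k].sum() + cm[k, k]
    fp = cm[:, k].sum() - cm[k, k]
    print(f"class {k} specificity: {tn / (tn + fp):.3f}")

# Tree-based models expose feature_importances_; 'model' is a hypothetical step name.
importances = best_pipeline.named_steps["model"].feature_importances_
top3 = np.argsort(importances)[::-1][:3]
print("top-3 feature indices:", top3)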
e) Project Submission Instructions (5 Marks - Important, Please Read!)
• Submit your Jupyter notebook in both .ipynb and .pdf formats. Ensure that in the PDF version, all model execution results, as well as your code annotations and analyses, are clearly visible. It is critical to maintain a well-organized structure in your Jupyter notebook, complemented by clear commentary using clean code practices. Your submission's readability and navigability are crucial; we cannot assign a score if we cannot understand or locate your code and results. You will receive 5 points for clarity in code structure and quality of code comments.
To maintain the readability of your code when converting your Jupyter notebook to a PDF, ensure that no single line of code extends beyond the printable page margin, thus preventing line truncation. If necessary, utilize line continuation characters or implicit line continuation within parentheses, brackets, or braces in Python to break up longer lines of code (see the short example below). After converting to PDF, thoroughly review the document to confirm that all code is displayed completely and that no line truncation has occurred.
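For instance, a long call can be wrapped inside its own parentheses (the names here are illustrative):

# Implicit continuation inside parentheses keeps each line within the margin.
score = composite_score(
    y_validation,
    classifier.predict(X_validation),
)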
If you have written supplementary code that is not contained within the Jupyter notebook, you 
must submit that as well to guarantee that your Jupyter notebook functions correctly. 
Nevertheless, our primary basis for grading will be the PDF version of your Jupyter notebook. 
Please ensure that all necessary code is included so that the notebook can be executed seamlessly,
and your results are reproducible.
• Submit the results of your external test as a file named external_test_[your_student_id].csv. This CSV file must be correctly formatted: the first column must contain patient IDs, and the second column must list your predicted classification labels. Any deviation from this format may result in the file being unprocessable by our grading software, and therefore unable to be scored.
Project Material Access Instructions
To obtain the complete set of materials for our project, including the dataset, code, and Jupyter notebook 
files, please use the links provided below:
• OneDrive Link: https://1drv.ms/u/s!AoRfWDkanfAYnvcrXTKMGhNzRztf0g?e=JDwmbR
• Baidu Drive Link: https://pan.baidu.com/s/1AXSRYO6ujTu1iNspdkIuUA?pwd=h4js
Download password: h4js
When prompted, use the following password to unlock the zip file: DTS304TC (please note that it is case-sensitive and should be entered in all capital letters).
Additionally, for ease of reference, the project's Jupyter notebooks have been appended to the end of this 
document.
Please note that the primary library dependencies for this project include pandas, scikit-learn, xgboost, 
and the ray library with the tune module enabled (ray[tune]).
Question 2: Ensemble Learning (18 marks):
Students are required not to use AI models, such as ChatGPT, for assistance with this question. 
You should give clear calculation steps and explain the relevant concepts using your own words.
(a) Majority Voting with Independent Classifiers (8 Marks)
1. Given individual classifiers C1, C2, and C3 with statistically independent error rates of 40%, 35%, 
and 35% respectively, calculate the accuracy of the majority voting ensemble composed of classifiers 
C1, C2, and C3. Provide the steps you use to determine the ensemble's accuracy, assuming the 
classifiers' decisions are statistically independent. (5 Marks)
(Hint: Calculate the ensemble's accuracy by summing the probability that all classifiers are correct with the probabilities of exactly two classifiers being correct while the third is incorrect.)
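Stated formally (a restatement of the hint, with $p_i = 1 - \text{error rate}_i$ denoting the accuracy of classifier $C_i$, so $p_1 = 0.60$ and $p_2 = p_3 = 0.65$):

$$P(\text{ensemble correct}) = p_1 p_2 p_3 + (1 - p_1) p_2 p_3 + p_1 (1 - p_2) p_3 + p_1 p_2 (1 - p_3)$$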
2. Point out the similarities and differences between the majority voting ensemble method and Random 
Forest, emphasizing the strategies employed by Random Forest to attain higher accuracy. (3 Marks)
(b) AdaBoost Algorithm (10 Marks)
Consider the process of creating an ensemble of decision stumps, referred to as H, through the standard AdaBoost method.
The diagram above shows several two-dimensional labeled points along with the initial decision stump we've chosen. This stump outputs binary values and makes its decisions based on a single variable (the cut-off). In the diagram, there is a small arrow perpendicular to the classifier's boundary that shows where the classifier predicts +1. Initially, every point has the same weight.
1. Identify all the points in the above diagram that will have their weights increased after adding the 
initial decision stump (adjustments to AdaBoost sample weights after the initial stump is used) (2 
marks)
2. On the same diagram, draw another decision stump that could be considered in the next round of 
boosting. Include the boundary where it makes its decision and indicate which side will result in a +1 
prediction. (2 marks)
3. Will the second basic classifier likely get a larger importance score in the ensemble compared to the first one? In other words, will α₂ > α₁? Only a short explanation is needed (calculations are not required). (3 marks)
4. Suppose you have trained two models on the same set of data: one with AdaBoost and another with a 
Random Forest approach. The AdaBoost model does better on the training data than the Random 
Forest model. However, when tested on new, unseen data, the Random Forest model performs better. 
What could explain this difference in performance? What can be done to make the AdaBoost model 
perform better? (3 marks)
Question 3: K-Means and GMM Clustering (7 marks)
Students are required not to use AI models, such as ChatGPT, for assistance with this question. 
You should give clear analysis steps and explain the relevant concepts using your own words.
1. Reflect on the provided data for training and analyze the outcomes of K-Means and GMM techniques. 
Can we anticipate identical centroids from these clustering methods? Please state your reasoning. (3
marks)
2. Determine which of the given cluster assignments could be a result of applying K-means clustering, 
and which could originate from GMM clustering, providing an explanation for your reasoning. (4 
marks)
Question 4 - Reinforcement Learning (25 marks)
Students are required not to use AI models, such as ChatGPT, for assistance with this question. 
You should give clear analysis steps and explain the relevant concepts using your own words.
1. Describe the five key components of reinforcement learning using the graph below; explain each of the components and their relationships. (10 marks)
import gym
import numpy as np

env = gym.make('CartPole-v1')
state_space_size = env.observation_space.shape[0]
action_space_size = env.action_space.n
q_table = np.zeros((state_space_size, action_space_size))

learning_rate = 0.1
discount_factor = 0.99
epsilon = 0.1
num_episodes = 1000

# Q-learning algorithm
for episode in range(num_episodes):
    state = env.reset()
    done = False
    while not done:
        # Exploration-exploitation strategy
        if np.random.uniform(0, 1) < epsilon:
            action = env.action_space.sample()  # Explore
        else:
            action = np.argmax(q_table[state, :])  # Exploit
        # Take action and observe the next state and reward
        next_state, reward, done, _ = env.step(action)
        # Update Q-value using the Q-learning formula
        q_table[state, action] = (1 - learning_rate) * q_table[state, action] + \
            learning_rate * (reward + discount_factor * np.max(q_table[next_state, :]))
        # Move to the next state
        state = next_state

# Testing the learned policy
total_reward = 0
state = env.reset()
while True:
    action = np.argmax(q_table[state, :])
    state, reward, done, _ = env.step(action)
    total_reward += reward
    if done:
        break
print(f"Total reward: {total_reward}")
2. The questions below refer to the above code example:
a) What is the significance of the exploration-exploitation strategy in reinforcement learning, and how is 
it implemented in the code? (5 marks)
b) How would you change this code to use deep learning? You don’t need to write the code, only 
describe the major changes you would make and explain the advantage of deep learning approaches 
over the Q-table. (5 marks)
c) Describe the current reward function for CartPole. Design a reward function of your own and explain your reward function. (5 marks)

DTS304TC_ASHA_with_Ray_Tune_Example.ipynb

# To student: this is a code snapshot showing how to use the ray[tune] framework
# to tune a random forest classifier with the ASHA scheduler.
# You are responsible for completing the code, debugging it, and making sure it works.
import sklearn.datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn import metrics
import os
from ray.tune.schedulers import ASHAScheduler
from sklearn.model_selection import train_test_split
from ray import tune
from ray import train

def train_rf(config: dict, data=None):
    # Initialize the RandomForestClassifier with the configuration
    classifier = RandomForestClassifier(
        n_estimators=config["n_estimators"],
        max_depth=config["max_depth"],
        min_samples_split=config["min_samples_split"],
        min_samples_leaf=config["min_samples_leaf"],
        class_weight="balanced",
        random_state=42,
    )

    # Fit the RandomForestClassifier on the training data
    X_train = data[0]
    y_train = data[1]
    X_validation = data[2]
    y_validation = data[3]
    # To be filled: train your random forest classifier here
    # To be filled: evaluate the classifier on the validation set here and
    # obtain its validation accuracy

    # Send the accuracy to Ray Tune
    train.report({'accuracy': accuracy})

# Note that X_train, y_train, X_validation, y_validation are your training and
# validation data.
tunable_function = tune.with_parameters(
    train_rf, data=[X_train, y_train, X_validation, y_validation])

def tune_random_forest(smoke_test=False):
    # Define the search space for hyperparameters
    search_space = {
        "n_estimators": ...,       # set up your search space here
        "max_depth": ...,          # set up your search space here
        "min_samples_split": ...,  # set up your search space here
        "min_samples_leaf": ...,   # set up your search space here
    }

    # Define the scheduler for early stopping
    scheduler = ASHAScheduler(
        max_t=...,             # set up your ASHA parameter here
        grace_period=...,      # set up your ASHA parameter here
        reduction_factor=...,  # set up your ASHA parameter here
    )

    # Set up the tuner
    tuner = tune.Tuner(
        tunable_function,
        tune_config=tune.TuneConfig(
            metric="accuracy",
            mode="max",
            scheduler=scheduler,
            num_samples=1 if smoke_test else 200,
        ),
        param_space=search_space,
    )

    # Execute the tuning process
    results = tuner.fit()

    return results

# Run the tuning function
best_results = tune_random_forest(smoke_test=False)
best_trial = best_results.get_best_result(metric="accuracy", mode="max")

# Get the best trial's hyperparameters
best_params = best_trial.config

# Print the best hyperparameters
print("Best hyperparameters found were: ", best_params)

# Initialize a new RandomForestClassifier with the best hyperparameters
best_rf = RandomForestClassifier(**best_params, random_state=42)

# The remaining part of your code continues ....
