Coursework for ACS61012 “Machine Vision”
The purpose of the lab sessions is to give you both theoretical and practical skills in machine
vision, especially in image enhancement, image understanding and video processing.
Machine vision is essential for a number of areas: autonomous systems, including robotics and
Unmanned Aerial Vehicles (UAVs), intelligent transportation systems, medical diagnostics,
surveillance, and augmented and virtual reality systems.
The first labs focus on performing operations on images such as reading, writing, calculating
image histograms, flipping images and extracting the important colour and edge image
features. You will become familiar with how to use these features for the purposes of object
segmentation (separation of static and moving objects) and for the subsequent high-level tasks of
stereo vision, object detection, classification, tracking and behaviour analysis. These are
inherent steps of semi-supervised and unsupervised systems, in which the involvement of
human operators is reduced to a minimum or excluded altogether.
Required for Each Subtask
Task 1: Introduction to Machine Vision
For the report from Task 1, you need to present results with:
From Lab Session 1 – Part I
● The Red, Green, Blue (RGB) image histogram of your own picture and an analysis of the
histogram. Several pictures are provided, if you wish to use one of them; alternatively,
you could work with a picture of your choice. The original picture needs to be shown
as well. Please discuss the results. For instance, what are the differences between the
histograms? What do we learn from the visualised red, green and blue components
of the image histogram?
Files: Lab 1 - Part I - Introduction to Images and Videos.zip and Images.zip. You
can work with one of the provided images from Images.zip or with your own image.
From Lab Session 1 – Part II
● Results with different edge detection algorithms, e.g. Sobel and Prewitt, with comments on
their accuracy for different parameters (especially the threshold and different types of
noise). Include the visualisation and your conclusions about static object
segmentation using edge detection (steps 9-11 with the Sobel, Canny and Prewitt
operators) in your report. Visualise the results and draw conclusions.
 [8 marks equally distributed between part I and part II]
Task 2: Optical Flow Estimation Algorithm
For the report, you need to:
● Find corner points and apply the optical flow estimation algorithm.
(use file Lab 2.zip – image Gingerbread Man). Present results for the ‘Gingerbread
Man’ tasks and visualise the results.
 [4 marks]
● Track a single point with the optical flow approach (file: Lab 2.zip – the red square
image). Visualise, on the last frame, the estimated trajectory and the ground truth track
for the ‘Red Square’ task.
● Compute and visualise the root mean square error of the trajectory estimated over
the video frames by the optical flow algorithm. Compare the estimates with the exact
coordinates given in the file called groundtruth. You need to include the results for
one corner only. Give the equation for the root mean square error. Analyse the
results and draw conclusions about the accuracy of the method based on the root
mean square error.
 [8 marks]
Task 3: Automatic Detection of Moving Objects in a Sequence of Video Frames
You are designing algorithms for automatic vehicular traffic surveillance. As part of this
task, you need to apply two types of approaches: the basic frame differencing approach
and the Gaussian mixture approach to detect moving objects.
Part I: with the frame differencing approach
● Apply the frame differencing approach (Lab 3.zip file)
For the report, you need to present:
● Image results of the accomplished tasks
● Analyse the algorithm’s performance when you vary the detection threshold.
[5 marks]
Part II: with the Gaussian mixture approach
For the report, you need to present:
● Results for the algorithm’s performance when you vary parameters such as the number
of Gaussian components, the initialisation parameters and the threshold for decision
making
● Detection results for the moving objects; show snapshots of images.
● An analysis of all results – how do changes of the threshold and of the number of Gaussian
components affect the detection of the moving objects?
[5 marks]
Task 4: Robot Treasure Hunting
A robot is given a task to search and find “treasures” in imagery data. There are three
tasks: easy, medium and difficult. The starting point of the robot search is where the red
arrow is. For the medium case the blue fish is the only treasure; for the difficult case the
clover and the sun are “treasures” that need to be found. Ideally, one algorithm needs to be able to find the
“treasures” from all images, although a solution with separate algorithms is acceptable.
For Task 4, in the report, you need to present results with:
● The three different images (easy, medium and difficult) showing the path of finding
“the treasure”.
● Include the intermediate steps of your results in your report, e.g. the binarisation
of the images and the value of the threshold that you found, or any other algorithm
that you propose for the solution of the tasks.
● Explain your solution, present your algorithms and the related MATLAB code.
● Include a brief description of the main idea of your functions in your report and the
actual code of the functions in an Appendix of your report.
In the guidance for the labs, one possible solution is discussed, but others exist.
Creativity is welcome in this task, and different solutions are encouraged.
Here 8 marks are given for the easy task, 10 for the medium task and 12 for the most
difficult task.
[30 marks]
Task 5. Image Classification with a Convolutional Neural Network
1. Provide your classification results with the CNN, demonstrate its accuracy and
analyse the results in your report.
[2 marks]
2. Calculate the Precision, Recall, and F1 score functions, characterising the CNN
performance further (standard definitions are given, for reference, after this task list).
[6 marks]
3. Improve the CNN classification results. Please explain how you have achieved the
improvements.
[12 marks]
4. Discuss ethical aspects of Computer Vision tasks such as image classification,
detection and segmentation. Consider ethics in its broad aspects – what are the positives
when ethics is considered? What challenges does ethics pose, and how could they
be reduced and mitigated? In your answer you need to include aspects of Equality,
Diversity and Inclusion (EDI).
[10 marks]
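For reference, with TP, FP and FN denoting the numbers of true positives, false positives and false negatives for a class, the standard definitions behind Task 5.2 can be written as:

$$ \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} $$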
Finally, the quality of writing and presentation style are assessed. These include the
clarity, conciseness, structure, logical flow, figures, tables, and the use of references.
[10 marks]
Guidance on the Course Work Submission
You need to submit your report and code that you have written to accomplish the tasks.
There are two separate submission links on Blackboard.
Report and Code Submission
There are two submission links on Blackboard: 1) for your course work report in a pdf
format and 2) for the requested code in a zipped file.
A Well-written Report Contains:
● A title page, including your ID number, course name, etc., followed by a content page.
● The main part: description of the tasks and how they are performed, including results
from all subtasks. For instance: “This report presents results on reading and writing
images in MATLAB. Next, the study of different edge detection algorithms is presented
and their sensitivity to different parameters…” You are requested to present in
Appendices the MATLAB code that you have written to obtain these results. A very
important part of your report is the analysis of the results. For instance, what does the
image histogram tell you? How can you characterise the results? Are they accurate? Is
there a lot of noise?
● Conclusions describe briefly what has been done, with a summary of the main
results.
● Appendix: Present and describe briefly in an Appendix the code only for tasks 2-
5. Add comments to your code to make it understandable. Provide the full code
as one compressed file, in the separate submission link given for it.
● Cite all references and materials used. Adding references demonstrates additional
independent study. Write in your own style and words to minimise and avoid similarities.
Every student needs to write their own independent report.
● Please name the files with your report and code for the submission on Blackboard by
adding your ID card registration number, e.g. CW_Report_1101133888 and
CW_Code_1101133888.
The advisable maximum number of words is 4000.
Submission Deadline: Week 10 of the spring semester, Sunday midnight
Guidance to Accomplish the Tasks
Lab Session 1 - Part I: Introduction to Image Processing
In this lab you will learn how to perform basic operations on images of different types, e.g.
how to read them, convert them from one format to another, calculate image histograms and
analyse them.
Background Knowledge
A digital image is composed of pixels which can be thought of as small dots on the screen.
We know that all numeric calculations in MATLAB are performed using double (64-bit)
floating-point numbers, so this is also a frequent data class encountered in image
processing. Some of the most common formats used in image processing are presented in
Tables 1 and 2 given below.
All MATLAB functions work with double arrays. To reduce memory requirements, MATLAB
supports storing image data in arrays of class uint8 and uint16, in which the data is
stored as 8-bit or 16-bit unsigned integers. Such arrays require, respectively, one-eighth or
one-quarter as much memory as double arrays.
Table 1. Data classes and their ranges
Most mathematical operations are not supported for the types uint8 and uint16. It is
therefore required to convert to double for operations, and back to uint8/uint16 for storage,
display and printing.
Table 2. Numeric formats used in image processing
Image Types
I. Intensity Image (Grey Scale Image)
This form represents an image as a matrix where every element has a value corresponding
to how bright/ dark the pixel at the corresponding position should be coloured. There are
two ways to represent the brightness of the pixel:
1. The double class (or data type) format. This assigns a floating-point number (“a number
with decimals”) in the range of approximately -10^308 to +10^308 to each pixel. Values of the scaled class
double are in the range [0, 1]. The value 0 corresponds to black and the value 1
corresponds to white.
2. The other class, uint8, assigns an integer between 0 and 255 to represent the intensity
of a pixel. The value 0 corresponds to black and 255 to white. The class uint8 only
requires roughly 1/8 of the storage of the class double. However, many
mathematical functions can only be applied to the double class.
II. Binary Image
The binary image format also stores an image as a matrix, but colours each pixel black
or white (and nothing in between): 0 is for black and 1 is for white.
III. Indexed Image
This is a practical way of representing colour images. An indexed image stores an image as
two arrays. The first array has the same size as the image, with one number for each pixel.
The second array (matrix) is called the colour map, and its size may differ from the image
size. The numbers in the first matrix are indices into the colour map matrix.
IV. RGB Image
This format represents an image with three matrices whose sizes match the image format.
Each matrix corresponds to one of the colours red, green or blue and specifies
how much of that colour a certain pixel should use. Colours are always
represented with non-negative numbers.
Guidance on Performing Lab Session 1 – Part I
Demos in MATLAB
>> demo MATLAB % Opens a window from which you can select a demo for different tools
Workspace and saving results
To see the variables in the workspace: who, whos
To clear the variables in the workspace: clear
To save the variables in the workspace: save name_of_a_file.mat
To load the data/ image from a file: load name_of_a_file.mat
Examples of Reading images in MATLAB
>> clear all % Clears the workspace in MATLAB
>> I = imread('Dog.jpg'); %
>> size(I) % Gives the size of the image
>> imshow(I); % Visualises the image
>> Ig = rgb2gray(I); % Converts a colour image into a grey level image
>> imshow(Ig)
1. The first line clears all variables from the workspace.
2. The second line reads the image file into a 3-dimensional array (x, y, colour). MATLAB
can read many image file formats, so you do not have to worry about the details.
3. The third line gives information about the size of the image.
4. The fourth line visualises the colour image.
5. The fifth line converts an RGB image into a grey-level image. This is not necessary if the image
is already a grey-level image.
6. The last line visualises the grey image.
Writing Images in MATLAB
Images are written to disk using the function imwrite, which has the following basic syntax:
imwrite(I, 'filename')
The string in filename must include a recognised file format extension (tiff, jpeg, gif, bmp,
png or xwd).
>> imwrite(I, 'Dog1.jpg'); % Writes the image I to the file Dog1.jpg
Next, you can check the information about the graphics file by using imfinfo.
Type: imfinfo Dog.jpg
Use the command whos to inspect the variables in the workspace and ls to list the files in the current folder.
Changing the Image Brightness
Change the brightness of your image by adding a constant value to all pixel values or,
respectively, by subtracting a constant value from all pixel values. For instance:
>> I_b = I - 100; % subtracting a constant makes the image darker
>> figure, imshow(I_b)
>> I_s = I + 100; % adding a constant makes the image brighter
>> figure, imshow(I_s)
Flipping an Image
Apply the provided flipLtRt.m function to your image to flip it left to right. Visualise the results.
Detection of an Area of a Predefined Colour
Change the colour of the white pixels to yellow in the image
'duckMallardDrake.jpg':
% Color the duck yellow!
im= imread('duckMallardDrake.jpg');
imshow(im);
[nr,nc,np]= size(im);
newIm= zeros(nr,nc,np);
newIm= uint8(newIm);
for r= 1:nr
 for c= 1:nc
 if ( im(r,c,1)>180 && im(r,c,2)>180 && im(r,c,3)>180 )
 % white feather of the duck; now change it to yellow
 newIm(r,c,1)= 225;
 newIm(r,c,2)= 225;
 newIm(r,c,3)= 0;
 else % the rest of the picture; no change
 for p= 1:np
 newIm(r,c,p)= im(r,c,p);
 end
 end
 end
end
figure
imshow(newIm)
Another example of finding an area of a predefined colour: find the indexes of the pixels with the yellow
colour in the image ‘Two_colour.jpg’.
im = imread('Two_colour.jpg'); % read the image
imshow(im);
% extract the RGB channels separately
red_channel = im(:, :, 1);
green_channel = im(:, :, 2);
blue_channel = im(:, :, 3);
% label pixels of yellow colour
yellow_map = green_channel > 150 & red_channel > 150 & blue_channel < 50;
% extract the pixel indexes
[i_yellow, j_yellow] = find(yellow_map > 0);
Visualise the results. Note that plot and scatter commands work with spatial coordinates.
% visualise the results
figure;
imshow(im); % plot the image
hold on;
scatter(j_yellow, i_yellow, 5, 'filled') % highlight the yellow pixels
Conversion between Different Formats
1. Select your own image.
2. Read a colour image (imread command). Convert your RGB colour image to grey and
then to HSV format (rgb2gray and rgb2hsv commands, respectively).
3. Convert your RGB image into a binary format (im2bw command) and visualise the
result. Use at least 3 more operations converting images from one format to another
(a short sketch is given at the end of this subsection). This part is not required for the report, as mentioned in the assessment criteria section.
The conversion to a binary image is called binarisation. Binarisation is based on applying
a threshold to the image intensity, and the process is called thresholding. The output binary
image has the value 0 (black) for all pixels in the input image with luminance less than
the threshold level and 1 (white) for all other pixels.
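A minimal sketch of a few such conversions (using the Dog.jpg image mentioned earlier; any RGB image works):

I = imread('Dog.jpg');   % RGB image, class uint8
Ig = rgb2gray(I);        % RGB -> greyscale
Ihsv = rgb2hsv(I);       % RGB -> HSV, class double
Ibw = im2bw(Ig, 0.5);    % greyscale -> binary, normalised threshold 0.5
Id = im2double(I);       % uint8 -> scaled double in [0, 1]
Iu = im2uint8(Id);       % scaled double -> uint8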
Understanding Image Histogram
1. Experiment with a greyscale image: calculate the histogram and visualise it. There are
various ways to plot an image histogram: imhist, bar, stem and plot.
Show results with them. What could you say about the dominating colours of the objects/
images from the histograms?
Example Code:
clear all
I = imread('image.jpg');
Im_grey = rgb2gray(I);
figure, imhist(Im_grey);
xlabel('Grey level (256 bins by default for a greyscale image)')
ylabel('Histogram counts')
You can use the bar function to plot the image histogram, in the following way:
h = imhist(Im_grey);
h1 = h(1:10:256);
horz = 1:10:256;
figure, bar(horz,h1)
See the difference compared with what the plot() function gives you:
figure, plot(h)
2. Calculate and visualise the histogram of an RGB image
In MATLAB you can use the built-in hist function on only one channel at a time. One way to display
the histogram of an image is to convert it into greyscale format with rgb2gray and apply
the imhist function. Another approach is to work with the RGB image directly, in the following way.
First, we convert the image into double format and calculate the histogram for each channel:
r= double(I(:,:,1));
g = double(I(:,:,2));
b = double(I(:,:,3));
figure, hist(r(:),124)
title('Histogram of the red colour')
figure, hist(g(:),124)
title('Histogram of the green colour')
figure, hist(b(:),124)
title('Histogram of the blue colour')
Now repeat the binarisation process after choosing the threshold value
appropriately, based on the histogram that you observe. This threshold value must be
normalised to the range [0, 1] to be used with the function im2bw.
Example: if we choose the middle value 128 of the full range [0, 255] as the threshold, then
we can perform binarisation of the image I with the function:
ImBinary = im2bw(I, 128/255);
Vary the threshold and comment on the results.
3. Calculate and visualise the histogram of an HSV image
For an HSV histogram you can use the same recommendation as for an RGB histogram,
given above. Another way of calculating the histogram in the HSV space is given below.
% Read and display the original image (the file name is an example; use your own).
rgbImage = imread('image.jpg');
subplot(2, 4, 1);
imshow(rgbImage);
title('Original RGB image');
% Convert to HSV color space
hsvimage = rgb2hsv(rgbImage);
% Extract out the individual channels.
hueImage = hsvimage(:,:,1);
satImage = hsvimage(:,:,2);
valueImage = hsvimage(:,:,3);
% Display the individual channels.
subplot(2, 4, 2);
imshow(hueImage, [ ]);
title('Hue Image');
subplot(2, 4, 3);
imshow(satImage, [ ]);
title('Saturation Image');
subplot(2, 4, 4);
imshow(valueImage, [ ]);
title('Value Image');
% Take histograms
[hCount, hValues] = imhist(hueImage(:), 18);
[sCount, sValues] = imhist(satImage(:), 3);
[vCount, vValues] = imhist(valueImage(:), 3);
% Plot histograms.
subplot(2, 4, 5);
bar(hValues, hCount);
title('Hue Histogram');
subplot(2, 4, 6);
bar(sValues, sCount);
title('Saturation Histogram');
subplot(2, 4, 7);
bar(vValues, vCount);
title('Value Histogram');
% Alert the user that we're done.
message = sprintf('Done processing this image.\nMaximize and check out the figure window.');
msgbox(message);
Include the results of understanding the RGB image histogram in your report.
Understanding image histogram – the difference between one-colour and two-colour
images
An image histogram is a good tool for image understanding. For example, image histograms
can be used to distinguish a one-colour image (or an object in the image) from a two-colour
image (or an object in the image):
1. Read ‘One_colour.jpg’ and ‘Two_colour.jpg’ (with imread);
2. Convert both images into the greyscale format (with rgb2gray);
3. Calculate and visualise the histograms for both images (with imhist);
What are the differences between these colour histograms? What do we learn from the
visualised red, green and blue components of the image histogram?
Lab Session 1 - Part II: Edge Detection and Segmentation
of Static Objects
In this practical session, you will continue to study basic image processing techniques. You
will enhance the contrast of images and perform different operations on them. You will learn
how to model different types of noise in images and how to remove the noise from an image.
You will also learn approaches for edge detection and static objects segmentation.
Guidance on Performing Lab Session 1 – Part II
1. Read a preliminary chosen image ‘Image.gif’ (with imread);
Contrast Enhancement
2. Compute an image histogram for the image (imhist). Visualise the results. Analysing
the histogram, think about the best way of enhancing the image; recall the methods
from the lectures;
3. Apply the histogram equalisation operation to the image (histeq). Visualise the results.
Compute an image histogram for the corrected image and visualise it. Compare it
with the original histogram. Does this method of enhancement actually enhance the image
quality?
4. Apply gamma correction of the histogram to the image (imadjust). Visualise the
results. Experiment with different values of gamma and find the optimal one. Compute
the image histogram of the corrected image. Visualise the results. Compare the
histogram and the image with the original ones and with the results of the histogram
equalisation. Which method of enhancement performs better? A sketch of steps 3-4 is given below.
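A minimal sketch of steps 3-4 (the gamma value is an assumption to tune; the indexed GIF is converted to greyscale first):

[I, map] = imread('Image.gif');
if ~isempty(map), I = im2uint8(ind2rgb(I, map)); end % indexed -> RGB
if size(I, 3) == 3, I = rgb2gray(I); end             % RGB -> greyscale
I_eq = histeq(I);                % histogram equalisation
I_gm = imadjust(I, [], [], 0.5); % gamma correction with gamma = 0.5
figure
subplot(1,3,1), imshow(I), title('Original')
subplot(1,3,2), imshow(I_eq), title('Equalised')
subplot(1,3,3), imshow(I_gm), title('Gamma corrected')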
Images with Different Types of Noise and Image Denoising
5. Synthesise two images from the image ‘Image.gif’ with two types of noise – Gaussian
and “salt and pepper” (imnoise). Visualise the results;
6. Apply the Gaussian filter to the image with Gaussian noise (imgaussfilt). Find the optimal
filter parameter values. Visualise the results;
7. Apply the Gaussian filter to the image with salt-and-pepper noise (imgaussfilt); visualise
and discuss the results;
8. Apply the median filter to the image with salt-and-pepper noise (medfilt2). Find the
optimal filter parameter values. Visualise the results. A sketch of steps 5-8 is given below;
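A minimal sketch of steps 5-8 (the noise and filter parameter values are assumptions to tune; I is assumed to be the greyscale image prepared as in the previous sketch):

In_g  = imnoise(I, 'gaussian', 0, 0.01);   % zero-mean Gaussian noise, variance 0.01
In_sp = imnoise(I, 'salt & pepper', 0.05); % salt-and-pepper noise, density 0.05
If_g  = imgaussfilt(In_g, 1);              % Gaussian filter with sigma = 1
If_m  = medfilt2(In_sp, [3 3]);            % 3x3 median filter
figure, montage({In_g, If_g, In_sp, If_m})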
Static Objects Segmentation by Edge Detection
9. Find edges in the image ‘Image.gif’ with the Sobel operator (edge(..., 'sobel', ...)).
Vary the threshold parameter value and draw conclusions about its influence on the
quality of the segmented image. Visualise the results with the optimal threshold value;
10. Repeat step 9 with the Canny operator (edge(..., 'canny', ...));
11. Repeat step 9 with the Prewitt operator (edge(..., 'prewitt', ...));
Include the resulting images with segmented objects and add conclusions about static
object segmentation using edge detection methods (from steps 9-11) in your report. A minimal sketch is given below.
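A minimal sketch of steps 9-11 (the threshold values are assumptions to vary; I is the greyscale image from above):

E_sobel   = edge(I, 'sobel', 0.05);
E_canny   = edge(I, 'canny', [0.05 0.15]); % [low high] hysteresis thresholds
E_prewitt = edge(I, 'prewitt', 0.05);
figure, montage({E_sobel, E_canny, E_prewitt})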
Lab Session 2: Object Motion Detection & Tracking
This lab session is focused on motion detection and tracking in video sequences. You will
apply the optical flow algorithm to object tracking by using corner points. The optical flow
calculates the motion of image pixels from one frame to another.
You will apply the optical flow algorithm to the “interesting” corner points only, since the
numerical stability of the algorithm is guaranteed only at these points.
You therefore need to find the “interesting” points first, and then apply an optical flow algorithm only
to them.
Background Knowledge
Corner Points
In many applications of image and video processing it is easier to work with “features”
(“characteristic points” or “local feature points”) rather than with all pixels of a frame. These
“features” or “points” should differ from their neighbours in some area.
Corner points are an example of such features. A corner point is a point whose
surroundings differ from the surroundings of its neighbours. Figure 2.1 shows an
example of three types of points: 1) a corner point, 2) an edge point and 3) a point
inside the object (an internal point).
● The corner point is surrounded with the solid line square and its neighbour point is
surrounded by the dotted square. The corner point and its neighbour point have
different surrounding areas.
● For the edge point its surrounding is the same as the surroundings of its neighbour
point in one direction and it is different in any other direction.
● The internal point is surrounded by the same neighbourhood as all other near points
around it.
Figure 2.1. Illustration of the difference between corner, edge and internal points of an object.
Please note that the analysed points are surrounded with a square and the dotted square indicates
the area around neighbour points.
One of the most popular methods for detecting corner points is the Harris corner detector.
It is used by default in the MATLAB function corner.
The Optical Flow Approach
An optical flow can be represented as a vector field of apparent pixel motion between
frames. Optical flow estimation is one of the widely used methods for motion detection in robotics
and computer vision. Given two images $I_1$ and $I_2$, optical flow estimation algorithms can find
the vector field

$$ \{ (u_{i,j},\ v_{i,j}) : i = 1, \dots, N,\ j = 1, \dots, M \}, $$

where $[N, M]$ is the image size. The vector field contains a displacement vector for each pixel:
pixel $(x, y)$ from the image $I_1$ will have the location $(x + u_{x,y},\ y + v_{x,y})$ in the image $I_2$.
There are many different methods for optical flow estimation. The Lucas-Kanade algorithm
is one of the most popular algorithms. This lab considers only the Lucas-Kanade algorithm.
It has the following assumptions:
1. Brightness (colour) constancy. This means that pixels do not change their colour
between frames.
2. Spatial similarity. This means that the neighbours of each pixel have similar motion
vectors.
3. Small displacement. This means that the displacement (motion) vectors are small,
so a Taylor series expansion can be applied.
With these assumptions in place, the calculation of the optical flow reduces to solving an
overdetermined linear system, which is done by the least squares method. The conditions
for the solution of this overdetermined linear system lead to the Lucas-Kanade algorithm. You will
apply the Lucas-Kanade algorithm to the “interesting” (“feature”) points only.
Tracking with the optical flow
Object tracking is the process of localising an object and associating its location in the
current frame with its locations in the previous ones, building a trajectory for each object.
Optical flow estimation algorithms provide a tool to calculate a displacement vector from one
frame to another. This information can be used for tracking purposes. Indeed, if we
determine the point of interest in the first frame, we can compute a displacement vector for
it in every successive frame, using an optical flow estimation algorithm. The sequence of
positions of the point, computed from the displacement vectors, constitutes the trajectory of
this point.
If we want to track a non-point object, we can find “interesting” points on the object, track
them and use the median position of the “interesting” points as the position of the object. Since
optical flow estimation algorithms are not perfect and can lose tracking points, one should
reinitialise the “interesting” points from time to time. At any time instant, the introduced
“interesting” points should satisfy the following constraints:
● A point should not be far from the current median position of the object – it has to be
inside the current bounding box;
● A point should be on the object – in your task you will use colour for this constraint;
● Each pair of tracking points has to differ from each other – if two points are too close
to each other, one of them will be deleted.
As a result, we have the following algorithm:
1. Build a colour template of the object in the first frame.
2. If necessary (in your object detection task) read the next frame.
3. Detect “interesting” points of the object in the current frame. Make sure they are
satisfying all the constraints, mentioned above.
4. Initialise tracks with detected and filtered “interesting” points.
5. Compute an optical flow for every “interesting” point between successive frames
6. Compute new positions of the tracks by adding the optical flow vectors to the current
positions in the tracks.
7. Make sure the new positions of the tracks satisfy the second and third constraints,
mentioned above. If not, delete those tracks.
8. Compute the median position of the new positions of the tracks. Move the bounding
box to the new median position.
9. Make sure the new positions of the tracks are inside the bounding box. If not, delete
those tracks.
10. Repeat steps 5-9, introducing new “interesting” points of the object every k frames.
It is recommended to use k = 5.
Optical Flow Estimation and Visualisation with MATLAB
In MATLAB there is an optical flow object for optical flow estimation – opticalFlowLK
(http://uk.mathworks.com/help/vision/ref/opticalflowlk-class.html).
To estimate an optical flow you will use the command estimateFlow
(http://uk.mathworks.com/help/vision/ref/opticalflowlk.estimateflow.html).
videoReader = VideoReader('…');
frameRGB = readFrame(videoReader);
frameGrey = rgb2gray(frameRGB);
opticFlow = opticalFlowLK('NoiseThreshold',0.009);
flow = estimateFlow(opticFlow,frameGrey);
You can use the following fields of the flow object:
● flow.Vx – the horizontal component of the velocity. size(flow.Vx) ==
size(frameGrey). flow.Vx(i, j) is the horizontal component of the velocity of the pixel
(i, j).
● flow.Vy – the vertical component of the velocity. size(flow.Vy) == size(frameGrey).
flow.Vy(i, j) is the vertical component of the velocity of the pixel (i, j).
You need the Computer Vision System toolbox from MATLAB.
For visualisation of the optical flow there are several options:
1. with the command plot
(http://uk.mathworks.com/help/vision/ref/opticalflow.plot.html)
2. with the command quiver(u, -v, 0), where u and v are the horizontal and vertical
displacements, respectively. Note that it may take some time to visualise the
results in your figure.
*Moving a bounding box to a new position – help for the provided
function
In the object tracking task you could move a bounding box around an object to a new position
between frames. The function ShiftBbox could help perform this task.
The function ShiftBbox has two input arguments:
● input_bbox – the current bounding box, a 1 x 4 vector: input_bbox(1:2) are the
spatial coordinates of the top-left corner of the bounding box, input_bbox(3) is the
horizontal size of the bounding box, and input_bbox(4) is the vertical size of the
bounding box;
● new_center – the new position of the centre of the bounding box in spatial
coordinates
The function ShiftBbox has one output:
● shifted_bbox – the updated bounding box in the same format as the input_bbox
argument. The centre of the updated bounding box is equal to the new_center input
parameter
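For example, assuming the documented behaviour above, a hypothetical call could look like this:

input_bbox = [10, 20, 50, 40];  % [x_topleft, y_topleft, width, height]
new_center = [60, 45];          % desired bounding-box centre in spatial coordinates
shifted_bbox = ShiftBbox(input_bbox, new_center);
% expected result: [35, 25, 50, 40] – same size, now centred at (60, 45)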
Guidance for Performing Lab Session on Optical Flow
1. Find corner points (with the corner MATLAB function) in the images
‘red_square_static.jpg’ and ‘GingerBreadMan_first.jpg’. Note that the corner
function works with greyscale images, so you first need to convert the input images to
the greyscale format. Next, apply the function with different values of the maximum
number of corners. Include the resulting images in your report. You need to show
the results for only one value of the maximum number of corners.
2. Find the optical flow of the pixels that moved from the image
‘GingerBreadMan_first.jpg’ to the image ‘GingerBreadMan_second.jpg’
(opticalFlowLK, estimateFlow). Note that the estimateFlow function works with
greyscale images; you need to convert the input images to the greyscale format.
Include a visualisation of the calculated optical flow, by any of the provided
methods, in your report.
3. Perform tracking of a single point using the optical flow algorithm in the video
‘red_square_video.mp4’:
a. Create a video reader object to read the ‘red_square_video.mp4’ video
(VideoReader);
b. Create an optical flow object (opticalFlowLK);
c. Read the first frame (readFrame);
d. Find the top-left point of the red square in the first frame (manually; you can use
the corner command to help);
e. Add position of this point as the first position in the track;
f. Run the function estimateFlow with the first frame to initialise the optical
flow object;
g. Read the next frame (readFrame);
h. We know that Lucas-Kanade optical flow estimation works well only for
“interesting” points. The estimateFlow function compares the current frame
with the previous one. This means that we should use the
“interesting” point from the current frame, and not the point from the previous
frame which you detected in step d. This is the reason why we should find
the corner point nearest to the position of the point of interest from frame
1 and calculate an optical flow for it.
Find corner points (corner) in frame 2;
i. Find the nearest corner point to your first position from the track;
j. Compute an optical flow (with the estimateFlow command) for this point
(between frames 1 and 2);
k. Compute a new position of the point by adding the found velocity vector to
the current position:
x_new = corner_x + flow.Vx(round(corner_y), round(corner_x));
y_new = corner_y + flow.Vy(round(corner_y), round(corner_x));
where corner_x and corner_y denote the coordinates of the nearest corner, flow is
the optical flow object, the output of the estimateFlow function;
m. Add the new position of the point as the second position in the track;
n. Read the next frame (readFrame);
o. As optical flow estimation is not perfect, your new point can differ from the
actual corner. We also know that the Lucas-Kanade optical flow estimation
algorithm works well only for “interesting” points. Hence, we should find the
nearest corner point for our estimated position of the point of interest and
calculate an optical flow for it.
Find corner points (with the corner function) in frame 3;
p. Find the nearest corner point to your second position from the track;
q. Compute an optical flow ( with estimateFlow) for this nearest point (between
frames 2 and 3);
r. Compute a new position of the point by adding the found velocity vector to
the current position;
s. Add the new position of the corner as the third position in the track;
t. Read the next frame (readFrame);
u. Find corner points (corner) in frame 4;
v. Find nearest corner point to your third position from the track;
w. Compute an optical flow (estimateFlow) for this nearest point (between
frames 3 and 4) and so on.
Visualise the track on the last frame of the video. Plot on the same figure the
estimated trajectory and the ground truth trajectory (available in the file
red_square_gt.mat, which contains the correct trajectory of the top-left point of the
red square in the variable gt_track_spatial; note that the ground truth trajectory is
given in the spatial coordinate frame). The file new_red_square_gt.mat contains the
ground truth with matching dimensions. Note that the ground truth data has
one extra point at the end, which you do not need to use. When you save your results
from the optical flow, you do not need to keep the first point, which comes from the
algorithm initialisation. With these two changes in mind, you can calculate the
root mean square error of the estimated trajectory with respect to the ground truth
over all video frames. Compute the error for each point of the trajectory. In your
report, write the equation that you used to calculate the root mean square error.
Plot the results and include the plot in your report. Please use the zoom-in functionality
of the MATLAB figure in order to visualise the estimation errors well. Draw
conclusions about the accuracy of the method.
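For reference, denoting the estimated trajectory by $(\hat{x}_k, \hat{y}_k)$ and the ground truth by $(x_k, y_k)$ for frames $k = 1, \dots, K$, the per-frame error and one standard form of the root mean square error are:

$$ e_k = \sqrt{(\hat{x}_k - x_k)^2 + (\hat{y}_k - y_k)^2}, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{K} \sum_{k=1}^{K} \left[ (\hat{x}_k - x_k)^2 + (\hat{y}_k - y_k)^2 \right]} $$

The loop in steps a-w can be organised as in the following minimal sketch; the initial point is an assumed placeholder found manually, and the parameter values are assumptions to adapt:

% Minimal sketch of single-point tracking with Lucas-Kanade optical flow.
videoReader = VideoReader('red_square_video.mp4');
opticFlow = opticalFlowLK('NoiseThreshold', 0.009);

frame = readFrame(videoReader);
estimateFlow(opticFlow, rgb2gray(frame)); % initialise the flow object
track = [10, 10];                         % assumed top-left corner of the square (x, y)

while hasFrame(videoReader)
    frame = readFrame(videoReader);
    grey = rgb2gray(frame);
    corners = corner(grey);               % corner points in the current frame, (x, y) rows
    % find the corner nearest to the last tracked position
    d = hypot(corners(:,1) - track(end,1), corners(:,2) - track(end,2));
    [~, idx] = min(d);
    cx = corners(idx, 1); cy = corners(idx, 2);
    flow = estimateFlow(opticFlow, grey); % flow between the previous and current frames
    track(end+1, :) = [cx + flow.Vx(round(cy), round(cx)), ...
                       cy + flow.Vy(round(cy), round(cx))];
end

% visualise the estimated track on the last frame
imshow(frame); hold on;
plot(track(:,1), track(:,2), 'g-', 'LineWidth', 2); hold off;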
Lab Session 3: Background Subtraction
This lab session is focused on video processing, in particular on background subtraction
methods for automatic object detection from static cameras. You will learn how the basic
frame differencing algorithm for object detection works in a sequence of video frames, as
provided by an optical video camera. You will compare the results with the Gaussian
mixture model for background subtraction.
For this lab session you will need to have the Computer Vision System Toolbox in MATLAB.
Background Knowledge
Background Subtraction
As the name suggests, background subtraction is the process of separating out foreground
objects (the moving objects) from the background (the static environment) in a sequence of
video frames. The process can be performed off-line, but more commonly is needed in real
time. Background subtraction is used in many emerging video applications, such as video
surveillance, traffic monitoring, and gesture recognition for human-machine interfaces, to
name a few.
Frame Differencing Approach
The frame differencing approach is the simplest form of background subtraction. It usually
works on the video frames, after converting them from colour to greyscale format. Hence,
the first thing to do is to convert the video frames arriving from the camera in RGB or HSV
format to a greyscale format. Next, the current grey scale frame is simply subtracted from
the previous frame, and if the difference in pixel intensity values for a given pixel is greater
than a pre-specified threshold Ts, the pixel is considered as being a part of the foreground
The algorithm steps are listed below:
1. Convert the incoming frame 'fr' to greyscale (here we assume a color RGB sensor)
2. Subtract the current frame from the background model 'bg_bw' (in this case it is just
the previous frame)
3. For each pixel, if the difference between the current frame and backround 'fr_diff
(j,k)' is greater than a threshold 'thresh', the pixel is considered part of the
foreground.
Gaussian Mixture Model Approach
In the Gaussian mixture model approach, one builds a model of the background. It is
assumed that the intensity of the background pixels is changeable. The distribution of the
intensity is modelled as a mixture of Gaussian distributions. All the components of the
mixture are scored based on the component weight in the mixture and its variance
(respectively, standard deviation). The components with the bigger scores are labelled as
background, and the others are labelled as foreground.
This approach is implemented in MATLAB, in the Computer Vision System Toolbox. You
can use the vision.ForegroundDetector function to perform foreground detection by the
Gaussian mixture model approach.
Guidance for Performing Lab Session on Background Subtraction
Video Reading/ Writing
1. Create the object to read the video ‘car-tracking.mp4’:
source = VideoReader('car-tracking.mp4');
Note that VideoReader supports the following file extensions: “.avi”, “.mj2”, “.mp4”
or “.m4v” and others;
2. Create the object to write the video into the file ‘results.mp4’:
output = VideoWriter('results.mp4', 'MPEG-4');
Note that VideoWriter supports the following file extensions: “.avi”, “.mj2”, “.mp4” or
“.m4v”;
3. Open the writer object in order to have an opportunity to write anything to the file:
open(output);
4. Read a frame from the input file:
frame = readFrame(source);
5. Visualise the frame:
imshow(frame);
6. Write the frame to the output file:
writeVideo(output, frame);
7. Make a loop to read and write frames. In order to read all frames you can check
whether the reader object still has frames:
while hasFrame(source)
 frame = readFrame(source);
 imshow(frame);
 writeVideo(output, frame);
end
8. To finalise the output video close the video writer object:
close(output);
Frame Differencing Algorithm for Background Subtraction
One possible solution includes the following steps:
9. Open the script “Frame_difference.m”. You need to vary the threshold parameter in
the frame differencing algorithm for background subtraction and draw conclusions.
The following steps explain the commands in the script;
10.Create and open a video reader object to read the video ‘car-tracking.mp4’ (with
VideoReader);
11.Set the threshold parameter value. Vary the threshold and comment on its
influence over the detection process. Include your conclusions in your report, take
snapshot video frames and show them in your report.
12.Read the first frame as a background (with readFrame) and convert it to the
greyscale format (rgb2gray);
13.Write a loop on frames:
a. Read a new frame (readFrame);
b. Convert the new frame to the greyscale format (rgb2gray);
c. Compute the difference between the current frame and the background
frame;
d. Create the frame with foreground mask, where the difference is bigger than
the threshold the output pixel is white, otherwise is black;
e. Update the background frame with the current one;
f. Visualise the results;
g. Write the foreground frame to the output video;
14.Close the video writer object
Gaussian Mixture Model Algorithm for Background Subtraction
15. Open the script “Gaussian_mixture_models.m”. You need to vary the number of
initial frames used to learn the background model, the number of Gaussian
components in the mixture of the Gaussian mixture model algorithm for background
subtraction, and the other parameters of the function, and draw conclusions. The script is
very similar to the previous one, so we highlight only the differences:
a. You will use the foreground detector object from the Computer Vision
System toolbox in MATLAB – vision.ForegroundDetector();
b. You need to vary three parameters:
i. The number of initial frames for training a background model;
ii. The number of Gaussians in the mixture;
iii. The threshold for decision making.
c. To apply the foreground detector to a new frame, use the step function. It
returns the foreground mask of the frame in the logical format.
Comment in your report on how the parameters influence the detection performance. Take
snapshot video frames and show them in your report in order to support your conclusions.
MATLAB scripts
Frame_difference.m
clear all
close all
% read the video
source = VideoReader('car-tracking.mp4');
% create and open the object to write the results
output = VideoWriter('frame_difference_output.mp4', 'MPEG-4');
open(output);
thresh = 25; % A parameter to vary
% read the first frame of the video as a background model
bg = readFrame(source);
bg_bw = rgb2gray(bg); % convert background to greyscale
% --------------------- process frames -----------------------------------
% loop all the frames
while hasFrame(source)
 fr = readFrame(source); % read in frame
 fr_bw = rgb2gray(fr); % convert frame to greyscale
 fr_diff = abs(double(fr_bw) - double(bg_bw)); % cast operands as double to avoid negative overflow

 % if fr_diff > thresh pixel in foreground
 fg = uint8(zeros(size(bg_bw)));
 fg(fr_diff > thresh) = 255;

 % update the background model
 bg_bw = fr_bw;

 % visualise the results
 figure(1),subplot(3,1,1), imshow(fr)
 subplot(3,1,2), imshow(fr_bw)
 subplot(3,1,3), imshow(fg)
 drawnow

 writeVideo(output, fg); % save frame into the output video
end
close(output); % save video
Gaussian_mixture_models.m
clear all
close all
% read the video
source = VideoReader('car-tracking.mp4');
% create and open the object to write the results
output = VideoWriter('gmm_output.mp4', 'MPEG-4');
open(output);
% create foreground detector object
n_frames = 10; % a parameter to vary
n_gaussians = 3; % a parameter to vary
detector = vision.ForegroundDetector('NumTrainingFrames', n_frames, ...
    'NumGaussians', n_gaussians);
% --------------------- process frames -----------------------------------
% loop all the frames
while hasFrame(source)
 fr = readFrame(source); % read in frame

 fgMask = step(detector, fr); % compute the foreground mask by Gaussian mixture models

 % create frame with foreground detection
 fg = uint8(zeros(size(fr, 1), size(fr, 2)));
 fg(fgMask) = 255;

 % visualise the results
 figure(1),subplot(2,1,1), imshow(fr)
 subplot(2,1,2), imshow(fg)
 drawnow

 writeVideo(output, fg); % save frame into the output video
end
close(output); % save video
Lab 4: Robot Treasure Hunting – Towards Autonomous Decision
Making with Computer Vision
A robot is given a task to search and find “treasures” in three tasks: easy, medium and
difficult. The starting point of the robot search is where the red arrow is.
In this practical session you will apply image processing techniques to perform autonomous
robot decision making from images. The task focuses on the development of a search
algorithm in images with a single treasure or multiple treasures. One possible solution
follows the arrows and other features available in these images until one
of the objects is reached. You are provided with a starting script, which needs completion, and several
functions to be written.
Background Knowledge
Image Plane
In MATLAB there are several types of coordinate systems for images; we will focus on
two of them: pixel and spatial coordinates.
Pixel Coordinates
When you read an image with the imread command you get a 3D matrix (for an RGB image) or
a 2D matrix (for a grey or binary image). These are shown in Figures 4.1 and 4.2, respectively.
Figure 4.1. Pixel coordinate system for an RGB image. Figure 4.2. Pixel coordinate system for a greyscale image.
You can use the matrix elements (rows and columns) to access pixel values. For example,
to get the intensity level of the highlighted pixel in the right figure you can use:
im = imread('image');
im(2, 3)
Spatial Coordinates
In a spatial coordinate system, locations in an image are positions on a plane and they
are described by x and y coordinates (rather than by rows and columns as before). Figure
4.3 shows the spatial coordinate system for an image. Note that the centre of the pixel in
the 2nd row and 3rd column (marked as *) has the spatial coordinates x = 3, y = 2. For
example, to plot this mark you can use the plot function, which works with spatial
coordinates:
im = imread('image');
imshow(im);
hold on
plot(3, 2, '*black')
hold off
Figure 4.3. Spatial coordinate system for an image
Image Binarisation
Often, in image analysis it is easier to work with binary images than with greyscale or
colour ones. In these cases, white pixels (with label 1) usually correspond to the
objects of interest and black pixels (with label 0) correspond to the background. For image
binarisation it is important to tune the threshold parameter: if the threshold value is too big,
some objects or parts of objects can be lost; if the threshold is too low, some parts of the
background can be labelled with 1. In MATLAB the command im2bw performs the
conversion of an image to a binary format, the operation which we call image binarisation.
Connected Component Analysis
Once you have distinguished the objects of interest from the background (for example, by image
binarisation), you may want to distinguish each object. One way to distinguish objects, if
there is no occlusion between them, is to compute the connected components of the binary
image. The idea is to assign the same label to pixels that are connected (based on an image
feature or another criterion). Pixels that are not connected with the current region are
assigned a different label. Two pixels are called “connected” if it is possible to build a path
from one pixel to the other using only foreground pixels. By foreground we mean the object
of interest, and by background we denote the environment. Two successive pixels in the
path must be neighbours. The neighbourhood definitions can differ. For example, you can see
in Figure 4.4 the 4-connected area (on the left) and the 8-connected area (on the right),
where the red pixel is the current one and the blue ones are its neighbours.
Figure 4.4. Examples of neighbourhoods used in connected component analysis. A 4-connected
area is shown on the left hand side, and an 8-connected area on the right hand side.
A possible result of labelling connected components is presented in Figure 4.5. The
image on the left shows the input binary image. The matrix on the right represents the
labelled output image.
Figure 4.5. The results of connected component analysis of a binary image. The input image is on
the left hand side, the labelled output is on the right hand side.
In MATLAB, the command bwlabel performs connected component analysis of an input
binary image and returns the labelled output (Figure 4.5, right). The label2rgb command is useful
for the visualisation of the results.
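A minimal sketch of these two commands on a tiny hand-made binary matrix (the values are made up for illustration):

bw = logical([0 1 1 0; 0 1 0 0; 0 0 0 1]);
[labels, n] = bwlabel(bw, 8);      % 8-connected components; here n == 2
figure, imshow(label2rgb(labels))  % each component in a different colour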
Performing Lab Session 4: Robot Treasure Hunting
In this lab session you will need to extend the provided MATLAB script and finalise it. You
are already given most of the functions for performing the robot treasure hunting tasks. The
parts which are missing, and which you will complete yourself, are highlighted in colour.
1. Open the script “treasure_hunting.m”. In this script you have the main necessary
commands, except for the highlighted ones. Make sure you have the input
images: “Treasure_simple.jpg”, “Treasure_medium.jpg” and “Treasure_hard.jpg”.
2. Convert all given images to binary format, using im2bw. You should find an
appropriate threshold value, so that the binarisation operation applies to all objects
in the input images. Include the results of the binarisation of one of the images, and the
value of the threshold, in your report.
3. Find the connected components of your obtained binary images (with bwlabel) and
visualise the detected connected components (with label2rgb). You have this functionality
in the provided script. You do not need to include this visualisation in your report.
4. Compute different characteristics of the connected components (with regionprops).
Visualise the bounding boxes of the objects (the BoundingBox field of the regionprops
function output, rectangle('Position', ...)). You have this functionality in the provided
script. It is not required to include this visualisation in your report.
5. Develop a function to distinguish arrows from other objects. What distinguishes the
arrows from the other objects? You may use any ideas from the lectures, the previous lab
sessions or common sense in your function. Include a brief description of the main idea of
your function in your report and the actual code of the function in the appendix of your
report. Hint: arrows have points of a different colour.
6. Find the starting red arrow. You have this functionality in the provided script.
7. Develop a function to find the label of the next nearest object, to which the current
arrow points.
Hint 1: to define a line it is enough to have two points.
Hint 2: for each arrow you may extract the centroid point and the yellow point.
Hint 3: a vector (x2 - x1, y2 - y1) points in the direction from the point (x1, y1) to the point
(x2, y2).
8. Apply your functions to find the treasure in the images. Visualise the treasure and the
path to it from the starting arrow. You have this functionality in the provided script. Include
your visualisation in your report for all images.
9. Other solutions of this task are possible. If you propose a different solution, include a
brief description of it and provide a diagram of the main idea of your solution. Include your
functions in your report.
Additional Guidance on Performing the Robot Treasure Hunting Task
1. The areas of the arrows differ from those of the treasures.
2. All arrows have a yellow dot inside, while treasures do not contain any yellow pixels.
You may have other ideas; you do not necessarily have to use the hints. Include a brief
description of the main idea of your function in your report. Please include the code of the
function in an Appendix of your report.
In order to find the next object, you could consider these suggestions (a sketch is given after this list):
1. For each arrow you may extract the centroid point and the centroid of the yellow dot.
2. To define a line, it is enough to have two points.
3. A vector (x2 - x1, y2 - y1) points in the direction from the point (x1, y1) to the point (x2, y2).
After extending the vector, you may find that it enters the bounding box of another
arrow or of the treasure. That object is then the next one on the path.
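A minimal sketch of suggestion 3 (the variable names, the step size and the helper inside_any_bbox are hypothetical placeholders):

c = props(object_id).Centroid;   % arrow centroid (x1, y1)
yc = yellow_centroid;            % centroid of the arrow's yellow dot (x2, y2)
dir = (yc - c) / norm(yc - c);   % unit vector from the centroid towards the dot
p = yc;
while ~inside_any_bbox(p, props) % hypothetical helper testing all bounding boxes
    p = p + 5 * dir;             % extend the ray in small steps
end
% the object whose bounding box p falls into is the next object on the path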
The final result can look, for example, like the figure produced by the visualisation code below: the path objects outlined in yellow with red numbering, and the treasure outlined in green.
MATLAB script which needs to be completed
close all;
clear all;
%% Reading image
im = imread('Treasure_simple.jpg'); % change name to process other images
imshow(im); pause;
%% Binarisation
bin_threshold = 0; % parameter to vary
bin_im = im2bw(im, bin_threshold);
imshow(bin_im); pause;
%% Extracting connected components
con_com = bwlabel(bin_im);
imshow(label2rgb(con_com)); pause;
%% Computing objects properties
props = regionprops(con_com);
%% Drawing bounding boxes
n_objects = numel(props);
imshow(im);
hold on;
for object_id = 1 : n_objects
 rectangle('Position', props(object_id).BoundingBox, 'EdgeColor', 'b');
end
hold off; pause;
%% Arrow/non-arrow determination
% You should develop a function arrow_finder, which returns the IDs of the
% arrow objects. IDs follow the connected component analysis order. You may
% use any parameters for your function.
arrow_ind = arrow_finder();
%% Finding red arrow
n_arrows = numel(arrow_ind);
start_arrow_id = 0;
% check each arrow until find the red one
for arrow_num = 1 : n_arrows
 object_id = arrow_ind(arrow_num); % determine the arrow id

 % extract colour of the centroid point of the current arrow
 centroid_colour = im(round(props(object_id).Centroid(2)), ...
 round(props(object_id).Centroid(1)), :);
 if centroid_colour(:, :, 1) > 240 && centroid_colour(:, :, 2) < 10 && ...
 centroid_colour(:, :, 3) < 10
 % the centroid point is red, memorise its id and break the loop
 start_arrow_id = object_id;
 break;
 end
end
%% Hunting
cur_object = start_arrow_id; % start from the red arrow
path = cur_object;
% while the current object is an arrow, continue to search
while ismember(cur_object, arrow_ind)
 % You should develop a function next_object_finder, which returns
 % the ID of the nearest object, which is pointed at by the current
 % arrow. You may use any other parameters for your function.
 cur_object = next_object_finder(cur_object);
 path(end + 1) = cur_object;
end
%% visualisation of the path
imshow(im);
hold on;
for path_element = 1 : numel(path) - 1
 object_id = path(path_element); % determine the object id
 rectangle('Position', props(object_id).BoundingBox, 'EdgeColor', 'y');
 str = num2str(path_element);
 text(props(object_id).BoundingBox(1), props(object_id).BoundingBox(2), str, ...
 'Color', 'r', 'FontWeight', 'bold', 'FontSize', 14);
end
% visualisation of the treasure
treasure_id = path(end);
rectangle('Position', props(treasure_id).BoundingBox, 'EdgeColor', 'g');
Lab Session 5: Convolutional Neural Networks for Image
Classification
This session is about Convolutional Neural Networks (CNNs) and how to perform image
classification tasks with them. You will learn how to design, train, and evaluate a CNN using
the Deep Learning Toolbox in MATLAB.
Please keep in mind that deep learning tasks such as training a Convolutional Neural
Network (CNN) can be computationally intensive. Considering computational constraints,
the choice here is LeNet-5 – a relatively small but robust neural network architecture. Its small
size should make it feasible to train the model even on a regular laptop, without the necessity
for a high-end GPU.
In this lab you will implement a Convolutional Neural Network (CNN) called LeNet-5 and
learn about the principle of operation of CNNs [Lecun et al., 1998]. You will solve an image
classification task on the data set named Digits [Digit Data, 2023]. You need to run the code
provided below and assess the performance of the CNN. The next subtask includes
improving these results by performing some of the operations differently, e.g. with a different
kernel size for the convolutional layers (3x3 and 5x5, stride 1 and 2), a different pooling
operation (max pooling and average pooling), and with and without dropout operations.
Train the CNN with a different number of operations, a different learning rate and a different
training data region, and discuss the results. This task requires varying the training parameters
of the trainingOptions function; a rough sketch is given below.
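As a rough orientation, a LeNet-5-style network can be assembled and trained along the following lines; the layer sizes, activations and training options below are assumptions to vary, not the provided coursework code:

% A LeNet-5-style sketch with the Deep Learning Toolbox (28x28 greyscale input).
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(5, 6, 'Padding', 'same') % 5x5 kernels, 6 feature maps
    tanhLayer
    averagePooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(5, 16)                   % 5x5 kernels, 16 feature maps
    tanhLayer
    averagePooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(120)
    tanhLayer
    fullyConnectedLayer(84)
    tanhLayer
    fullyConnectedLayer(10)                     % 10 digit classes
    softmaxLayer
    classificationLayer];
options = trainingOptions('sgdm', 'MaxEpochs', 10, ...
    'InitialLearnRate', 0.01, 'Plots', 'training-progress');
% net = trainNetwork(imdsTrain, layers, options); % imdsTrain: a labelled image datastore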
Background Knowledge
I. Convolutional Neural Networks
Convolutional Neural Networks (CNNs) [Alzubaidi et al., 2021] are a class of deep, feedforward artificial neural networks that have been successful in various visual recognition
tasks. CNNs are particularly effective in applications involving spatial data, where the
proximity of features within the input space is important for the task, such as image and
video recognition.
Figure 5.1 below shows the architecture of LeNet-5 [Lecun et al., 1998], which was primarily
designed for handwriting and character recognition. LeNet-5, being one of the earliest
convolutional neural networks, has played a significant role in the development of deep
learning, particularly in the field of computer vision.
Figure 5.1 Architecture of LeNet5 [Lecun et al., 1998]
Despite its relative simplicity compared to newer architectures, it can still serve as a useful
starting point for image recognition tasks.
II. Layers in a Convolutional Neural Network
A typical CNN consists of a sequence of layers, each transforming one volume of activations
to another through a differentiable function. There are three main types of layers to build a
CNN architecture: Convolutional Layer, Pooling Layer, and Fully-Connected Layer.
Convolutional Layer: The primary purpose of a convolution operation in a CNN is to extract
features from the input image. In this layer, several filters are convolved with the input image
to compute maps of activations called feature maps. These feature maps learn to activate
when they see a specific type of feature at some spatial position in the input.
Below, Figure 5.2 on the left shows the input image, in the middle the filter (mask) and on
the right is the result from the convolution operation, i.e. the obtained feature map.
Figure 5.2 Convolution operation on an image
The following Figure 5.3 shows the execution of the convolution operation:
Figure 5.3. The steps showing the convolution operation
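As a quick illustration of the operation shown in Figures 5.2 and 5.3, the snippet below correlates a small binary image with a 3x3 vertical-edge kernel (CNN layers implement cross-correlation, which MATLAB's filter2 computes directly); all values are illustrative only.
I = [1 1 1 0 0;
     1 1 1 0 0;
     1 1 1 0 0;
     1 1 1 0 0];                       % small binary input image
K = [1 0 -1;
     1 0 -1;
     1 0 -1];                          % vertical-edge kernel (mask)
featureMap = filter2(K, I, 'valid')    % 2x3 feature map; responds at the 1-to-0 edge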
Pooling Layer: After each convolutional layer, a pooling layer is often added to down-sample
the features, which reduces the spatial size of the representation and thereby the number of
parameters and the amount of computation in the network. The pooling layer operates on
each feature map independently to produce the output feature map. Max pooling is the most
common type of pooling, where each output pixel contains the maximum value of a
neighbourhood of the input, as illustrated in Figure 5.4:
Figure 5.4. Max pooling operation
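A small worked example of 2x2 max pooling with stride 2 on a 4x4 feature map (illustrative values):
F = [1 3 2 4;
     5 6 1 2;
     7 2 9 1;
     3 4 5 6];
pooled = zeros(2, 2);
for r = 1:2
    for c = 1:2
        block = F(2*r-1:2*r, 2*c-1:2*c);   % 2x2 neighbourhood
        pooled(r, c) = max(block(:));      % keep the maximum value
    end
end
pooled                                     % gives [6 4; 7 9]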
Fully Connected Layer: The fully connected layer is a traditional Multi-Layer Perceptron
that uses a softmax activation function in the output layer. The term "Fully
Connected" implies that every neuron in the previous layer is connected to every neuron in
the next layer, as illustrated below in Figure 5.5:
Figure 5.5 Fully Connected Layer
The output from the convolutional and pooling layers represents high-level features of the
input image. The purpose of the Fully Connected layer is to use these features for
classifying the input image into various classes based on the training dataset.
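The softmax activation mentioned above converts the raw outputs (logits) of the last fully connected layer into class probabilities that sum to one. A brief illustration with made-up values:
z = [2.0 1.0 0.1];         % logits from the last fully connected layer
p = exp(z) ./ sum(exp(z))  % probabilities, approximately [0.659 0.242 0.099]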
III. Task 1a: Designing a Convolutional Neural Network
Designing a CNN involves several decisions. These include the choice and configuration
of layers (convolutional, pooling, fully connected, normalization etc.), the size of filters in
the convolutional layers, the stride (step size) for those filters, and how to handle the
borders of the data. The design of a CNN is often determined based on the problem
context and the size and complexity of the input data.
IV. Task 1b: Training a Convolutional Neural Network
Typically a CNN is trained via a supervised learning process with a labelled dataset. This
involves feeding the network with labelled training data, where it makes predictions based
on the current state of its weights and biases. At the core of the training is the
backpropagation stage, which adjusts the weights and biases to minimise the network's
error. The error is quantified by a loss function, which, in our case, is the cross-entropy loss
[de Boer et al., 2005].
The cross-entropy loss function is important in many classification tasks as it measures the
difference between the predicted probability distribution (output from the softmax layer) for
each class and the true distribution, where the actual class has a probability of one. The loss
is high when the predicted probability diverges significantly from the actual label, prompting
the network to make substantial adjustments. Conversely, a low loss results from predictions
close to the true labels, requiring less substantial changes. Through backpropagation, the
network uses this loss to update its parameters, aiming to minimise this loss across all
training examples throughout the epochs. The goal of training is to refine these parameters
to reduce the loss function to a point where the network's predictions are as accurate as
possible.
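For a single training example, the cross-entropy loss reduces to the negative logarithm of the probability the network assigns to the true class, as the brief numerical illustration below shows (made-up values):
p = [0.1 0.2 0.6 0.1];    % softmax output over four classes
y = [0 0 1 0];            % one-hot encoding of the true class
loss = -sum(y .* log(p))  % = -log(0.6), approximately 0.51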
V. Task 2: Evaluating the Performance of a Convolutional Neural Network
Evaluating a CNN performance involves using a separate testing dataset to measure the
network's ability to generalise from the training data to unseen data.
The data is split into training and testing sets. Working over the whole testing data set,
please calculate the required metrics.
The evaluation metrics include accuracy (the proportion of correct predictions), precision
(the proportion of positive predictions that are actually positive), recall (the proportion of
actual positives that were predicted as positive), and the F1 score (a balance between
precision and recall). More details on the evaluation metrics can be found here
[Sokolova and Lapalme, 2009]:
                Predicted: Yes           Predicted: No
Actual: Yes     True Positives (TP)      False Negatives (FN)
Actual: No      False Positives (FP)     True Negatives (TN)
The performance evaluation metrics can be calculated as follows [Annika et al., 2021,
Sokolova and Lapalme, 2009]:
● Accuracy = (TP+TN) / (TP+FP+FN+TN)
● Precision = TP / (TP+FP)
● Recall = TP / (TP+FN)
● F1 Score = 2*(Recall * Precision) / (Recall + Precision)
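As a sketch of how these metrics could be computed per class in MATLAB, assuming the predicted labels YPred and true labels YValidation produced in step 5 of the guidance below (confusionmat requires the Statistics and Machine Learning Toolbox):
C = confusionmat(YValidation, YPred);  % rows: actual class, columns: predicted class
TP = diag(C);
FP = sum(C, 1)' - TP;                  % column sums minus the diagonal
FN = sum(C, 2) - TP;                   % row sums minus the diagonal
precision = TP ./ (TP + FP);
recall = TP ./ (TP + FN);
f1 = 2 * (precision .* recall) ./ (precision + recall);
fprintf('Macro precision: %f, recall: %f, F1: %f\n', ...
    mean(precision), mean(recall), mean(f1));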
Guidance for Performing Lab Session on Convolutional Neural
Networks
Designing and Training a CNN
1. Load the image dataset: load the Digits dataset from a predefined path in the MATLAB toolbox.
The imageDatastore function handles the loading of image data. All subfolders are included and
folder names are used as labels. The images are resized to 32x32 pixels for compatibility with the
LeNet-5 architecture. The data is then split into training and validation sets, with 70% used for
training and 30% for validation.
% Load the Digits dataset
digitDatasetPath = fullfile(toolboxdir('nnet'), 'nndemos', ...
 'nndatasets', 'DigitDataset');
imds = imageDatastore(digitDatasetPath, ...
 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
imds.ReadFcn = @(loc)imresize(imread(loc), [32, 32]);
% Split the data into training and validation datasets
[imdsTrain, imdsValidation] = splitEachLabel(imds, 0.7, 'randomized');
2. Define the architecture of the CNN as an array of layers: it begins with an input layer
expecting images of size 32x32 with one channel (greyscale). Following this are alternating
convolutional and average pooling layers. Then, there are several fully connected layers,
ending with a softmax layer and a classification output layer. In MATLAB, the loss function is
implicitly defined through the classificationLayer. Refer to the MATLAB documentation to
understand how to define each of these layers.
% Define the LeNet-5 architecture
layers = [
    imageInputLayer([32 32 1],'Name','input')                  % 32x32 greyscale input
    convolution2dLayer(5,6,'Padding','same','Name','conv_1')   % 6 filters of size 5x5
    averagePooling2dLayer(2,'Stride',2,'Name','avgpool_1')     % 2x2 average pooling, stride 2
    convolution2dLayer(5,16,'Padding','same','Name','conv_2')  % 16 filters of size 5x5
    averagePooling2dLayer(2,'Stride',2,'Name','avgpool_2')
    fullyConnectedLayer(120,'Name','fc_1')                     % fully connected, 120 neurons
    fullyConnectedLayer(84,'Name','fc_2')                      % fully connected, 84 neurons
    fullyConnectedLayer(10,'Name','fc_3')                      % one output per digit class (0-9)
    softmaxLayer('Name','softmax')                             % converts scores to probabilities
    classificationLayer('Name','output')];                     % cross-entropy loss
3. Configure training options using the trainingOptions function. Set parameters such as the
optimisation algorithm, learning rate, batch size, and number of epochs.
In this block, stochastic gradient descent with momentum (sgdm) is used as the optimisation
algorithm, with an initial learning rate of 0.0001 and a maximum of 10 epochs. The training
data will be shuffled at the start of each epoch, and the progress of training will be displayed
as a plot.
% Specify the training options
options = trainingOptions('sgdm', ...
 'InitialLearnRate',0.0001, ...
 'MaxEpochs',10, ...
 'Shuffle','every-epoch', ...
 'ValidationData',imdsValidation, ...
 'ValidationFrequency',30, ...
 'Verbose',false, ...
 'Plots','training-progress');
4. Train the CNN using the trainNetwork function, passing in the image data, the layer definitions,
and the training options. Please keep in mind that it may require some time to train your model.
% Train the network
net = trainNetwork(imdsTrain,layers,options);
CNN Performance Evaluation
5. After the network is trained, it is used to classify the images in the validation dataset via the
classify function. The accuracy of the network on the validation images is then calculated and
printed: the proportion of images that were correctly classified, i.e. the number of correctly
classified images divided by the total number of images in the validation set.
% Classify validation images and compute accuracy
YPred = classify(net,imdsValidation);
YValidation = imdsValidation.Labels;
accuracy = sum(YPred == YValidation)/numel(YValidation);
fprintf('Accuracy of the network on the validation images: %f\n', accuracy);
VI. Task 3: Improving the Performance of the CNN Algorithm
Guidance on several methods to improve the image classification accuracy:
1. Regularisation Techniques: In your quest to enhance the CNN accuracy, a key
strategy involves regularisation methods such as L1 and L2 norm regularisation.
For an understanding of L1 and L2 norm regularisation, please refer to [Tewari, 2021]
and [Bilogour et al, 2023] from the Kaggle tutorial. The L1 and L2 norm
regularisation methods are instrumental in overcoming overfitting. You can
implement L1 and L2 regularisation in MATLAB by adjusting the WeightL1Factor and
WeightL2Factor properties within the fullyConnectedLayer:
fullyConnectedLayer(120,'Name','fc_1', 'WeightL1Factor', 0.001, 'WeightL2Factor', 0.001)
Additionally, dropout is another valuable regularisation method for reducing
overfitting in neural networks. It can be integrated using the dropoutLayer function,
further improving the model's robustness. For your report, present results with two
regularisation methods, e.g. L1 or L2 norm regularisation, dropout, or combinations
of them.
2. Activation Functions: Another way to improve the CNN accuracy and efficiency is
with different activation functions such as ReLU, Sigmoid, or Tanh. These functions
play a pivotal role in enabling neural networks to capture complex data patterns and
nonlinear relationships. This can be achieved by integrating functions such as
reluLayer for ReLU activation. For your report, present results with two activation
functions.
3. CNN Architecture Exploration: Delving into various CNN architectures is a critical
step towards optimising your network’s performance. You can explore modifications
such as altering the number of convolutional or fully connected layers, or
experimenting with different types of pooling operations, such as max pooling and
average pooling. For a more advanced exploration, you might consider studying
complex architectures such as AlexNet (Krizhevsky et al., 2012). Please keep in
mind that increasing the model’s complexity may reduce computational efficiency.
The dropoutLayer function adds a dropout layer to your model.
4. Hyperparameter Tuning: The fine-tuning of hyperparameters, including the
learning rate and the number of epochs, is an essential aspect of enhancing CNN
performance. Adjusting these parameters can lead to significant improvements in
training efficacy and model accuracy. However, it is crucial to be aware of the
potential for overfitting, particularly when increasing the number of epochs. To
counteract this effect, you might need to incorporate regularisation methods, which
can help in maintaining a model that generalises well to new data. The process of
hyperparameter tuning is iterative and requires careful observation and analysis to
identify the optimal configuration for your specific model and dataset. For your
report, present results with two sets of hyperparameters: the one that gives the best
results and another that provides lower accuracy and efficiency.
The methods outlined above are recommended and could also be combined for potentially
enhanced outcomes. Furthermore, you are encouraged to experiment with other innovative
approaches to refine your CNN's accuracy. It is essential to conduct a thorough analysis of
your results, delving into the implications and effectiveness of each method employed, to
gain deeper insights into your model's performance.
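For instance, a combined variant might look like the sketch below: 3x3 kernels, max pooling, ReLU activations, L2 regularisation and dropout, trained with a higher learning rate. All values are illustrative and untuned; the sketch only shows where each modification plugs in, reusing imdsTrain and imdsValidation from the earlier steps.
layers2 = [
    imageInputLayer([32 32 1],'Name','input')
    convolution2dLayer(3,6,'Padding','same','Name','conv_1')       % 3x3 kernels instead of 5x5
    reluLayer('Name','relu_1')                                     % ReLU activation
    maxPooling2dLayer(2,'Stride',2,'Name','maxpool_1')             % max instead of average pooling
    convolution2dLayer(3,16,'Padding','same','Name','conv_2')
    reluLayer('Name','relu_2')
    maxPooling2dLayer(2,'Stride',2,'Name','maxpool_2')
    fullyConnectedLayer(120,'Name','fc_1','WeightL2Factor',0.001)  % L2 regularisation
    reluLayer('Name','relu_fc1')
    dropoutLayer(0.5,'Name','drop_1')                              % 50% dropout
    fullyConnectedLayer(84,'Name','fc_2')
    fullyConnectedLayer(10,'Name','fc_3')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','output')];
options2 = trainingOptions('sgdm', ...
    'InitialLearnRate',0.001, ...                                  % higher than the baseline 0.0001
    'MaxEpochs',15, ...
    'Shuffle','every-epoch', ...
    'ValidationData',imdsValidation, ...
    'Verbose',false, ...
    'Plots','training-progress');
net2 = trainNetwork(imdsTrain, layers2, options2);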
Please complete your report with the following tasks:
1. Provide your results of the accuracy and the analysis in your report.
2. Calculate the Precision, Recall, and the F1 score of your classification results.
3. Can you improve the results? Please explain how you improved the accuracy and
analyse the results in detail.
References
[Alzubaidi et al., 2021] Alzubaidi, L., Zhang, J., Humaidi, A.J. et al. Review of Deep
Learning: Concepts, CNN Architectures, Challenges, Applications, Future
Directions, Journal of Big Data, Vol. 8, No.53, 2021.
[Annika et al., 2021] R. Annika, E. Matthias, T. Minu D, S. Carole H, R. Tim, A. Michela, A.
Tal, B. Spyridon, C. M Jorge, C. Veronika, et al. Common Limitations of Image
Processing Metrics: A Picture Story. arXiv preprint arXiv:2104.05642, 2021.
[Digit Data, 2023] Data Sets for Deep Learning, Available on: Link, Mathworks, Visited 1
Dec. 2023.
[Bilogour et al, 2023] A. Bilogur, L1 Norms versus L2 Norms, Kaggle, Link, Visited 1 Dec.
2023.
[Lecun et al., 1998] Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, Gradient-based Learning
Applied to Document Recognition, Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-
2324, Nov. 1998.
[Sokolova and Lapalme, 2009] M. Sokolova and G. Lapalme, A Systematic Analysis of
Performance Measures for Classification Tasks, Information Processing &
Management, vol. 45, no. 4, pp. 427-437, 2009.
[Krizhevsky et al., 2012] A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet
Classification with Deep Convolutional Neural Networks, in Advances in Neural
Information Processing Systems, vol. 25, 2012.
[Tewari, 2021] U. Tewari, Regularization – Understanding L1 and L2 Regularization in Deep
Learning, Nov. 2021, Available on: Link, Visited 1 Dec. 2023.
[de Boer et al., 2005] P.T. de Boer, D.P. Kroese, S. Mannor, et al., A Tutorial on the Cross-Entropy
Method, Annals of Operations Research, vol. 134, pp. 19-67, 2005, Available on: Link,
Visited 4 Dec. 2023.