


Coursework for ACS61012 “Machine Vision”
The purpose of the lab sessions is to give you both theoretical and practical skills in machine vision and especially in image enhancement, image understanding and video processing. Machine vision is essential for a number of areas - autonomous systems, including robotics, Unmanned Aerial Vehicles (UAVs), intelligent transportation systems, medical diagnostics, surveillance, augmented reality and virtual reality systems.
The first labs focus on performing operations on images such as reading, writing, calculating image histograms, flipping images, and extracting the important colour and edge image features. You will become familiar with how to use these features for the purposes of object segmentation (separation of static and moving objects) and for the subsequent high-level tasks of stereo vision, object detection, classification, tracking and behaviour analysis. These are inherent steps of semi-supervised and unsupervised systems, where the involvement of human operators is reduced to a minimum or excluded.
Required for Each Subtask
Task 1: Introduction to Machine Vision
For the report from Task 1, you need to present results with:
From Lab Session 1 – Part I
● The Red, Green, Blue (RGB) image histogram of your own picture and an analysis of the histogram. Several pictures are provided, if you wish to use one of them. Alternatively, you could work with a picture of your choice. The original picture needs to be shown as well. Please discuss the results. For instance, what are the differences between the histograms? What do we learn from the visualised red, green and blue components of the image histogram?
Files: Lab 1 - Part I - Introduction to Images and Videos.zip and Images.zip. You can work with one of the provided images from Images.zip or with your own image.
From Lab Session 1 – Part II
● Results with different edge detection algorithms, e.g. Sobel and Prewitt, with comments on their accuracy with different parameters (the threshold, and different types of noise especially). Include the visualisation and your conclusions about static objects segmentation using edge detection (steps 9-11 with the Sobel, Canny and Prewitt operators) in your report. Visualise the results and draw conclusions.
[8 marks equally distributed between part I and part II]
Task 2: Optical Flow Estimation Algorithm
For the report, you need to:
● Find corner points and apply the optical flow estimation algorithm (use file Lab 2.zip – image Gingerbread Man). Present results for the ‘Gingerbread Man’ tasks and visualise the results.
[4 marks]
● Track a single point with the optical flow approach (file: Lab 2.zip – the red square image). Visualise the trajectory on the last frame and the ground truth track of ‘Red Square’ tasks.

● Compute and visualise the root mean square error of the trajectory estimated over the video frames by the optical flow algorithm. Compare the estimates with the exact coordinates given in the file called groundtruth. You need to include the results only with one corner. Give the equation for the root-mean square error. Analyse the results and make conclusions about the accuracy of the method based on the root mean square error.
[8 marks]
Task 3: Automatic Detection of Moving Objects in a Sequence of Video Frames
You are designing algorithms for automatic vehicular traffic surveillance. As part of this task, you need to apply two types of approaches: the basic frame differencing approach and the Gaussian mixture approach to detect moving objects.
Part I: with the frame differencing approach
● Apply the frame differencing approach (Lab 3.zip file)
For the report, you need to present:
● Image results of the accomplished tasks
● Analyse the algorithm's performance when you vary the detection threshold.
[5 marks]
Part II: with the Gaussian mixture approach
For the report, you need to present:
● Results for the algorithm performance when you vary parameters such as number of Gaussian components, initialisation parameters and the threshold for decision making
● Detection results of the moving objects, show snapshots of images.
● Analyse all results – how does the change of the threshold and the number of Gaussian components affect the detection of the moving objects?
[5 marks]
Task 4: Robot Treasure Hunting
A robot is given a task to search and find “treasures” in imagery data. There are three tasks: easy, medium and difficult. The starting point of the robot search is where the red arrow is. For the medium case the blue fish is the only treasure; for the difficult case the clove and sun are
“treasures” that need to be found. Ideally, one algorithm needs to be able to find the “treasures” from all images, although a solution with separate algorithms is acceptable. For Task 4, in the report, you need to present results with:
● The three different images (easy, medium and difficult) showing the path of finding “the treasure”.
● Include the intermediate steps of your results in your report, e.g. of the binarisation of the images and the value of the threshold that you found or any other algorithm that you propose for the solution of the tasks.
● Explain your solution, present your algorithms and the related MATLAB code.
● Include a brief description of the main idea of your functions in your report and the actual code of the functions in an Appendix of your report.
In the guidance for the labs, one possible solution is discussed, but others are available. Creativity is encouraged in this task, and different solutions are welcome.
Here 8 marks are given for the easy task, 10 for the medium task and 12 for the most difficult task.
[30 marks]
Task 5. Image Classification with a Convolutional Neural Network
1. Provide your classification results with the CNN, demonstrating its accuracy, and analyse them in your report.
[2 marks]
2. Calculate the Precision, Recall, and the F1 score functions characterising further the CNN performance.
[6 marks]
3. Improve the CNN classification results. Please explain how you have achieved the improvements.
[12 marks]
4. Discuss ethics aspects in Computer Vision tasks such as image classification, detection and segmentation. Consider ethics in broad terms – what are the positives when ethics is considered? What ethical challenges arise and how could they be reduced and mitigated? In your answer you need to include aspects of Equality, Diversity and Inclusion (EDI).
[10 marks]
Finally, the quality of writing and presentation style are assessed. These include the clarity, conciseness, structure, logical flow, figures, tables, and the use of references.
[10 marks]

 Guidance on the Course Work Submission
You need to submit your report and the code that you have written to accomplish the tasks.
Report and Code Submission
There are two submission links on Blackboard: 1) for your course work report in a pdf format and 2) for the requested code in a zipped file.
A Well-written Report Contains:
● A title page, including your ID number, course name, etc., followed by a content page.
● The main part: description of the tasks and how they are performed, including results from all subtasks. For instance: “This report presents results on reading and writing images in MATLAB. Next, the study of different edge detection algorithms is presented and their sensitivity to different parameters...” You are requested to present in Appendices the MATLAB code that you have written to obtain these results. A very important part of your report is the analysis of the results. For instance, what does the image histogram tell you? How can you characterise the results? Are they accurate? Is there a lot of noise?
● Conclusions describe briefly what has been done, with a summary of the main results.
● Appendix: Present and describe briefly in an Appendix the code only for tasks 2-5. Add comments to your code to make it understandable. Provide the full code as one compressed file, in the separate submission link given for it.
● Cite all references and materials used. Adding references demonstrates additional independent study. Write in your own style and words to minimise and avoid similarities. Every student needs to write their own independent report.
● Please name the files with your report and code for the submission on Blackboard by adding your ID card registration number, e.g. CW_Report_1101133888 and CW_Code_1101133888.
The advisable maximum number of words is 4000.
Submission Deadline: Week 10 of the spring semester, Sunday midnight

Guidance to Accomplish the Tasks
 Lab Session 1 - Part I: Introduction to Image Processing
In this lab you will learn how to perform basic operations on images of different types, e.g. how to read them, convert them from one format to another, calculate image histograms and analyse them.
Background Knowledge
A digital image is composed of pixels which can be thought of as small dots on the screen. We know that all numeric calculations in MATLAB are performed using double (64-bit) floating-point numbers, so this is also a frequent data class encountered in image processing. Some of the most common formats used in image processing are presented in Tables 1 and 2 given below.
Most MATLAB functions work with double arrays. To reduce memory requirements, MATLAB supports storing image data in arrays of class uint8 and uint16. The data in these arrays is stored as 8-bit or 16-bit unsigned integers. Such arrays require, respectively, one-eighth or one-fourth as much memory as data in double arrays.
 Table 1. Data classes and their ranges
Most of the mathematical operations are not supported for the types uint8 and uint16. It is therefore required to convert to double for operations and back to uint8/uint16 for storage, display and printing.
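As an illustration of this convert-compute-convert pattern, here is a minimal sketch (assuming the Dog.jpg image used later in this lab; im2double and im2uint8 handle the scaling between classes):

    I8 = imread('Dog.jpg'); % uint8 image, values 0..255
    Id = im2double(I8);     % double image, values scaled to [0, 1]
    Id = Id * 0.5;          % arithmetic is safe in the double class
    I8b = im2uint8(Id);     % back to uint8 for storage and display
    imshow(I8b)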
 Table 2. Numeric formats used in image processing
Image Types
I. Intensity Image (Grey Scale Image)
This form represents an image as a matrix where every element has a value corresponding to how bright/ dark the pixel at the corresponding position should be coloured. There are two ways to represent the brightness of the pixel:

1. The double class (or data type) format. This assigns a floating-point number ("a number with decimals") in the range -10^308 to +10^308 for each pixel. Values of the scaled class double are in the range [0, 1]. The value 0 corresponds to black and the value 1 corresponds to white.
2. The other class uint8 assigns an integer between 0 and 255 to represent the intensity of a pixel. The value 0 corresponds to black and 255 to white. The class uint8 only requires roughly 1/8 of the storage compared to the class double. However, many mathematical functions can only be applied to the double class.
II. Binary Image
The binary image format also stores an image as a matrix but can colour a pixel only as black or white (and nothing in between): 0 is for black and 1 is for white.
III. Indexed Image
This is a practical way of representing colour images. An indexed image stores an image as two arrays. The first array has the same size as the image and one number for each pixel. The second array (matrix) is called colour map and its size may be different from the image size. The numbers in the first matrix represent an instruction of what number to use in the colour map matrix.
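A minimal sketch, assuming the example image trees.tif shipped with MATLAB (an indexed image) is available:

    [X, map] = imread('trees.tif'); % X holds the indices, map is the colour map
    imshow(X, map);                 % the colour map translates each index into an RGB colour
    size(map)                       % the colour map is an n x 3 matrix of RGB values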
IV. RGB Image
This format represents an image with three matrices of sizes matching the image format. Each matrix corresponds to one of the colours red, green or blue and gives an instruction of how much of each of these colours a certain pixel should use. Colours are always represented with non-negative numbers.
Guidance on Performing Lab Session 1 – Part I
Demos in MATLAB
>> demo MATLAB % Opens a window from which you can select a demo for different tools
Workspace and saving results
To see the variables in the workspace: who, whos
To clear the variables in the workspace: clear
To save the variables in the workspace: save name_of_a_file.mat
To load the data/image from a file: load name_of_a_file.mat
Examples of Reading images in MATLAB
>> clear all              % Clears the workspace in MATLAB
>> I = imread('Dog.jpg'); % Reads the image file
>> size(I)                % Gives the size of the image
>> imshow(I);             % Visualises the image
>> Ig = rgb2gray(I);      % Converts a colour image into a grey level image
>> imshow(Ig)             % Visualises the grey image
1. The first line clears all variables from the workspace
2. The second line reads the image file into a 3-dimensional array (x, y, colour). MATLAB can read many image file formats, so you do not have to worry about the details
3. Next, we get information about the size of the image
4. Visualise the colour image
5. This line converts an RGB image into a grey image. This is not necessary if the image is already a grey level image.
6. Visualise the grey image

Writing Images in MATLAB
Images are written to disk using function imwrite, which has the following basic syntax:
imwrite(I, 'filename')
The string in filename must include a recognised file format extension (tiff, jpeg, gif, bmp, png or xwd).
>> imwrite(I, 'Dog1.jpg'); % Writes the image I to the file Dog1.jpg
Next, you can check the information about the graphics file, by using imfinfo.
Type: imfinfo Dog.jpg
Use the commands whos and ls to list the variables in the workspace and the files in the current folder.
Changing the Image Brightness
Change the brightness of your image by adding a constant value to all pixel values, or, respectively, by subtracting a constant value from all pixel values. Note that uint8 arithmetic saturates: results below 0 are clipped to 0 and results above 255 are clipped to 255. For instance:
>> I_b = I - 100;
>> figure, imshow(I_b)
>> I_s = I + 100;
>> figure, imshow(I_s)
Flipping an Image
Apply flipLtRt.m function (provided) to your image to flip an image. Visualise the results.
Detection of an Area of a Predefined Colour
Change the colour of the white pixels of an image to yellow on the image 'duckMallardDrake.jpg':
   % Colour the duck yellow!
   im = imread('duckMallardDrake.jpg');
   imshow(im);
   [nr, nc, np] = size(im);
   newIm = zeros(nr, nc, np);
   newIm = uint8(newIm);
   for r = 1:nr
       for c = 1:nc
           if ( im(r,c,1)>180 && im(r,c,2)>180 && im(r,c,3)>180 )
               % white feather of the duck; now change it to yellow
               newIm(r,c,1) = 225;
               newIm(r,c,2) = 225;
               newIm(r,c,3) = 0;
           else
               % the rest of the picture; no change
               for p = 1:np
                   newIm(r,c,p) = im(r,c,p);
               end
           end
       end
   end
   figure
   imshow(newIm)

Another example on finding an area of a predefined colour: find the indexes of the pixels with the yellow colour on the image ‘Two_colour.jpg’.

im = imread('Two_colour.jpg'); % read the image
imshow(im);
% extract RGB channels separately
red_channel = im(:, :, 1);
green_channel = im(:, :, 2);
blue_channel = im(:, :, 3);
% label pixels of yellow colour
yellow_map = green_channel > 150 & red_channel > 150 & blue_channel < 50;
% extract pixel indexes
[i_yellow, j_yellow] = find(yellow_map > 0);

Visualise the results. Note that the plot and scatter commands work with spatial coordinates.

% visualise the results
figure;
imshow(im); % plot the image
hold on;
scatter(j_yellow, i_yellow, 5, 'filled') % highlight the yellow pixels
Conversion between Different Formats
1. Select your own image.
2. Read a colour image (imread command). Convert your RGB colour image to grey and then to HSV format (rgb2gray and rgb2hsv commands, respectively).
3. Convert your RGB image into a binary format (im2bw command) and visualise the result. Use at least 3 more operations converting images from one format to another. This part is not required for the report, as mentioned in the assessment criteria section.
The conversion to a binary image is called binarisation. Binarisation is based on applying a threshold on the image intensity, and the process is called thresholding. The output binary image has values of 0 (black) for all pixels in the input image with luminance less than the threshold level and 1 (white) for all other pixels.
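As a minimal sketch of what thresholding does internally (assuming a greyscale uint8 image and an illustrative threshold of 128):

    Ig = rgb2gray(imread('Dog.jpg')); % greyscale uint8 image, values 0..255
    bw = Ig > 128;                    % logical image: 1 (white) above the threshold, 0 (black) below
    figure, imshow(bw)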
Understanding Image Histogram
1. Experiment with a grey scale image, calculate the histogram and visualise it. There are various ways to plot an image histogram:
1. imhist, 2. bar 3. stem 4. plot.
Show results with them. What could you say about the dominating colours of the objects/ images from the histograms?
Example Code:
clear all
I = imread('image.jpg');
Im_grey = rgb2gray(I);
figure, imhist(Im_grey);
xlabel('Number of bins (256 by default for a greyscale image)')
ylabel('Histogram counts')
You can use the bar function to plot the image histogram, in the following way:

    h = imhist(Im_grey);
    h1 = h(1:10:256);
    horz = 1:10:256;
    figure, bar(horz,h1)
See the difference compared with what the plot() function will give you:
figure, plot(h)
2. Calculate and visualise the histogram of an RGB image
In MATLAB you can only use the built-in hist function on one channel at a time. One way to display the histogram of an image is to convert it into a grayscale format with rgb2gray and apply the imhist function. Another approach is to work with the RGB image in the following way. First, we convert the image into double format, and then we can calculate the histogram for each channel:
    r= double(I(:,:,1));
    g = double(I(:,:,2));
    b = double(I(:,:,3));
    figure, hist(r(:),124)
    title('Histogram of the red colour')
    figure, hist(g(:),124)
    title('Histogram of the green colour')
    figure, hist(b(:),124)
    title('Histogram of the blue colour')
Now repeat the binarisation process after choosing the threshold value appropriately, based on the histogram that you observe. This threshold value must be normalised to the range [0, 1] to be used with the function im2bw.
Example: If we choose the median value 128 of the full range [0, 255] as the threshold, then you can perform binarisation of image I with the function:
    ImBinary=im2bw(I,128/255);
Vary the threshold and comment on the results.
3. Calculate and visualise the histogram of an HSV image
For an HSV histogram you can use the same recommendation as for an RGB histogram, given above. Another way of calculating the histogram in the HSV space is given below.
    % Display the original image.
    subplot(2, 4, 1);
    imshow(rgbImage, [ ]);
    title('Original RGB image');
    % Convert to HSV color space
    hsvimage = rgb2hsv(rgbImage);
    % Extract out the individual channels.
    hueImage = hsvimage(:,:,1);
    satImage = hsvimage(:,:,2);
    valueImage = hsvimage(:,:,3);
    % Display the individual channels.
    subplot(2, 4, 2);
    imshow(hueImage, [ ]);

   title('Hue Image');
   subplot(2, 4, 3);
   imshow(satImage, [ ]);
   title('Saturation Image');
   subplot(2, 4, 4);
   imshow(valueImage, [ ]);
   title('Value Image');
   % Take histograms
    [hCount, hValues] = imhist(hueImage(:), 18);
    [sCount, sValues] = imhist(satImage(:), 3);
    [vCount, vValues] = imhist(valueImage(:), 3);
   % Plot histograms.
   subplot(2, 4, 5);
   bar(hValues, hCount);
   title('Hue Histogram');
   subplot(2, 4, 6);
   bar(sValues, sCount);
   title('Saturation Histogram');
   subplot(2, 4, 7);
   bar(vValues, vCount);
   title('Value Histogram');
   % Alert user that we're done.
message = sprintf('Done processing this image.\nMaximize and check out the figure window.');
msgbox(message);
Include the results of understanding the RGB image histogram in your report.
Understanding image histogram – the difference between one-colour and two-colour images
An image histogram is a good tool for image understanding. For example, image histograms can be used to distinguish a one-colour image (or an object in the image) from a two-colour image (or an object in the image):
1. Read ‘One_colour.jpg’ and ‘Two_colour.jpg’ (with imread);
2. Convert both images into the greyscale format (with rgb2gray);
3. Calculate and visualise the histograms for both images (with imhist);
What are the differences between these colour histograms? What do we learn from the visualised red, green and blue components of the image histogram?

Lab Session 1 - Part II: Edge Detection and Segmentation of Static Objects
In this practical session, you will continue to study basic image processing techniques. You will enhance the contrast of images and perform different operations on them. You will learn how to model different types of noise in images and how to remove the noise from an image. You will also learn approaches for edge detection and static objects segmentation.
Guidance on Performing Lab Session 1 – Part II
1. Read a preliminarily chosen image ‘Image.gif’ (with imread);
Contrast Enhancement
2. Compute an image histogram for the image (imhist). Visualise the results. Analysing the histogram, think about the best way of enhancing the image; recall the methods from the lectures;
3. Apply the histogram equalisation operation to the image (histeq). Visualise the results. Compute an image histogram for the corrected image. Visualise the results. Compare it with the original histogram. Does this method of enhancement actually enhance image quality?
4. Apply the gamma correction of the histogram to the image (imadjust). Visualise the results. Experiment with different values for gamma and find the optimal one. Compute the image histogram to the corrected image. Visualise the results. Compare the histogram and the image with the original ones and the results of the histogram equalisation. Which method of enhancement performs better?
Images with Different Types of Noise and Image Denoising
5. Synthesise two images from the image ‘Image.gif’ with two types of noise – Gaussian and “salt and pepper” (imnoise). Visualise the results;
6. Apply the Gaussian filter to the image with Gaussian noise (imgaussfilt). Find the optimal filter parameter values. Visualise the results;
7. Apply the Gaussian filter to the image with salt and pepper noise (imgaussfilt), visualise and discuss the results;
8. Apply the median filter to the image with salt and pepper noise (medfilt2). Find the optimal filter parameter values. Visualise the results;
Static Objects Segmentation by Edge Detection
9. Find edges on the image ‘Image.gif’ with the Sobel operator (edge(..., ‘sobel’, ...)). Vary the threshold parameter value and draw conclusions about its influence over the quality of the segmented image. Visualise the results with the optimal threshold value;
10. Repeat step 9 with the Canny operator (edge(..., ‘canny’, ...));
11. Repeat step 9 with the Prewitt operator (edge(..., ‘prewitt’, ...));
Include the resulting images with segmented objects and add conclusions about static objects segmentation using edge detection methods (from steps 9-11) in your report.

 Lab Session 2: Object Motion Detection & Tracking
This lab session is focused on motion detection and tracking in video sequences. You will apply the optical flow algorithm to object tracking by using corner points. The optical flow calculates the motion of image pixels from one frame to another.
You will apply the optical flow algorithm to the “interesting” corner points only since the numerical stability of the algorithm is guaranteed in these points only.
You need to find first the “interesting” points, and then apply an optical flow algorithm only to them.
Background Knowledge
Corner Points
In many applications of image and video processing it is easier to work with “features” (“characteristic points” or “local feature points”) rather than with all pixels of a frame. These “features” or “points” should differ from their neighbours in some area.
Corner points are an example of such features. A corner point is a point whose surrounding points differ from the surroundings of its neighbours. Figure 2.1 shows an example of three types of points: 1) a top corner point, 2) an edge point and 3) a point inside the object (internal point).
● The corner point is surrounded with the solid line square and its neighbour point is surrounded by the dotted square. The corner point and its neighbour point have different surrounding areas.
● For the edge point its surrounding is the same as the surroundings of its neighbour point in one direction and it is different in any other direction.
● The internal point is surrounded by the same neighbourhood as all other near points around it.
Figure 2.1. Illustration of the difference between corner, edge and internal points of an object. Please note that the analysed points are surrounded with a square and the dotted square indicates the area around neighbour points.
One of the most popular methods for detecting corner points is the Harris corner detector. It is used by default in the MATLAB function corner.
The Optical Flow Approach
An optical flow can be represented as a vector field of apparent pixel motion between frames. Optical flow estimation is one of the widely used methods for motion detection in robotics and computer vision. Given two images I1 and I2, optical flow estimation algorithms can find the vector field

{(u(i,j), v(i,j)) : i = 1, ..., N, j = 1, ..., M},

where [N, M] is the image size. The vector field contains displacement vectors for each pixel. Pixel (x, y) from the image I1 will have location (x + u(x,y), y + v(x,y)) in the image I2.
There are many different methods for optical flow estimation. The Lucas-Kanade algorithm is one of the most popular algorithms. This lab considers only the Lucas-Kanade algorithm. It has the following assumptions:
1. Brightness (colour) consistency. It means that pixels do not change their colour between frames.
2. Spatial similarity. It means that neighbours of each pixel have similar motion vectors.
3. Small displacement. This means that the displacement or motion vectors are small and a Taylor series expansion can be applied.
With these assumptions in place, the calculation of the optical flow reduces to solving an overdetermined linear system. This is done by the least squares method. The conditions of the overdetermined linear system solution lead to the Lucas-Kanade algorithm. You will apply the Lucas-Kanade algorithm to the “interesting” (“feature”) points only.
Tracking with the optical flow
Object tracking is the process of object localisation and association of its location on the current frame with the previous ones, building a trajectory for each object.
Optical flow estimation algorithms provide a tool to calculate a displacement vector from one frame to another. This information can be used for tracking purposes. Indeed, if we determine the point of interest in the first frame, we can compute a displacement vector for it for every successive frame, using an optical flow estimation algorithm. The combination of the positions of the points, computed by displacement vectors constitutes the trajectory of this point.
If we want to track a non-point object, we can find “interesting” points on the object, track them and use the median position of the “interesting” points as the position of the object. Since optical flow estimation algorithms are not perfect and can lose tracking points, one should reinitialise the “interesting” points from time to time. At any time instant, the introduced “interesting” points should satisfy the following constraints:
● A point should not be far from the current median position of the object – it has to be inside the current bounding box;
● A point should be on the object – in your task you will use colour for this constraint;
● Each pair of tracking points has to differ from each other – if two points are too close to each other, one of them will be deleted.
As the result, we have the following algorithm:
1. Build a colour template of the object in the first frame.
2. If necessary (in your object detection task) read the next frame.
3. Detect “interesting” points of the object in the current frame. Make sure they satisfy all the constraints mentioned above.
4. Initialise tracks with detected and filtered “interesting” points.
5. Compute an optical flow for every “interesting” point between successive frames.
6. Compute new positions of the tracks by adding the optical flow vectors to the current positions in the tracks.
7. Make sure the new positions of the tracks satisfy the second and third constraints mentioned above. If not, delete those tracks.
8. Compute the median position of the new positions of the tracks. Move the bounding box to the new median position.
9. Make sure the new positions of the tracks are inside the bounding box. If not, delete those tracks.
10. Repeat steps 5-9. Introduce new “interesting” points of the object every k frames. It is recommended to use k = 5.
Optical Flow Estimation and Visualisation with MATLAB
MATLAB provides an object for optical flow estimation – opticalFlowLK (http://uk.mathworks.com/help/vision/ref/opticalflowlk-class.html)
To estimate an optical flow you will use the command estimateFlow (http://uk.mathworks.com/help/vision/ref/opticalflowlk.estimateflow.html).
videoReader = VideoReader('...');
frameRGB = readFrame(videoReader);
frameGrey = rgb2gray(frameRGB);
opticFlow = opticalFlowLK('NoiseThreshold', 0.009);
flow = estimateFlow(opticFlow, frameGrey);
You can use the following fields of the flow object:
● flow.Vx – the horizontal component of the velocity. size(flow.Vx) == size(frameGrey). flow.Vx(i, j) is the horizontal component of the velocity of the pixel (i, j).
● flow.Vy – the vertical component of the velocity. size(flow.Vy) == size(frameGrey). flow.Vy(i, j) is the vertical component of the velocity of the pixel (i, j).
You need the Computer Vision System toolbox from MATLAB.
For visualisation of the optical flow there are several options:
1. with the command plot (http://uk.mathworks.com/help/vision/ref/opticalflow.plot.html)
2. with the command quiver(u, -v, 0), where u, v are the horizontal and vertical displacements, respectively. Note that it may take some time to visualise the results in your figure.
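For instance, assuming frameRGB and flow from the snippet above, the plot method can overlay the estimated flow vectors on the frame:

    imshow(frameRGB)
    hold on
    plot(flow, 'DecimationFactor', [5 5], 'ScaleFactor', 10) % thin out and enlarge the vectors
    hold off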
*Moving a bounding box to a new position – help for the provided function
In the object tracking task you could move a bounding box around an object to a new position between frames. The function ShiftBbox could help perform this task.
The function ShiftBbox has two input arguments:
● input_bbox – the current bounding box, a 1 x 4 vector. input_bbox(1:2) are the spatial coordinates of the left top corner of the bounding box, input_bbox(3) is the horizontal size of the bounding box, input_bbox(4) is the vertical size of the bounding box;
● new_center – the new position of the centre of the bounding box in spatial coordinates
The function ShiftBbox has one output:

● shifted_bbox – the updated bounding box in the same format as the input_bbox argument. The centre of the updated bounding box is equal to the new_center input parameter
Guidance for Performing Lab Session on Optical Flow
1. You can find corner points (with the corner MATLAB function) on the images ‘red_square_static.jpg’ and ‘GingerBreadMan_first.jpg’. Note that the corner function works with greyscale images. You need to convert the input images to the greyscale format first. Next, you can apply the function with different maximum numbers of corners. Include the resulting images in your report. You need to show the results only for one value of the maximum number of corners.
2. Find the optical flow of the pixels which moved from the image ‘GingerBreadMan_first.jpg’ to the image ‘GingerBreadMan_second.jpg’ (opticalFlowLK, estimateFlow). Note that the estimateFlow function works with greyscale images. You need to convert the input images to the greyscale format. Include the visualisation of the calculated optical flow by any of the provided methods in your report.
3. Perform tracking of a single point using the optical flow algorithm in the video ‘red_square_video.mp4’:
a. Create a video reader object to read the ‘red_square_video.mp4’ video (VideoReader);
b. Create an optical flow object (opticalFlowLK);
c. Read the first frame (readFrame);
d. Find the left top point of the red square on the first frame (manually; you can use the corner command to help);
e. Add the position of this point as the first position in the track;
f. Run the function estimateFlow with the first frame to initialise the optical flow object;
g. Read the next frame (readFrame);
h. We know that Lucas-Kanade optical flow estimation works well only for “interesting” points. The estimateFlow function works with the current frame in comparison with the previous one. It means that we should use the “interesting” point from the current frame and not the point from the previous frame, which you detected in step d. This is the reason why we should find the nearest corner point to the position of the point of interest from frame 1 and calculate an optical flow for it. Find corner points (corner) in frame 2;
i. Find the nearest corner point to your first position from the track;
j. Compute an optical flow (with the estimateFlow command) for this point (between frames 1 and 2);
k. Compute a new position of the point by adding the found velocity vector to the current position:
l. x_new = corner_x + flow.Vx(round(corner_y), round(corner_x));
   y_new = corner_y + flow.Vy(round(corner_y), round(corner_x));
   where corner_x and corner_y denote the coordinates of the nearest corner, and flow is the optical flow object, the output of the estimateFlow function;
m. Add the new position of the point as the second position in the track;
n. Read the next frame (readFrame);
o. As optical flow estimation is not perfect, your new point can differ from the actual corner. We also know that the Lucas-Kanade optical flow estimation algorithm works well only for “interesting” points. Hence, we should find the nearest corner point to our estimated position of the point of interest and calculate an optical flow for it. Find corner points (with the corner function) in frame 3;
p. Find the nearest corner point to your second position from the track;
q. Compute an optical flow (with estimateFlow) for this nearest point (between frames 2 and 3);
r. Compute a new position of the point by adding the found velocity vector to the current position;
s. Add the new position of the corner as the third position in the track;
t. Read the next frame (readFrame);
u. Find corner points (corner) in frame 4;
v. Find the nearest corner point to your third position from the track;
w. Compute an optical flow (estimateFlow) for this nearest point (between frames 3 and 4), and so on. A compact sketch of this loop is given below.
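For orientation only, here is a minimal sketch of steps a-w as a single loop; the starting point value is illustrative and in your code should come from step d:

    videoReader = VideoReader('red_square_video.mp4');
    opticFlow = opticalFlowLK('NoiseThreshold', 0.009);
    frame = rgb2gray(readFrame(videoReader));
    estimateFlow(opticFlow, frame);            % initialise the flow object with frame 1
    track = [10, 10];                          % left top corner of the red square (illustrative value)
    while hasFrame(videoReader)
        frame = rgb2gray(readFrame(videoReader));
        corners = corner(frame);               % corner points in spatial (x, y) coordinates
        flow = estimateFlow(opticFlow, frame); % flow between the previous and current frames
        % find the corner nearest to the last track position
        d = sum((corners - track(end, :)).^2, 2);
        [~, idx] = min(d);
        cx = corners(idx, 1);
        cy = corners(idx, 2);
        % advance the point by its optical flow vector (Vx/Vy are indexed by row, column)
        track(end + 1, :) = [cx + flow.Vx(round(cy), round(cx)), ...
                             cy + flow.Vy(round(cy), round(cx))];
    end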
Visualise the track on the last frame of the video. Plot on the same figure the estimated trajectory and the ground truth trajectory (it is available in the file red_square_gt.mat, which contains the correct trajectory of the left top point of the red square in the variable gt_track_spatial; note that the ground truth trajectory is given in the spatial coordinate frame). The file new_red_square_gt.mat contains the ground truth formatted to match the dimensions of your estimates. Note that the ground truth data has one extra point at the end; you do not need to use it. When you save your results from the optical flow, you do not need to keep the first point, which comes from the algorithm initialisation. Then, with these two changes in mind, you can calculate the root mean square error of the estimated trajectory with respect to the ground truth over all video frames. Compute the error for each point of the trajectory. In your report, write the equation that you used to calculate the root mean square error.
Plot the results. Include the plot in your report. Please use the zoom-in functionality of the MATLAB figure in order to visualise the estimation errors well. Draw conclusions about the accuracy of the method.
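One common form of the root mean square error over N frames, with estimated positions (x_k, y_k) and ground truth positions (x_k^gt, y_k^gt), is RMSE = sqrt( (1/N) * sum_k [ (x_k - x_k^gt)^2 + (y_k - y_k^gt)^2 ] ). A minimal sketch of the computation, assuming est and gt are N x 2 matrices of (x, y) coordinates (the variable names are illustrative):

    err  = sqrt(sum((est - gt).^2, 2));       % Euclidean error for each frame
    rmse = sqrt(mean(sum((est - gt).^2, 2))); % root mean square error over all frames
    figure, plot(err), xlabel('Frame'), ylabel('Error (pixels)')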

 Lab Session 3: Background Subtraction
This lab session is focused on video processing, in particular on background subtraction methods for automatic object detection from static cameras. You will learn how the basic frame differencing algorithm for object detection works in a sequence of video frames, as provided by an optical video camera. You will compare the results with the Gaussian mixture model for background subtraction.
For this lab session you will need to have the Computer Vision System Toolbox in MATLAB.
Background Knowledge
Background Subtraction
As the name suggests, background subtraction is the process of separating out foreground objects (the moving objects) from the background (the static environment) in a sequence of video frames. The process can be performed off-line, but more commonly is needed in real time. Background subtraction is used in many emerging video applications, such as video surveillance, traffic monitoring, and gesture recognition for human-machine interfaces, to name a few.
Frame Differencing Approach
The frame differencing approach is the simplest form of background subtraction. It usually works on the video frames after converting them from colour to greyscale format. Hence, the first thing to do is to convert the video frames arriving from the camera in RGB or HSV format to a greyscale format. Next, the current greyscale frame is simply subtracted from the previous frame, and if the difference in pixel intensity values for a given pixel is greater than a pre-specified threshold Ts, the pixel is considered as being a part of the foreground.
The algorithm steps are listed below:
1. Convert the incoming frame 'fr' to greyscale (here we assume a colour RGB sensor).
2. Subtract the current frame from the background model 'bg_bw' (in this case it is just the previous frame).
3. For each pixel, if the difference between the current frame and the background 'fr_diff(j,k)' is greater than a threshold 'thresh', the pixel is considered part of the foreground.
Gaussian Mixture Model Approach
In the Gaussian mixture model approach one builds the model of a background. It is assumed that the intensity of the background pixels is changeable. The distribution of the intensity is modelled as the mixture of Gaussian distributions. All the components of the mixture are scored based on the component weight in the mixture and its variance (respectively standard deviation). The components with the bigger scores are labelled as background, and others are labelled as foreground.
This approach is implemented in MATLAB, in the Computer Vision System Toolbox. You can use the vision.ForegroundDetector function to perform foreground detection by the Gaussian mixture model approach.
Guidance for Performing Lab Session on Background Subtraction
Video Reading/ Writing
1. Create the object to read the video ‘car-tracking.mp4’:
source = VideoReader('car-tracking.mp4');
Note that VideoReader supports the following file extensions: “.avi”, “.mj2”, “.mp4” or “.m4v” and others;

2. Create the object to write the video into the file ‘results.mp4’:
output = VideoWriter('results.mp4', 'MPEG-4');
Note that VideoWriter supports the following file extensions: “.avi”, “.mj2”, “.mp4” or
“.m4v”;
3. Open the writer object in order to have an opportunity to write anything to the file:
    open(output);
4. Read a frame from the input file:
    frame = readFrame(source);
5. Visualise the frame:
    imshow(frame);
6. Write the frame to the output file:
    writeVideo(output, frame);
7. Make a loop to read and write frames. In order to read all frames you can check whether the reader object still has frames:
    while hasFrame(source)
        frame = readFrame(source);
        imshow(frame);
        writeVideo(output, frame);
end
8. To finalise the output video close the video writer object:
    close(output);
Frame Differencing Algorithm for Background Subtraction
One possible solution includes the following steps:
9. Open the script “Frame_difference.m”. You need to vary the threshold parameter in the frame differencing algorithm for background subtraction and draw conclusions. The following steps explain the commands in the script;
10. Create and open a video reader object to read the video ‘car-tracking.mp4’ (with VideoReader);
11. Set the threshold parameter value. Vary the threshold and comment on its influence over the detection process. Include your conclusions in your report; take snapshot video frames and show them in your report.
12. Read the first frame as a background (with readFrame) and convert it to the greyscale format (rgb2gray);
13. Write a loop on frames:
a. Read a new frame (readFrame);
b. Convert the new frame to the greyscale format (rgb2gray);
c. Compute the difference between the current frame and the background frame;
d. Create the frame with the foreground mask: where the difference is bigger than the threshold the output pixel is white, otherwise it is black;
e. Update the background frame with the current one;
f. Visualise the results;
g. Write the foreground frame to the output video;
14. Close the video writer object.
Gaussian Mixture Model Algorithm for Background Subtraction
15. Open the script “Gaussian_mixture_models.m”. You need to vary the number of initial frames used to learn the background model, the number of Gaussian components in the mixture of the Gaussian mixture model algorithm for background subtraction, and the other parameters of the function, and draw conclusions. The script is very similar to the previous one, so we highlight only the differences:

a. You will use the foreground detector object from the Computer Vision System toolbox in MATLAB – vision.ForegroundDetector();
b. You need to vary three parameters:
i. The number of initial frames for training a background model;
ii. The number of Gaussians in the mixture;
iii. The threshold for decision making.
c. To apply the foreground detector to a new frame, use the step function. It returns the foreground mask of the frame in the logical format.
Comment in your report on how the parameters influence the detection performance. Take snapshot video frames and show them in your report in order to support your conclusions.
MATLAB scripts

Frame_difference.m

clear all
close all
% read the video
source = VideoReader('car-tracking.mp4');
% create and open the object to write the results
output = VideoWriter('frame_difference_output.mp4', 'MPEG-4');
open(output);
thresh = 25; % a parameter to vary
% read the first frame of the video as a background model
bg = readFrame(source);
bg_bw = rgb2gray(bg); % convert background to greyscale
% --------------------- process frames ----------------------------------
% loop over all the frames
while hasFrame(source)
    fr = readFrame(source); % read in frame
    fr_bw = rgb2gray(fr);   % convert frame to greyscale
    fr_diff = abs(double(fr_bw) - double(bg_bw)); % cast operands as double to avoid negative overflow
    % if fr_diff > thresh, the pixel is in the foreground
    fg = uint8(zeros(size(bg_bw)));
    fg(fr_diff > thresh) = 255;
    % update the background model
    bg_bw = fr_bw;
    % visualise the results
    figure(1), subplot(3,1,1), imshow(fr)
    subplot(3,1,2), imshow(fr_bw)
    subplot(3,1,3), imshow(fg)
    drawnow
    writeVideo(output, fg); % save frame into the output video
end

close(output); % save video
Gaussian_mixture_models.m

clear all
close all
% read the video
source = VideoReader('car-tracking.mp4');
% create and open the object to write the results
output = VideoWriter('gmm_output.mp4', 'MPEG-4');
open(output);
% create the foreground detector object
n_frames = 10;   % a parameter to vary
n_gaussians = 3; % a parameter to vary
detector = vision.ForegroundDetector('NumTrainingFrames', n_frames, 'NumGaussians', n_gaussians);
% --------------------- process frames ----------------------------------
% loop over all the frames
while hasFrame(source)
    fr = readFrame(source);      % read in frame
    fgMask = step(detector, fr); % compute the foreground mask by Gaussian mixture models
    % create frame with foreground detection
    fg = uint8(zeros(size(fr, 1), size(fr, 2)));
    fg(fgMask) = 255;
    % visualise the results
    figure(1), subplot(2,1,1), imshow(fr)
    subplot(2,1,2), imshow(fg)
    drawnow
    writeVideo(output, fg); % save frame into the output video
end
close(output); % save video

Lab 4: Robot Treasure Hunting – Towards Autonomous Decision Making with Computer Vision

A robot is given a task to search and find “treasures” in three tasks: easy, medium and difficult. The starting point of the robot search is where the red arrow is.

In this practical session you will apply image processing techniques to perform autonomous robot decision making from images. The task focuses on the development of a search algorithm in images with a single treasure or multiple treasures. One possible solution is to follow the arrows in the images and other features available in these images until reaching one of the objects. You are provided with a starting script which needs completion and several functions to be written.
Background Knowledge
Image Plane
In MATLAB there are several types of coordinate systems for images, we will focus on two of them: pixel and spatial frames.
Pixel Coordinates
When you read an image with imread command you get a 3D matrix (for an RGB image) or 2D matrix (for a grey or binary image). These are shown on Figures 4.1 and 4.2, respectively.
Figure 4.1. Pixel coordinate system for an RGB image. Figure 4.2. Pixel coordinate system for a grey scale image.
You can use the matrix elements (rows and columns) to access pixel values. For example, to get the intensity level of the highlighted pixel in Figure 4.2 you can use:
im = imread('image');
im(2, 3)
Spatial Coordinates
In a spatial coordinate system, locations in an image are positions on a plane and they are described as x and y coordinates (rather than by rows and columns as before). Figure 4.3 shows the spatial coordinate system for an image. Note that the centre of the pixel in the 2nd row and 3rd column (marked as *) has the spatial coordinates x = 3, y = 2. For example, to plot this mark you can use the plot function (which works with spatial coordinates):
im = imread('image')

imshow(im);
hold on
plot(3, 2, '*black')
hold off
 Figure 4.3. Spatial coordinate system for an image
Image Binarisation
Often, in image analysis it is easier to work with binary images rather than with grey scale or colour ones. In these cases, usually white pixels (with label 1) correspond to the objects of interest and black pixels (with label 0) correspond to the background. For image binarisation it is important to tune the threshold parameter. If the threshold value is too big, some objects or parts of objects can be lost; if the threshold is too low, some parts of the background can be labelled with 1. In MATLAB the command im2bw performs the conversion of an image to a binary format, the operation which we call image binarisation.
Connected Component Analysis
Once you distinguish the objects of interest from the background (for example, by image binarisation), you may want to distinguish each object. One way to distinguish objects, if there is no occlusion between them, is to compute connected components of the binary image. The idea is to add the same label to pixels that are connected (based on an image feature or other criterion). Pixels that are not connected with the current region will be assigned a different label. Two pixels are called “connected” if it is possible to build a path from one pixel to another, using only foreground pixels. By foreground we mean the object of interest and by background we denote the environment. Two successive pixels in the path must be neighbours. The neighbour areas can be different. For example, you can see on Figure 4.4 the 4-connected area (on the left) and the 8-connected area (on the right) where the red pixel is the current one, the blue ones are neighbours.
 Figure 4.4. Examples of neighbourhoods used in connected component analysis. A 4-connected area is shown on the left hand side, and an 8-connected area on the right hand side.

A possible result of labelling of connected components is presented on Figure 4.5. The image on the left shows the input binary image. The matrix on the right represents the labelled output image.
Figure 4.5. The results of connected component analysis of a binary image. The input image is on the left hand side, the labelled output is on the right hand side.
In MATLAB, the command bwlabel performs connected component analysis of an input binary image and returns the labelled output (Figure 4.5, right). The label2rgb command is useful for the visualisation of the results.
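As a small illustration of the effect of the neighbourhood choice, a sketch assuming the binary test image circles.png shipped with the Image Processing Toolbox:

    bw = imread('circles.png'); % a shipped binary test image (assumption)
    L4 = bwlabel(bw, 4);        % 4-connected labelling
    L8 = bwlabel(bw, 8);        % 8-connected labelling (the default)
    figure, subplot(1,2,1), imshow(label2rgb(L4)), title('4-connected')
    subplot(1,2,2), imshow(label2rgb(L8)), title('8-connected')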
Performing Lab Sessions 4: Robot Treasure Hunting
In this lab session you will need to extend the provided MATLAB script and finalise it. You are already given most of the functions for performing the robot treasure hunting tasks. The missing parts, which you will complete yourself, are highlighted.
1. Open the script “treasure_hunting.m”. In this script you have the main necessary commands, except for the highlighted parts. Make sure you have the input images: “Treasure_simple.jpg”, “Treasure_medium.jpg” and “Treasure_hard.jpg”.
2. Convert all given images into binary format, using im2bw. You should find an appropriate threshold value, so that the binarisation operation applies to all objects from the input images. Include the results of the binarisation of one of the images and the value of the threshold in your report.
3. Find the connected components of your obtained binary images (with bwlabel) and visualise (with label2rgb) the detected connected components. You have this functionality in the provided script. You do not need to include this visualisation in your report.
4. Compute different characteristics of the connected components (with regionprops). Visualise the bounding boxes of the objects (BoundingBox field from the regionprops function output, rectangle(‘Position’, ...)). You have this functionality in the provided script. It is not required to include this visualisation in your report.
5. Develop a function to distinguish arrows from other objects. What differentiates the arrows from the other objects? You may use any ideas from the lectures, the previous lab session or common sense for your function. Include a brief description of the main idea of your function in your report and the actual code of the function in the appendix of your report. Hint: arrows have points of a different colour.
6. Find the starting red arrow. You have this functionality in the provided script.

7. Develop a function to find the label of the nearest object to which the current arrow points.
Hint 1: to set a line it is enough to have two points.
Hint 2: for each arrow you may extract the centroid point and the yellow one.
Hint 3: a vector (x2 – x1, y2 – y1) points to the direction from the point (x1, y1) to the point
(x2, y2).
8. Apply your functions to find the treasure in the images. Visualise the treasure and the path to it from the starting arrow. You have this functionality in the provided script. Include your visualisation in your report for all images.
9. Other solutions of this task are possible. If you propose a different solution, include a brief description of it and provide a diagram of the main idea of your solution. Include your functions in your report.
Additional Guidance on Performing the Robot Treasure Hunting Task
1. Areas of the arrows are different from the treasures.
2. All arrows have a yellow dot inside, while treasures do not contain any yellow pixel.
You may have other ideas; you do not necessarily have to use the hints. Include a brief description of the main idea of your function in your report. Please include the code of the function in an Appendix of your report.
In order to find the next object, you could consider these suggestions:
1. For each arrow you may extract the centroid point and the centroid of the yellow dot.
2. To set a line, it is enough to have two points.
3. A vector (x2 - x1, y2 - y1) points in the direction from the point (x1, y1) to the point (x2, y2). After extending the vector, you may find that it enters the bounding box of another arrow or the treasure; that object is then the next object (see the sketch below).
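For illustration, a minimal sketch of this ray-walking idea for next_object_finder, assuming props, n_objects and cur_object from the provided script, and illustrative names centroid_xy and yellow_xy for the two extracted points:

    dir_vec = yellow_xy - centroid_xy; % (x2 - x1, y2 - y1) pointing direction
    dir_vec = dir_vec / norm(dir_vec); % unit direction vector
    next_object = 0;
    for t = 1 : 500                    % extend the vector step by step
        p = yellow_xy + t * dir_vec;   % current point along the ray
        for k = 1 : n_objects
            bb = props(k).BoundingBox; % [x, y, width, height] in spatial coordinates
            inside = p(1) >= bb(1) && p(1) <= bb(1) + bb(3) && ...
                     p(2) >= bb(2) && p(2) <= bb(2) + bb(4);
            if k ~= cur_object && inside
                next_object = k;       % the ray entered object k first
                break;
            end
        end
        if next_object > 0
            break;
        end
    end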
The final result can look, for example, like this:

MATLAB script which needs to be completed

close all;
clear all;
%% Reading image
im = imread('Treasure_simple.jpg'); % change name to process other images
imshow(im);
pause;
%% Binarisation
bin_threshold = 0; % parameter to vary
bin_im = im2bw(im, bin_threshold);
imshow(bin_im);
pause;
%% Extracting connected components
con_com = bwlabel(bin_im);
imshow(label2rgb(con_com));
pause;
%% Computing objects properties
props = regionprops(con_com);
%% Drawing bounding boxes
n_objects = numel(props);
imshow(im);
hold on;
for object_id = 1 : n_objects
    rectangle('Position', props(object_id).BoundingBox, 'EdgeColor', 'b');
end
hold off;
pause;
%% Arrow/non-arrow determination
% You should develop a function arrow_finder, which returns the IDs of the arrow objects.
% IDs are from the connected component analysis order. You may use any parameters for your function.
arrow_ind = arrow_finder();
%% Finding red arrow
n_arrows = numel(arrow_ind);
start_arrow_id = 0;
% check each arrow until the red one is found
for arrow_num = 1 : n_arrows
    object_id = arrow_ind(arrow_num); % determine the arrow id
    % extract colour of the centroid point of the current arrow
    centroid_colour = im(round(props(object_id).Centroid(2)), round(props(object_id).Centroid(1)), :);
    if centroid_colour(:, :, 1) > 240 && centroid_colour(:, :, 2) < 10 && centroid_colour(:, :, 3) < 10
        % the centroid point is red, memorise its id and break the loop
        start_arrow_id = object_id;
        break;
    end
end
%% Hunting
cur_object = start_arrow_id; % start from the red arrow
path = cur_object;
% while the current object is an arrow, continue to search
while ismember(cur_object, arrow_ind)
    % You should develop a function next_object_finder, which returns
    % the ID of the nearest object, which is pointed at by the current
    % arrow. You may use any other parameters for your function.
    cur_object = next_object_finder(cur_object);
    path(end + 1) = cur_object;
end
%% Visualisation of the path
imshow(im);
hold on;
for path_element = 1 : numel(path) - 1
    object_id = path(path_element); % determine the object id
    rectangle('Position', props(object_id).BoundingBox, 'EdgeColor', 'y');
    str = num2str(path_element);
    text(props(object_id).BoundingBox(1), props(object_id).BoundingBox(2), str, ...
        'Color', 'r', 'FontWeight', 'bold', 'FontSize', 14);
end
% visualisation of the treasure
treasure_id = path(end);
rectangle('Position', props(treasure_id).BoundingBox, 'EdgeColor', 'g');
Lab Session 5: Convolutional Neural Networks for Image Classification

This session is about Convolutional Neural Networks (CNNs) and how to perform image classification tasks with them. You will learn how to design, train, and evaluate a CNN using the Deep Learning Toolbox in MATLAB.

Please keep in mind that deep learning tasks such as training a Convolutional Neural Network (CNN) can be computationally intensive. Considering computational constraints, the choice is LeNet-5, a relatively small but robust neural network architecture. Its small size should make it feasible to train the model even on a regular laptop without the necessity for a high-end GPU.

In this lab you will implement a Convolutional Neural Network (CNN) called LeNet-5 and learn about the principle of operation of CNNs [Lecun et al., 1998]. You will solve an image classification task on the data set named Digits [Digit Data, 2023]. You need to run the code provided below and assess the performance of the CNN result. The next subtask includes
improving these results by performing some of the operations differently, e.g. with a different kernel size for the convolutional layers (3x3 and 5x5, stride 1 and 2), a different pooling operation (max pooling and average pooling), and with and without dropout operations. Train the CNN with a different number of operations, a different learning rate and a different training data region, and discuss the results. This task requires varying the training parameters from the trainingOptions function.
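For orientation, a LeNet-5-style layer stack could look as follows. This is a sketch using Deep Learning Toolbox functions; the exact sizes, options and number of classes are assumptions rather than the provided code:

    layers = [
        imageInputLayer([28 28 1])             % 28 x 28 greyscale digit images
        convolution2dLayer(5, 6, 'Padding', 2) % 5x5 kernels, 6 feature maps
        reluLayer
        averagePooling2dLayer(2, 'Stride', 2)
        convolution2dLayer(5, 16)              % 5x5 kernels, 16 feature maps
        reluLayer
        averagePooling2dLayer(2, 'Stride', 2)
        fullyConnectedLayer(120)
        fullyConnectedLayer(84)
        fullyConnectedLayer(10)                % 10 digit classes
        softmaxLayer
        classificationLayer];
    options = trainingOptions('sgdm', ...
        'InitialLearnRate', 0.01, ...          % a parameter to vary
        'MaxEpochs', 10, ...                   % a parameter to vary
        'Plots', 'training-progress');
    % net = trainNetwork(imdsTrain, layers, options); % imdsTrain: your labelled image datastore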
Background Knowledge
I. Convolutional Neural Networks
Convolutional Neural Networks (CNNs) [Alzubaidi et al., 2021] are a class of deep, feed-forward artificial neural networks that have been successful in various visual recognition tasks. CNNs are particularly effective in applications involving spatial data, where the proximity of features within the input space is important for the task, such as image and video recognition.
Figure 5.1 below shows the architecture of LeNet-5 [Lecun et al., 1998], which was primarily designed for handwriting and character recognition. LeNet-5, being one of the earliest convolutional neural networks, has played a significant role in the development of deep learning, particularly in the field of computer vision.
Figure 5.1 Architecture of LeNet-5 [Lecun et al., 1998]
Despite its relative simplicity compared to newer architectures, it can still serve as a useful
starting point for image recognition tasks.
II. Layers in a Convolutional Neural Network
A typical CNN consists of a sequence of layers, each transforming one volume of activations to another through a differentiable function. There are three main types of layers to build a CNN architecture: Convolutional Layer, Pooling Layer, and Fully-Connected Layer.
Convolutional Layer: The primary purpose of a convolution operation in a CNN is to extract features from the input image. In this layer, several filters are convolved with the input image to compute maps of activations called feature maps. These feature maps learn to activate when they see some specific type of feature at some spatial position in the input.
Figure 5.2 below shows the input image on the left, the filter (mask) in the middle, and on the right the result of the convolution operation, i.e. the obtained feature map.
 Figure 5.2 Convolution operation on an image
The following Figure 5.3 shows the execution of the convolution operation:
Figure 5.3. The steps showing the convolution operation
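To make the operation concrete, here is a minimal MATLAB sketch of a single convolution; the image and filter values are illustrative and not taken from the figures. Note that conv2 flips the kernel, so for the symmetric filter used here the result coincides with the cross-correlation that CNN layers actually compute:

I = [1 1 1 0 0;
     0 1 1 1 0;
     0 0 1 1 1;
     0 0 1 1 0;
     0 1 1 0 0];   % a 5x5 binary input image (illustrative values)
K = [1 0 1;
     0 1 0;
     1 0 1];       % a 3x3 filter (mask)
% 'valid' keeps only the positions where the filter fits entirely
% inside the image, producing a 3x3 feature map
featureMap = conv2(I, K, 'valid')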
Pooling Layer: After each convolutional layer, a pooling layer is often added for down-sampling the features, which reduces the spatial size of the representation and thereby the number of parameters and the amount of computation in the network. The pooling layer operates on each feature map independently to produce the output feature map. Max pooling is the most common type of pooling, where each output pixel contains the maximum value of a neighbourhood of the input, as illustrated in Figure 5.4:
  Figure 5.4. Max pooling operation
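As a minimal numeric sketch (with illustrative values, not those of Figure 5.4), 2x2 max pooling with stride 2 reduces a 4x4 feature map to 2x2 by keeping the maximum of each non-overlapping 2x2 block:

F = [1 3 2 4;
     5 6 7 8;
     3 2 1 0;
     1 2 3 4];
% maximum of each non-overlapping 2x2 block
pooled = [max(F(1:2,1:2),[],'all') max(F(1:2,3:4),[],'all');
          max(F(3:4,1:2),[],'all') max(F(3:4,3:4),[],'all')]
% pooled = [6 8; 3 4]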
Fully Connected Layer: The fully connected layer is a traditional Multi-Layer Perceptron that uses a softmax activation function in the output layer. The term "Fully Connected" implies that every neuron in the previous layer is connected to every neuron in the next layer, as illustrated in Figure 5.5:
 Figure 5.5 Fully Connected Layer
The output from the convolutional and pooling layers represents high-level features of the input image. The purpose of the Fully Connected layer is to use these features for classifying the input image into various classes based on the training dataset.
III. Task 1a: Designing a Convolutional Neural Network
Designing a CNN involves several decisions. These include the choice and configuration of layers (convolutional, pooling, fully connected, normalisation, etc.), the size of the filters in the convolutional layers, the stride (step size) for those filters, and how to handle the borders of the data. The design of a CNN is often determined by the problem context and the size and complexity of the input data; the sketch below shows how these choices appear in code.
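For instance, the filter size, stride and padding decisions map directly onto the arguments of convolution2dLayer. The two configurations below use illustrative values of the kind the improvement task asks you to compare (3x3 versus 5x5 filters, stride 1 versus 2):

convolution2dLayer(3,8,'Stride',1,'Padding','same')  % 3x3 filters, stride 1; 'same' padding keeps the spatial size
convolution2dLayer(5,16,'Stride',2,'Padding',0)      % 5x5 filters, stride 2, no padding; the feature map shrinks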
IV. Task 1b: Training a Convolutional Neural Network
Typically, a CNN is trained via a supervised learning process with a labelled dataset. This involves feeding the network with labelled training data, on which it makes predictions based on the current state of its weights and biases. At the core of the training is the backpropagation stage, which adjusts the weights and biases to minimise the network's error. The error is quantified by a loss function, which, in our case, is the cross-entropy loss [de Boer et al., 2005].
The cross-entropy loss function is important in many classification tasks as it measures the difference between the predicted probability distribution (output from the softmax layer) for each class and the true distribution, where the actual class has a probability of one. The loss is high when the predicted probability diverges significantly from the actual label, prompting the network to make substantial adjustments. Conversely, a low loss results from predictions close to the true labels, requiring less substantial changes. Through backpropagation, the network uses this loss to update its parameters, aiming to minimise this loss across all training examples throughout the epochs. The goal of training is to refine these parameters to reduce the loss function to a point where the network's predictions are as accurate as possible.
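Formally, for a single training example over $C$ classes, the cross-entropy loss takes the standard form

$L = -\sum_{c=1}^{C} y_c \log(p_c)$

where $p_c$ is the predicted probability for class $c$ (the output of the softmax layer) and $y_c$ equals 1 for the true class and 0 otherwise, so the loss reduces to minus the logarithm of the probability assigned to the correct class.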
V. Task 2: Evaluating the Performance of a Convolutional Neural Network
Evaluating a CNN performance involves using a separate testing dataset to measure the network's ability to generalise from the training data to unseen data.
The data is split into training and testing sets. By working over the whole testing data set, please calculate the required metrics.
The evaluation metrics include accuracy (percentage of correct predictions), precision (percentage of true positive predictions), recall (percentage of actual positives that were predicted as positive), and the F1 score (a balance between precision and recall). More details on the evaluation metrics can be found in [Sokolova and Lapalme, 2009]. The confusion matrix below defines the quantities used:
                   Actual: Yes            Actual: No
Predicted: Yes     True Positives (TP)    False Positives (FP)
Predicted: No      False Negatives (FN)   True Negatives (TN)
The performance evaluation metrics can be calculated as follows [Annika et al., 2021; Sokolova and Lapalme, 2009]:
● Accuracy = (TP+TN) / (TP+FP+FN+TN)
● Precision = TP / (TP+FP)
● Recall = TP / (TP+FN)
● F1 Score = 2*(Recall * Precision) / (Recall + Precision)
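As a minimal sketch of this calculation (assuming the categorical arrays YPred and YValidation produced by the evaluation code later in this session), the metrics can be computed one-vs-rest for each digit class and then macro-averaged:

% Per-class one-vs-rest metrics, macro-averaged over the classes
classes = categories(YValidation);
precision = zeros(numel(classes), 1);
recall = zeros(numel(classes), 1);
for c = 1:numel(classes)
    TP = sum(YPred == classes{c} & YValidation == classes{c});
    FP = sum(YPred == classes{c} & YValidation ~= classes{c});
    FN = sum(YPred ~= classes{c} & YValidation == classes{c});
    precision(c) = TP / (TP + FP); % NaN if the class is never predicted
    recall(c) = TP / (TP + FN);
end
f1 = 2 * (precision .* recall) ./ (precision + recall);
fprintf('Macro precision %.3f, recall %.3f, F1 %.3f\n', ...
    mean(precision), mean(recall), mean(f1));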
Guidance for Performing Lab Session on Convolutional Neural Networks

Designing and Training a CNN
1. Load the image dataset: load the Digits dataset from a predefined path in the MATLAB toolbox. The imageDatastore function handles the loading of image data. All subfolders are included and folder names are used as labels. The images are resized to 32x32 pixels for compatibility with the LeNet-5 architecture. The data is then split into training and validation sets, with 70% used for training and 30% for validation.
% Load the Digits dataset
digitDatasetPath = fullfile(toolboxdir('nnet'), 'nndemos', ...
    'nndatasets', 'DigitDataset');
imds = imageDatastore(digitDatasetPath, ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
imds.ReadFcn = @(loc)imresize(imread(loc), [32, 32]);

% Split the data into training and validation datasets
[imdsTrain, imdsValidation] = splitEachLabel(imds, 0.7, 'randomized');
2. Define the architecture of the CNN using an array of layer objects: it begins with an input layer expecting greyscale images of size 32x32 (one channel). This is followed by two blocks of convolutional and average pooling layers, then three fully connected layers, ending with a softmax layer and a classification output layer. In MATLAB, the loss function is implicitly defined through the classificationLayer. Refer to the MATLAB documentation to understand how to define each of these layers.
% Define the LeNet-5 architecture
layers = [
    imageInputLayer([32 32 1],'Name','input')
    convolution2dLayer(5,6,'Padding','same','Name','conv_1')
    averagePooling2dLayer(2,'Stride',2,'Name','avgpool_1')
    convolution2dLayer(5,16,'Padding','same','Name','conv_2')
    averagePooling2dLayer(2,'Stride',2,'Name','avgpool_2')
    fullyConnectedLayer(120,'Name','fc_1')
    fullyConnectedLayer(84,'Name','fc_2')
    fullyConnectedLayer(10,'Name','fc_3')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','output')];
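Before training, it can be helpful to sanity-check the architecture; the Deep Learning Toolbox function analyzeNetwork displays each layer together with its output size and number of learnable parameters:

% Inspect layer output sizes and learnable parameters
analyzeNetwork(layers)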
3. Configure training options using the trainingOptions function. Set parameters like the optimisation algorithm, learning rate, batch size, and number of epochs.
In this block, you specify the training options. You use stochastic gradient descent with momentum (sgdm) as the optimisation algorithm, set an initial learning rate of 0.0001, and limit the training to a maximum of 10 epochs. The training data will be shuffled at the start of each epoch. The progress of training will be displayed as a plot.
% Specify the training options
options = trainingOptions('sgdm', ...
    'InitialLearnRate',0.0001, ...
    'MaxEpochs',10, ...
    'Shuffle','every-epoch', ...
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',30, ...
    'Verbose',false, ...
    'Plots','training-progress');
4. Train the CNN using the trainNetwork function, passing in the image data, the layer definitions, and the training options. Please keep in mind that it may require some time to train your model.
% Train the network
net = trainNetwork(imdsTrain,layers,options);
CNN Performance Evaluation
5. After the network is trained, it is used to classify the images in the validation dataset. The classify function is used to perform this classification. The accuracy of the network on the validation images is then calculated and printed. This is the proportion of images that were correctly classified by the network, calculated as the number of correctly classified images divided by the total number of images in the validation set.
% Classify validation images and compute accuracy
YPred = classify(net,imdsValidation);
YValidation = imdsValidation.Labels;
accuracy = sum(YPred == YValidation)/numel(YValidation);
fprintf('Accuracy of the network on the validation images: %f\n', accuracy);
VI. Task 3: Improving the Performance of the CNN Algorithm
Guidance on several methods to improve the image classification accuracy:
1. Regularisation Techniques: A key strategy for enhancing the CNN accuracy involves regularisation methods such as L1 and L2 norm regularisation. For an understanding of L1 and L2 norm regularisation, please refer to [Tewari, 2021] and [Bilogour et al., 2023] from the Kaggle tutorials. The L1 and L2 norm regularisation methods are instrumental in overcoming overfitting. You can implement L1 and L2 regularisation in MATLAB by adjusting the WeightL1Factor and WeightL2Factor properties within the fullyConnectedLayer:
fullyConnectedLayer(120,'Name','fc_1', 'WeightL1Factor', 0.001, 'WeightL2Factor', 0.001)
Additionally, dropout is another valuable regularisation method for reducing overfitting in neural networks. It can be integrated using the dropoutLayer function, further helping the model's robustness; a possible placement is sketched below. For your report, present results with two regularisation methods, e.g. L1 or L2 norm regularisation, dropout, or combinations of them.
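A minimal sketch of one possible dropout placement, showing only the first fully connected layers of the earlier layers array; the rate of 0.5 and the position between fc_1 and fc_2 are illustrative choices for you to experiment with:

fullyConnectedLayer(120,'Name','fc_1')
dropoutLayer(0.5,'Name','drop_1') % randomly zeroes 50% of the activations during training
fullyConnectedLayer(84,'Name','fc_2')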