COM4511/COM6511 Speech Technology - Practical Exercise -
Keyword Search
Anton Ragni
Note that for any module assignment, full marks will only be obtained for outstanding performance that goes well beyond the questions asked. The marks allocated to this assignment are 20%. The marks will be assigned according to the following general criteria. For every assignment handed in:
1. Fulfilling the basic requirements (5%)
Full marks will be given for completing the work as described, with source code and results provided.
2. Submitting high quality documentation (5%)
Full marks will be given for a write-up that is at the highest standard of technical writing and illustration.
3. Showing good reasoning (5%)
Full marks will be given if the experiments and the outcomes are explained to the highest standard.
4. Going beyond what was asked (5%)
Full marks will be given for interesting, well-motivated and clearly described ideas on how to extend the work.
1 Background
The aim of this task is to build and investigate the simplest form of a keyword search (KWS) system, which allows information to be found in large volumes of spoken data. The figure below shows an example of a typical KWS system, which consists of an index and a search module.

[Figure: a typical KWS system in which keywords are passed to the search module, which queries the index and returns search results.]

The index provides a compact representation of spoken data. Given a set of keywords, the search module queries the index to retrieve all possible occurrences, ranked according to likelihood. The quality of a KWS system is assessed based on how accurately it can retrieve all true occurrences of keywords.
A number of index representations have been proposed and examined for KWS. The most popular representations are derived from the output of an automatic speech recognition (ASR) system. Various forms of output have been examined; these differ in the amount of information retained about the content of the spoken data. The simplest form is the most likely word sequence, or 1-best. Additional information, such as start and end times and recognition confidence, may also be provided for each word. Given a collection of 1-best sequences, the following index can be constructed
$$
\begin{array}{llll}
w_1 & (f_{1,1}, s_{1,1}, e_{1,1}) & \ldots & (f_{1,n_1}, s_{1,n_1}, e_{1,n_1}) \\
w_2 & (f_{2,1}, s_{2,1}, e_{2,1}) & \ldots & (f_{2,n_2}, s_{2,n_2}, e_{2,n_2}) \\
\vdots & \vdots & & \vdots \\
w_N & (f_{N,1}, s_{N,1}, e_{N,1}) & \ldots & (f_{N,n_N}, s_{N,n_N}, e_{N,n_N})
\end{array} \tag{1}
$$
where $w_i$ is a word, $n_i$ is the number of times word $w_i$ occurs, $f_{i,j}$ is the file where word $w_i$ occurs for the $j$-th time, and $s_{i,j}$ and $e_{i,j}$ are the corresponding start and end times. Searching such an index for single-word keywords can be as simple as finding the correct row (e.g. row $k$) and returning all of its tuples $(f_{k,1}, s_{k,1}, e_{k,1}), \ldots, (f_{k,n_k}, s_{k,n_k}, e_{k,n_k})$.
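To make this concrete, the following is a minimal Python sketch of such an index. It assumes the CTM field layout described in the Resources section, keeps the confidence score alongside each occurrence for later calibration, and uses an in-memory dictionary purely for illustration; the function names are not prescribed.

```python
from collections import defaultdict

def build_index(ctm_path):
    """Build a word -> [(file, start, end, confidence), ...] mapping from a CTM file.
    A minimal sketch assuming the CTM layout <F> <H> <T> <D> <W> <C> described in the
    Resources section; a real submission would need its own on-disk index format."""
    index = defaultdict(list)
    with open(ctm_path) as ctm:
        for line in ctm:
            line = line.strip()
            if not line or line.startswith(";;"):
                continue  # blank lines and comments are ignored
            fname, _channel, start, duration, word, confidence = line.split()[:6]
            start, duration, confidence = float(start), float(duration), float(confidence)
            index[word.lower()].append((fname, start, start + duration, confidence))
    return index

def lookup(index, keyword):
    """Return all (file, start, end, confidence) tuples for a single-word keyword."""
    return index.get(keyword.lower(), [])
```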
The search module is expected to retrieve all possible keyword occurrences. If the ASR system makes no mistakes, such a module can be created rather trivially. To account for possible retrieval errors, the search module provides each potential occurrence with a relevance score. Relevance scores reflect confidence in a given occurrence being relevant. Occurrences with extremely low relevance scores may be eliminated. If these scores are accurate, each eliminated occurrence will decrease the number of false alarms; if not, the number of misses will increase. What exactly counts as an extremely low score may not be easy to determine. Multiple factors may affect a relevance score: confidence score, duration, word confusability, word context and keyword length. Therefore, simple relevance scores, such as those based on confidence scores, may have a wide dynamic range and may be incomparable across different keywords. To ensure that relevance scores are comparable among different keywords, they need to be calibrated. A simple calibration scheme is sum-to-one (STO) normalisation
$$
\hat{r}_{i,j} = \frac{r_{i,j}^{\gamma}}{\sum_{k=1}^{n_i} r_{i,k}^{\gamma}} \tag{2}
$$
where $r_{i,j}$ is the original relevance score for the $j$-th occurrence of the $i$-th keyword, and $\gamma$ is a scale that either sharpens or flattens the distribution of relevance scores. More complex schemes have also been examined. Given a set of occurrences with associated relevance scores, there are several options available for eliminating spurious occurrences. One popular approach is thresholding: given a global or keyword-specific threshold, any occurrence falling under it is eliminated. Simple calibration schemes such as STO require thresholds to be estimated on a development set and adjusted to different collection sizes. More complex approaches, such as Keyword Specific Thresholding (KST), yield a fixed threshold across different keywords and collection sizes.
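A minimal sketch of STO normalisation (equation 2) and simple thresholding in Python, with illustrative function names, might look as follows:

```python
def sto_normalise(scores, gamma=1.0):
    """Sum-to-one (STO) normalisation (equation 2) of one keyword's relevance scores.
    gamma > 1 sharpens the distribution, gamma < 1 flattens it."""
    powered = [score ** gamma for score in scores]
    total = sum(powered)
    return [p / total for p in powered] if total > 0 else powered

def threshold_occurrences(occurrences, calibrated_scores, threshold):
    """Keep only the occurrences whose calibrated score reaches the threshold."""
    return [occ for occ, score in zip(occurrences, calibrated_scores) if score >= threshold]
```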
Accuracy of KWS systems can be assessed in multiple ways. Standard approaches include precision (the proportion of relevant retrieved occurrences among all retrieved occurrences) and recall (the proportion of relevant retrieved occurrences among all relevant occurrences), mean average precision and term weighted value. A collection of precision and recall values computed for different thresholds yields a precision-recall (PR) curve. The area under the PR curve (AUC) provides a threshold-independent summary statistic for comparing different retrieval approaches. The mean average precision (mAP) is another popular, threshold-independent, precision-based metric. Consider a KWS system returning 3 correct and 4 incorrect occurrences arranged according to relevance score as follows: ✓, ✗, ✗, ✓, ✓, ✗, ✗, where ✓ stands for a correct occurrence and ✗ stands for an incorrect occurrence. The precision at each rank (from 1 to 7), counted as zero where the occurrence is incorrect, is $\frac{1}{1}, \frac{0}{2}, \frac{0}{3}, \frac{2}{4}, \frac{3}{5}, \frac{0}{6}, \frac{0}{7}$. If the number of true correct occurrences is 3, the mean average precision for this keyword is $(\frac{1}{1} + \frac{2}{4} + \frac{3}{5})/3 = 0.7$. A collection-level mAP can be computed by averaging keyword-specific mAPs. Once a KWS system operates at a reasonable AUC or mAP level, it is possible to use the term weighted value (TWV) to assess the accuracy of thresholding. The TWV is defined by
$$
\mathrm{TWV}(K, \theta) = 1 - \frac{1}{|K|} \sum_{k \in K} \Big( P_{\text{miss}}(k, \theta) + \beta\, P_{\text{fa}}(k, \theta) \Big) \tag{3}
$$
where $k \in K$ is a keyword, $P_{\text{miss}}$ and $P_{\text{fa}}$ are the probabilities of miss and false alarm, and $\beta$ is a penalty assigned to false alarms. These probabilities can be computed by
$$
P_{\text{miss}}(k, \theta) = \frac{N_{\text{miss}}(k, \theta)}{N_{\text{correct}}(k)} \tag{4}
$$
$$
P_{\text{fa}}(k, \theta) = \frac{N_{\text{fa}}(k, \theta)}{N_{\text{trial}}(k)} \tag{5}
$$
where $N_{\langle\text{event}\rangle}$ denotes the number of events of the given type. The number of trials is given by
$$
N_{\text{trial}}(k) = T - N_{\text{correct}}(k) \tag{6}
$$
where $T$ is the duration of speech in seconds.
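As an illustration of these definitions, the sketch below computes average precision for one keyword's ranked hit list and TWV (equations 3 to 6) at a single operating point. The data structures (lists of correctness flags, a dictionary of per-keyword counts) are assumptions made for the example rather than a prescribed interface.

```python
def average_precision(ranked_hits, n_true):
    """Average precision for one keyword.
    ranked_hits: booleans (True = correct occurrence) sorted by decreasing relevance score.
    n_true: number of true occurrences of the keyword in the reference."""
    correct = 0
    precision_sum = 0.0
    for rank, is_correct in enumerate(ranked_hits, start=1):
        if is_correct:
            correct += 1
            precision_sum += correct / rank  # precision at this rank
    return precision_sum / n_true if n_true > 0 else 0.0

def twv(per_keyword_counts, speech_duration, beta=20.0):
    """Term weighted value (equations 3-6) at one operating point.
    per_keyword_counts: dict keyword -> (n_correct, n_miss, n_false_alarm), where n_correct
    is the number of true occurrences of the keyword in the reference."""
    cost = 0.0
    for n_correct, n_miss, n_fa in per_keyword_counts.values():
        p_miss = n_miss / n_correct if n_correct > 0 else 0.0
        n_trial = speech_duration - n_correct  # equation (6)
        p_fa = n_fa / n_trial if n_trial > 0 else 0.0
        cost += p_miss + beta * p_fa
    return 1.0 - cost / len(per_keyword_counts)

# Worked example from the text: AP of (correct, wrong, wrong, correct, correct, wrong, wrong)
# with 3 true occurrences is (1/1 + 2/4 + 3/5) / 3 = 0.7.
assert abs(average_precision([True, False, False, True, True, False, False], 3) - 0.7) < 1e-9
```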
2 Objective
Given a collection of 1-bests, write code that retrieves all possible occurrences of the keywords in the provided keyword list. Describe the search process, including the index format, handling of multi-word keywords, the criterion for matching, relevance score calibration and the threshold setting methodology. Write code to assess retrieval performance against the reference transcriptions according to the AUC, mAP and TWV criteria using $\beta = 20$. Comment on the differences between these criteria, including the impact of the parameter $\beta$. Start and end times of hypothesised occurrences must be within 0.5 seconds of the true occurrences to be considered for matching.
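One possible reading of this matching criterion, for a single keyword, is the greedy one-to-one matching sketched below; the function name and the greedy strategy are assumptions, not a prescribed procedure.

```python
def label_hits(hypotheses, references, tolerance=0.5):
    """Label each hypothesised occurrence of one keyword as correct or not.
    hypotheses, references: lists of (file, start, end) tuples; a hypothesis is correct if an
    unused reference occurrence in the same file has start and end times within `tolerance`
    seconds. Returns a list of booleans plus the number of misses."""
    used = [False] * len(references)
    labels = []
    for h_file, h_start, h_end in hypotheses:
        matched = False
        for i, (r_file, r_start, r_end) in enumerate(references):
            if (not used[i] and r_file == h_file
                    and abs(h_start - r_start) <= tolerance
                    and abs(h_end - r_end) <= tolerance):
                used[i] = True
                matched = True
                break
        labels.append(matched)
    misses = used.count(False)
    return labels, misses
```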
3 Marking scheme
Two critical elements are assessed: retrieval (65%) and assessment (35%). Note: even if you cannot complete this task as a whole, you can certainly provide a description of what you were planning to accomplish.
1. Retrieval
1.1 Index Write code that takes the provided CTM files (and any other files you deem relevant) and creates indices in your own format. For example, if the Python language is used then the execution of your code may look like
python index.py dev.ctm dev.index
where dev.ctm is a CTM file and dev.index is an index.
Marks are distributed based on the handling of multi-word keywords:
• Efficient handling of single-word keywords
• No ability to handle multi-word keywords
• Inefficient handling of multi-word keywords
• Efficient handling of multi-word keywords
1.2 Search Write code that takes the provided keyword file and index file (and any other files you deem relevant) and produces a list of occurrences for each provided keyword (one possible approach is sketched after the list below). For example, if the Python language is used then the execution of your code may look like
python search.py dev.index keywords dev.occ
where dev.index is an index, keywords is a list of keywords, and dev.occ is a list of occurrences for each keyword.
Marks are distributed based on the handling of multi-word keywords:
• Efficient handling of single-word keywords
• No ability to handle multi-word keywords
• Inefficient handling of multi-word keywords
• Efficient handling of multi-word keywords
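One possible (deliberately simple and not particularly efficient) way to extend single-word lookup to multi-word keywords is sketched below. It reuses the dictionary index from the earlier sketch and chains word occurrences that are adjacent in time; the maximum gap and the scoring choice are illustrative assumptions.

```python
def search_keyword(index, keyword, max_gap=0.5):
    """Retrieve occurrences of a possibly multi-word keyword from the in-memory index.
    index: word -> list of (file, start, end, confidence), as in the earlier index sketch.
    Consecutive words must come from the same file and start no more than max_gap seconds
    after the previous word ends. The relevance score used here (mean word confidence) is
    one simple choice among many."""
    words = keyword.lower().split()
    chains = [[occ] for occ in index.get(words[0], [])]
    for word in words[1:]:
        extended = []
        for chain in chains:
            fname, _start, prev_end, _conf = chain[-1]
            for occ in index.get(word, []):
                if occ[0] == fname and 0.0 <= occ[1] - prev_end <= max_gap:
                    extended.append(chain + [occ])
        chains = extended
    results = []
    for chain in chains:
        score = sum(occ[3] for occ in chain) / len(chain)
        results.append((chain[0][0], chain[0][1], chain[-1][2], score))
    return results
```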
1.3 Description Provide a technical description of the following elements
• Index file format
• Handling multi-word keywords
• Criterion for matching keywords to possible occurrences
• Search process
• Score calibration
• Threshold setting
2. Assessment Write code that takes the provided keyword file, the list of found keyword occurrences and the corresponding reference transcript file in STM format, and computes the metrics described in the Background section. For instance, if the Python language is used then the execution of your code may look like
python <metric>.py keywords dev.occ dev.stm
where <metric> is one of precision-recall, mAP and TWV, keywords is the provided keyword file, dev.occ is the list of found keyword occurrences and dev.stm is the reference transcript file.
Hint: In order to simplify assessment, consider converting the reference transcript from STM file format to CTM file format. Using the indexing and search code above, obtain a list of true occurrences. The list of found keyword occurrences can then be assessed more easily by comparing it with the list of true occurrences rather than with the reference transcript file in STM file format.
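Following this hint, a minimal sketch of the STM-to-CTM conversion with uniform segmentation is shown below. Field positions follow the STM layout given in the Resources section, and assigning each reference word a confidence of 1.0 is an assumption of this sketch.

```python
def stm_to_ctm(stm_path, ctm_path):
    """Convert an STM reference into CTM-style records using uniform segmentation.
    Assumes the STM layout <F> <H> <S> <T> <E> <L> <W>...<W> given in the Resources section;
    each word receives an equal share of the segment duration and a confidence of 1.0."""
    with open(stm_path) as stm, open(ctm_path, "w") as ctm:
        for line in stm:
            line = line.strip()
            if not line or line.startswith(";;"):
                continue
            fields = line.split()
            fname, channel = fields[0], fields[1]
            start, end = float(fields[3]), float(fields[4])
            words = fields[6:]  # skip the speaker (<S>) and topic (<L>) fields
            if not words:
                continue
            word_dur = (end - start) / len(words)
            for i, word in enumerate(words):
                ctm.write(f"{fname} {channel} {start + i * word_dur:.2f} "
                          f"{word_dur:.2f} {word.upper()} 1.0\n")
```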
2.1 Implementation
• AUC Integrate an existing implementation of AUC computation into your code. For example, for the Python language such an implementation is available in the sklearn package (a usage sketch is shown after this list).
• mAP Write your own implementation or integrate any freely available one.
• TWV Write your own implementation or integrate any freely available one.
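For the AUC bullet above, a possible usage sketch based on the precision_recall_curve and auc functions from sklearn.metrics is shown below; note that true occurrences that were never retrieved have to be accounted for separately before computing the curve.

```python
# y_true marks each scored occurrence as correct (1) or not (0);
# y_score is its calibrated relevance score.
from sklearn.metrics import auc, precision_recall_curve

def pr_auc(y_true, y_score):
    precision, recall, _thresholds = precision_recall_curve(y_true, y_score)
    return auc(recall, precision)  # area under the precision-recall curve
```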
2.2 Description
• AUC Plot the precision-recall curve. Report the AUC value. Discuss performance in the high precision and low recall area. Discuss performance in the high recall and low precision area. Suggest which keyword search applications might be interested in good performance specifically in those two areas (either high precision and low recall, or high recall and low precision).
• mAP Report the mAP value. Report the mAP value for each keyword length (1-word, 2-word, etc.). Compare and discuss differences in mAP values.
• TWV Report the TWV value. Report the TWV value for each keyword length (1-word, 2-word, etc.). Compare and discuss differences in TWV values. Plot TWV values for a range of threshold values. Report the maximum TWV value, or MTWV. Report the actual TWV value, or ATWV, obtained with the method used for threshold selection.
• Comparison Describe the use of AUC, mAP and TWV in the development of your KWS approach. Compare these metrics and discuss their advantages and disadvantages.
4 Hand-in procedure
All outcomes, however complete, are to be submitted jointly in the form of a package file (zip/tar/gzip) that includes directories for each task, containing the associated required files. Submission will be performed via MOLE.
5 Resources
Three resources are provided for this task:
• 1-best transcripts in NIST CTM file format (dev.ctm, eval.ctm). The CTM file format consists of multiple records
of the following form
<F> <H> <T> <D> <W> <C>
where <F> is an audio file name, <H> is a channel, <T> is a start time in seconds, <D> is a duration in seconds, <W> is a
word, <C> is a confidence score. Each record corresponds to one recognised word. Any blank lines or lines starting with
;; are ignored. An excerpt from a CTM file is shown below
7654 A 11.34 0.2 YES 0.5
7654 A 12.00 0.34 YOU 0.7
7654 A 13.30 0.5 CAN 0.1
• Reference transcript in NIST STM file format (dev.stm, eval.stm). The STM file format consists of multiple records
of the following form
<F> <H> <S> <T> <E> <L> <W>...<W>
where <S> is a speaker, <E> is an end time, <L> is a topic, and <W>...<W> is a word sequence. Each record corresponds to one manually transcribed segment of an audio file. An excerpt from an STM file is shown below
2345 A 2345-a 0.10 2.03 <soap> uh huh yes i thought
2345 A 2345-b 2.10 3.04 <soap> dog walking is a very
2345 A 2345-a 3.50 4.59 <soap> yes but it’s worth it
Note that exact start and end times for each word are not available. Use uniform segmentation as an approximation. The duration of speech in dev.stm and eval.stm is estimated to be 57474.2 and 25694.3 seconds, respectively.
• Keyword list keywords. Each keyword contains one or more words as shown below