

CS276A Information Retrieval


[Overview] Results summaries. Benchmarks. Precision and recall. Average precision. Improving results: relevance feedback. The complete landscape: global methods (query expansion, thesauri) and local methods (relevance feedback). In relevance feedback the user issues a (short, simple) query, marks returned results as relevant or non-relevant, and the process iterates; this is especially useful when you don't know how to phrase the query (e.g., an image search engine). Ideally we would choose the query that maximizes sim(Q, Cr) − sim(Q, Cnr), but this is unrealistic: we don't know the full sets of relevant and non-relevant documents. Used in practice is the Rocchio update q_m = α·q_0 + β·(1/|D_r|)·Σ_{d_j ∈ D_r} d_j − γ·(1/|D_nr|)·Σ_{d_j ∈ D_nr} d_j, where D_r and D_nr are the judged relevant and non-relevant documents. With many judged documents we want a higher β/γ relative to α; term weights that go negative are set to 0; and positive feedback is more valuable than negative feedback, so set γ < β (some systems allow only positive feedback, i.e., γ = 0).
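A minimal sketch of the Rocchio update described above, in Python/NumPy. The function name, the default weights (α = 1.0, β = 0.75, γ = 0.15 are common textbook choices), and the dense-vector representation are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def rocchio_update(q0, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """One round of Rocchio relevance feedback.

    q0          : original query vector (1-D array over the vocabulary)
    relevant    : list of document vectors judged relevant (D_r)
    nonrelevant : list of document vectors judged non-relevant (D_nr)
    """
    qm = alpha * q0
    if relevant:                        # move toward the centroid of relevant docs
        qm = qm + beta * np.mean(relevant, axis=0)
    if nonrelevant:                     # move away from the non-relevant centroid
        qm = qm - gamma * np.mean(nonrelevant, axis=0)
    return np.maximum(qm, 0.0)          # negative term weights are set to 0
```

The returned vector is then used to re-rank the collection; because γ < β (or γ = 0), positive judgments influence the new query more than negative ones.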

  

[Main content]

Co-occurrence Thesaurus
- Simplest way to compute one is based on term-term similarities in C = AA^T, where A is the term-document matrix (n rows t_i, m columns d_j).
- w_{i,j} = (normalized) weighted count of (t_i, d_j).
- With integer counts: what do you get for a Boolean co-occurrence matrix? (A code sketch of this construction follows the Resources list below.)

Automatic Thesaurus Generation Example

Automatic Thesaurus Generation Discussion
- Quality of associations is usually a problem.
- Term ambiguity may introduce irrelevant statistically correlated terms: "Apple computer" → "Apple red fruit computer".
- Problems:
  - False positives: words deemed similar that are not.
  - False negatives: words deemed dissimilar that are similar.
- Since terms are highly correlated anyway, expansion may not retrieve many additional documents.

Query Expansion: Summary
- Query expansion is often effective in increasing recall.
  - Not always with general thesauri.
  - Fairly successful for subject-specific collections.
- In most cases, precision is decreased, often significantly.
- Overall, not as useful as relevance feedback; may be as good as pseudo relevance feedback.

Pseudo Relevance Feedback
- Automatic local analysis.
- Pseudo relevance feedback attempts to automate the manual part of relevance feedback (a sketch follows the Resources list below):
  - Retrieve an initial set of relevant documents.
  - Assume that the top m ranked documents are relevant.
  - Do relevance feedback.
- Mostly works (perhaps better than global analysis!); found to improve performance in the TREC ad hoc task.
- Danger of query drift.

Pseudo relevance feedback: Cornell SMART at TREC 4
- Results show the number of relevant documents in the top 100 for 50 queries (so out of 5000).
- Results contrast two length-normalization schemes (L vs. l) and pseudo relevance feedback (adding 20 terms):
  - l, no feedback: 3210
  - l, with pseudo relevance feedback: 3634
  - L, no feedback: 3709
  - L, with pseudo relevance feedback: 4350

Indirect relevance feedback [Forward pointer to CS 276B]
- DirectHit introduced a form of indirect relevance feedback.
- DirectHit ranked higher those documents that users look at more often.
- Global: not user- or query-specific.

Resources
- MG Ch.; MIR Ch. –
- Yonggang Qiu and Hans-Peter Frei. Concept based query expansion. SIGIR 16: 161–169, 1993.
- Schuetze. Automatic Word Sense Discrimination. Computational Linguistics, 1998.
- Singhal, Mitra, Buckley. Learning routing queries in a query zone. ACM SIGIR, 1997.
- Buckley, Singhal, Mitra, Salton. New retrieval approaches using SMART: TREC 4. NIST, 1996.
- Gerard Salton and Chris Buckley. Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41(4):288–297, 1990.
- Harman, D. Relevance feedback revisited. SIGIR 15: 1–10, 1992.
- Xu, J. and Croft, W.B. Query Expansion Using Local and Global Document Analysis. SIGIR 19: 4–11, 1996.
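Two hedged sketches referenced above. First, the co-occurrence thesaurus: a minimal illustration of the C = AA^T construction, where row i of the normalized result ranks the terms most strongly co-occurring with t_i. The toy term list and counts are invented purely for illustration.

```python
import numpy as np

# Toy term-document count matrix A (rows = terms t_i, columns = documents d_j).
terms = ["apple", "computer", "fruit", "keyboard", "juice"]
A = np.array([
    [3, 0, 2, 1],   # apple
    [2, 3, 0, 1],   # computer
    [1, 0, 3, 0],   # fruit
    [0, 2, 0, 1],   # keyboard
    [1, 0, 2, 0],   # juice
], dtype=float)

# Term-term similarity matrix C = A A^T: C[i, j] accumulates how strongly
# terms i and j co-occur across documents, weighted by their counts.
C = A @ A.T

# Cosine-normalize so that the diagonal is 1.0 and rows are comparable.
norms = np.linalg.norm(A, axis=1)
C_hat = C / np.outer(norms, norms)

# Nearest "thesaurus" neighbours for each term.
for i, term in enumerate(terms):
    order = np.argsort(-C_hat[i])
    neighbours = [terms[j] for j in order if j != i][:2]
    print(f"{term}: {neighbours}")
```

If A is Boolean instead of integer-valued, the same product C = AA^T simply counts, for each pair of terms, the number of documents in which both occur, which answers the question posed on the slide.

Second, pseudo relevance feedback: a minimal sketch of the retrieve / assume-top-m-relevant / feed-back loop, using cosine ranking and a positive-only Rocchio step (γ = 0). All names, the value m = 5, and the weights are illustrative assumptions.

```python
import numpy as np

def cosine_scores(query, doc_matrix):
    """Cosine similarity of the query against every document (row of doc_matrix)."""
    q = query / (np.linalg.norm(query) + 1e-12)
    d = doc_matrix / (np.linalg.norm(doc_matrix, axis=1, keepdims=True) + 1e-12)
    return d @ q

def pseudo_relevance_feedback(query, doc_matrix, m=5, alpha=1.0, beta=0.75):
    """One round of pseudo relevance feedback: retrieve, assume the top m
    documents are relevant, do a positive-only Rocchio step, and re-rank."""
    scores = cosine_scores(query, doc_matrix)
    top_m = np.argsort(-scores)[:m]            # assume these are relevant
    centroid = doc_matrix[top_m].mean(axis=0)  # centroid of assumed-relevant docs
    expanded = alpha * query + beta * centroid # expanded query vector
    return cosine_scores(expanded, doc_matrix) # new ranking scores
```

Because the "relevant" set is only assumed, repeated rounds can drift away from the original information need, which is the query-drift danger noted above.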