

Computer Science English Materials Compilation (Reference Version)


Yuan Jiang et al.: Software Defect Detection with Rocus. J. Comput. Sci. & Technol., Mar. 2011 (excerpt, p. 333).

[...] C classifiers {h_1, h_2, ..., h_C} [are first constructed]. The classifiers select some examples in U to label according to a disagreement level, and then teach the other classifiers. The classifiers H_{-i} = {h_1, ..., h_{i-1}, h_{i+1}, ..., h_C} are responsible for selecting confidently labeled examples for classifier h_i. Given an unlabeled example, we estimate its labeling confidence using the degree of agreement on the current labeling among these C - 1 individual classifiers, and select the example [if its labeling confi]dence is greater than that of a preset threshold θ. Inspired by [27], we associate a weight (between 0 and 1) with each unlabeled example according to its labeling confidence, [so that less] confidently labeled examples will be reduced during the classifier refinement. To unify the representation, the weight of a labeled example is [set to 1].

Although the classifiers can provide accurate prediction for each selected example, the sensitivity of the current classifier [to the minority class may still be harmed by] refinement with these newly labeled examples. Since U itself is imbalanced, h_i would still lose its sensitivity to the minority class after learning many newly labeled majority-class examples in U. Here, we employ undersampling[11], an effective [technique for learning from imbalanced data]: before refinement, undersampling is employed to tailor the newly labeled set such that its minority-majority ratio is roughly δ. Specifically, let ~L_{i,t} denote the newly labeled set in the t-th round, where the total weights of the minority-class examples in ~L_{i,t} are p_{i,t} [...]. Here, δ specifies the expected ratio between the minority class and the majority class and is usually [fixed in advance].

Misclassifications of unlabeled examples can greatly increase the noise rate of the newly labeled sets, and learning on a noisy dataset may humble the performance of the resulting classifier. The relationship between the classifier's worst-case error and the noise rate has been studied in [49], and has been applied to deriving the stopping criterion for some disagreement-based semi-supervised learning methods[37-38]. Let ê_{i,t} denote the estimated error rate of H_{-i}, i.e., the ensemble of all classifiers excluding h_i, and let W_0 and W_{i,t} denote the total weights of examples in L and L'_{i,t}, respectively. Assume that the noise rate of the original labeled set L is very small; the noise rate in the augmented training set, i.e., L ∪ L'_{i,t}, can then be estimated by

    η_{i,t} = ê_{i,t} · W_{i,t} / (W_0 + W_{i,t}).

The utility of refining h_i in the t-th iteration can be estimated by

    u_{i,t} = (W_0 + W_{i,t}) · (1 - 2η_{i,t})^2,

i.e., the (weighted) number of examples in the augmented training set, (W_0 + W_{i,t}), [discounted by the noise]; u_{i,t} is inverse-proportional to the square of the worst-case [error of the] classifier. If u_{i,t} > u_{i,t-1} in the succeeding rounds, the performance of h_i will be improved after refinement. Comparing the t-th round with the (t - 1)-th round, u_{i,t} > u_{i,t-1} holds when

    ê_{i,t} · W_{i,t} < ê_{i,t-1} · W_{i,t-1}.    (3)

However, W_{i,t} may be much greater than W_{i,t-1}. To make (3) hold again, we randomly discard some examples of both the minority class and the majority class according to δ [...]; the benefit from balancing the training data might be counteracted by discarding many reliably labeled data. However, in this case, the augmented training set for classifier refinement may be slightly imbalanced. In order to compensate for the effect, we [build] each classifier h_i using the minority-majority ratio of the current training examples r_i and then combine them for [prediction]:

    H(x) = +1 if Σ_{i=1..C} s(+1 | h_i(x)) ≥ Σ_{i=1..C} s(-1 | h_i(x)), and -1 otherwise,    (4)

where s(y | h(x)) ∈ [0, 1] [...].

As pointed out by Li and Zhou[37], such a majority-teach-one style process may gradually reduce the diversity between the individual classifiers. Although the performance of the individual classifiers can be improved through the semi-supervised learning process, the performance of the ensemble may not be improved, or may even degrade, due to the rapid decrease of diversity. The reason is that the "teachers" of two individual classifiers [largely overlap]: if the newly labeled examples are similar, the refined classifiers would be similar, [which in turn makes the classifi]ers more similar. Following the suggestion of Li and Zhou[37], we inject a certain amount of randomness into the base learner, such that even if the newly labeled examples are similar, the learned classifiers [remain] different. Here, we call the ensemble of randomized classifiers [the random committee]. Specifically, we project the data onto a set of randomly generated unit vectors and construct a classifier [on the projected data]. The dimensionality of the new space is usually smaller than the original one in order to achieve further diversity between different [classifiers].

[We fi]rst construct C classifiers [with this randomized base learn]er. In each semi-supervised learning iteration, each classifier [is refi]ned using the newly labeled examples selected by H_{-i}. The main steps of the algorithm are as follows [...]:

    Input: L; U; C, the number of individual classifiers; θ, the confidence threshold; δ [...]
    [...]
    5. Estimate the error ê_{i,t} [of H_{-i}] on L;
    6. Label all the unlabeled examples with H_{-i};
    7. [Select the examples whose labeling confi]dence exceeds threshold θ [into L'_{i,t}];
    8. Undersample L'_{i,t};
    9. If (3) holds, retrain h_i from L ∪ L'_{i,t};
    until none of the classifiers [changes];
    Output: H(x) according to (4).

[One may instead try to handle insuffi]cient labeled data and data imbalance simultaneously by imposing a "class proportion" constraint over a special type of base learner, which can adjust the portion of labeling of unlabeled data according to the constraint, just as what TSVM[31] does. However, such a strategy may exclude many good candidate base learners that have good performance over some particular defect detection problems but fail to adjust their labeling according to the constraints. In contrast, by incorporating undersampling, disagreement-based semi-supervised learning can be easily adapted to the exploitation of unlabeled data while the data are imbalanced. Since the requirement on the base learner in Rocus is no more than the ability of injecting randomness, which can be easily achieved, we may choose different [base learners for a specifi]c application scenario, and hence the applicability of Rocus will be better.

4 Empirical Studies

We evaluate the eff[ectiveness of Rocus on data sets from] different software projects in the NASA Metrics Data Program[52]. Some of these software projects are developed for satellite [...]. For example, if a training set consisting of 1000 examples and [...]. The experiments are conducted under different labeled rates: 10%, 20%, 30% and 40%. In the experiments, the randomized base learner of Rocus is instantiated as an AdaBoost[53] preceded by a random projector. The random projector [...]. We fix the size of the random committee to C = 6, and the confidence threshold θ is set to [...], which indicates that an unlabeled example is regarded to be confidently labeled [only when enough memb]ers of the random committee agree on its labeling.
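The committee-agreement confidence estimate and the undersampling step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names are assumptions, hard +1/-1 votes stand in for the paper's scores s(y|h(x)), and the example weights are omitted.

```python
import numpy as np

def committee_confidence(votes):
    """votes: (m, n) array of +/-1 labels, one row per member of the
    concomitant ensemble H_{-i}. Returns, per example, the majority
    label and the degree of agreement (fraction voting for it)."""
    support_pos = (votes == 1).mean(axis=0)
    labels = np.where(support_pos >= 0.5, 1, -1)
    agreement = np.where(support_pos >= 0.5, support_pos, 1.0 - support_pos)
    return labels, agreement

def select_confident(votes, theta):
    """Keep only the examples whose agreement exceeds the threshold theta."""
    labels, agreement = committee_confidence(votes)
    mask = agreement > theta
    return labels[mask], np.flatnonzero(mask)

def undersample(labels, delta, rng):
    """Randomly discard majority-class examples so that the kept set's
    minority-majority ratio is roughly delta (delta > 0 assumed)."""
    minority = 1 if (labels == 1).sum() <= (labels == -1).sum() else -1
    idx_min = np.flatnonzero(labels == minority)
    idx_maj = np.flatnonzero(labels != minority)
    n_keep = min(idx_maj.size, int(round(idx_min.size / delta)))
    kept_maj = rng.choice(idx_maj, size=n_keep, replace=False)
    return np.sort(np.concatenate([idx_min, kept_maj]))
```

In the full method these helpers would run once per committee member and per round, with the selected indices forming L'_{i,t}.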
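The noise-rate estimate, the utility function, and condition (3) above are plain arithmetic; a minimal sketch under the excerpt's definitions (the function names are assumptions):

```python
def noise_rate(e_hat, w0, w_t):
    # eta_{i,t} = e_hat_{i,t} * W_{i,t} / (W_0 + W_{i,t}):
    # estimated noise of L u L'_{i,t}, assuming L is nearly noise-free.
    return e_hat * w_t / (w0 + w_t)

def utility(e_hat, w0, w_t):
    # u_{i,t} = (W_0 + W_{i,t}) * (1 - 2 * eta_{i,t})^2
    return (w0 + w_t) * (1.0 - 2.0 * noise_rate(e_hat, w0, w_t)) ** 2

def refinement_helps(e_t, w_t, e_prev, w_prev):
    # Condition (3): refine h_i only if the weighted error shrinks,
    # i.e., e_hat_{i,t} * W_{i,t} < e_hat_{i,t-1} * W_{i,t-1}.
    return e_t * w_t < e_prev * w_prev
```

Note that with a noise-free newly labeled set (e_hat = 0) the utility reduces to the weighted training-set size W_0 + W_{i,t}, matching the text's reading of u_{i,t}.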
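The experiments instantiate the base learner as AdaBoost preceded by a random projector. A sketch of such a projector, onto randomly generated unit vectors as the text describes (the Gaussian draw used to generate the vectors is an assumption):

```python
import numpy as np

def random_projector(dim_in, dim_out, rng):
    """dim_out randomly generated unit vectors in R^dim_in; projecting
    onto them gives a lower-dimensional, randomized view of the data
    (dim_out < dim_in, for extra diversity between committee members)."""
    vecs = rng.normal(size=(dim_in, dim_out))
    return vecs / np.linalg.norm(vecs, axis=0)

def project(X, vecs):
    """Map each row of X into the random subspace."""
    return X @ vecs
```

Each committee member would draw its own projector and then train its base learner (e.g., an AdaBoost classifier) on project(X, vecs); only the pairing with AdaBoost comes from the text.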