
Computer Science Graduation Project Literature Translation - Other Majors (Revised)

Chapter 6: Large Scale Problems

In this chapter we discuss a number of methods for solving function estimation and classification problems with LS-SVMs in large data set settings. We explain the Nyström method as proposed in the context of Gaussian processes, and incomplete Cholesky factorization for low rank approximation. Then a new technique of Fixed Size LS-SVM is presented. In this Fixed Size LS-SVM method one solves the primal problem instead of the dual, after estimating the map to the feature space φ based upon the eigenfunctions obtained from kernel PCA, which is explained in more detail in the next chapter. This method gives explicit links between function estimation and density estimation, exploits the primal-dual formulations, and addresses the problem of how to actively select suitable support vectors instead of taking random points as in the Nyström method. Next we explain methods that aim at constructing a suitable basis in the feature space. Furthermore, approaches for combining submodels are discussed, such as committee networks and nonlinear and multilayer extensions of this approach.

Low rank approximation methods

Nyström method

Suppose one takes a linear kernel. We already mentioned that one can then in fact equally well solve the primal problem as the dual problem. Solving the primal problem is more advantageous for larger data sets, while solving the dual problem is more suitable for large dimensional input spaces, because the unknowns are w ∈ R^n and α ∈ R^N, respectively, where n denotes the dimension of the input space and N the number of given training data points.

[Figure: For linear support vector machines the dual problem is suitable for solving problems with large dimensional input spaces, while the primal problem is convenient towards large data sets. However, for nonlinear SVMs one has no explicit expression for φ(x); as a result one can only solve the dual problem in terms of the related kernel function. In the method of Fixed Size LS-SVM the Nyström method is used to estimate eigenfunctions; after obtaining estimates for φ(x) and linking primal-dual formulations, the computation of w, b is done in the primal space.]

For example, in the linear function estimation case one has, after elimination of the error variables e_k, the problem

\min_{w,b} \; \frac{1}{2} w^T w + \frac{\gamma}{2} \sum_{k=1}^{N} (y_k - w^T x_k - b)^2

which one can solve immediately. In this case the mapping φ becomes φ(x_k) = x_k and there is no need to solve the dual problem in the support values α, certainly not for large data sets. For the nonlinear case, on the other hand, the situation is much more complicated: for many choices of the kernel, φ(·) may become infinite dimensional and hence also the w vector. However, one may still try in this case to find meaningful estimates for φ(x_k). A procedure that finds such estimates implicitly is the Nyström method, which is well known in the area of integral equations [14; 63] and has been successfully applied in the context of Gaussian processes by Williams & Seeger in [294]. The method is related to finding a low rank approximation to the given kernel matrix by randomly choosing M rows/columns of that matrix. Let us denote the big kernel matrix by Ω^(N,N) and the small kernel matrix based on the random subsample by Ω^(M,M), with M < N (in practice often M ≪ N). Consider the eigenvalue decomposition of the small kernel matrix:

\Omega^{(M,M)} U = U \Lambda

where Λ contains the eigenvalues and U the corresponding eigenvectors. This is related to the eigenfunctions φ_i and eigenvalues λ_i of the integral equation

\int K(x, x') \, \phi_i(x) \, p(x) \, dx = \lambda_i \phi_i(x')

as follows:

\hat{\lambda}_i = \frac{1}{M} \lambda_i^{(M)}, \qquad \hat{\phi}_i(x_k) = \sqrt{M} \, u_{ki}

where λ̂_i and φ̂_i are estimates to λ_i and φ_i, respectively, for the integral equation, and u_ki denotes the ki-th entry of the matrix U.
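The following minimal NumPy sketch illustrates the procedure just described: eigendecompose the small subsampled kernel matrix, rescale to obtain the eigenfunction and eigenvalue estimates, and form the low rank approximation of the big kernel matrix. It is an illustration under our own assumptions, not code from the original text: the RBF kernel choice and all names (rbf_kernel, nystrom_approx, Omega_MM, ...) are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix: K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def nystrom_approx(X, M, sigma=1.0, seed=0):
    """Nystrom estimates based on a random subsample of size M << N."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    idx = rng.choice(N, size=M, replace=False)       # random M rows/columns
    Omega_MM = rbf_kernel(X[idx], X[idx], sigma)     # small M x M kernel matrix
    Omega_NM = rbf_kernel(X, X[idx], sigma)          # N x M block of the big matrix
    lam, U = np.linalg.eigh(Omega_MM)                # eigenvalue decomposition
    lam, U = lam[::-1], U[:, ::-1]                   # sort eigenvalues descending
    keep = lam > 1e-12 * lam[0]                      # drop numerically zero modes
    lam, U = lam[keep], U[:, keep]
    lam_hat = lam / M                    # estimates of the integral-equation eigenvalues
    phi_hat = np.sqrt(M) * U             # estimated eigenfunction values at the M points
    lam_big = (N / M) * lam              # approximate eigenvalues of Omega(N,N)
    U_big = np.sqrt(M / N) * (Omega_NM @ U) / lam    # approximate eigenvectors of Omega(N,N)
    # Low rank approximation Omega_hat = Omega(N,M) Omega(M,M)^{-1} Omega(M,N):
    Omega_hat = (Omega_NM @ U / lam) @ (Omega_NM @ U).T
    return lam_hat, phi_hat, lam_big, U_big, Omega_hat
```

Note that Omega_hat is formed here only to make the low rank structure explicit; in practice one keeps the factors and never builds the full N x N matrix.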
This can be understood from sampling the integral by the M points x_1, x_2, ..., x_M. For the big kernel matrix one has the eigenvalue decomposition

\Omega^{(N,N)} \tilde{U} = \tilde{U} \tilde{\Lambda}

Furthermore, as explained in [294], one has

\tilde{\lambda}_i \simeq \frac{N}{M} \lambda_i^{(M)}, \qquad \tilde{u}_i \simeq \sqrt{\frac{M}{N}} \, \frac{1}{\lambda_i^{(M)}} \, \Omega^{(N,M)} u_i

One can then show that

\hat{\Omega}^{(N,N)} = \Omega^{(N,M)} \, (\Omega^{(M,M)})^{-1} \, \Omega^{(M,N)}

where Ω^(N,M) is the N x M block matrix taken from Ω^(N,N). These insights are then used for solving, in an approximate sense, the linear system

(\Omega + I/\gamma) \, \alpha = y

without bias term in the model, as considered in Gaussian process regression problems. By applying the Sherman-Morrison-Woodbury formula [98] one obtains [294]

\hat{\alpha} = \gamma \left( y - \tilde{U} \left( \frac{1}{\gamma} I + \tilde{\Lambda} \tilde{U}^T \tilde{U} \right)^{-1} \tilde{\Lambda} \tilde{U}^T y \right)

where the λ̃_i, ũ_i are calculated as above, based upon the eigenvalue decomposition of the small matrix (a code sketch of this approximate solve is given below). In LS-SVM classification and regression one usually considers a bias term, which leads to centering of the kernel matrix; for application of the Nyström method, the eigenvalue decomposition of the centered kernel matrix is then taken. Further characterizations of the error of approximations to a kernel matrix have been investigated in [206; 226].

The Nyström method has been applied to the Bayesian LS-SVM framework at the second level of inference, while solving the level 1 problems without the Nyström approximation by the conjugate gradient method [273]. This has been illustrated on three data sets: cra (Leptograpsus crab), rsy (Ripley synthetic data) and hea (heart disease), according to [273]. In [273] it has also been shown that for larger data sets, such as the UCI adult data set, a successful approximation by the Nyström method can be made at the second level of inference by taking a subsample of 100 data points to determine the hyperparameters, instead of using the whole training set of size 33000.

Incomplete Cholesky factorization

The Sherman-Morrison-Woodbury formula used within the Nyström method has also been widely applied in the context of interior point algorithms for linear programming. However, as pointed out by Fine & Scheinberg in [79], it may lead to numerical difficulties. In [79] it is illustrated with a simple example how the computed solutions may not even be close to the correct solution when the Woodbury formula is applied and the limited machine precision is taken into account: small numerical perturbations due to limited machine precision may cause instabilities that result in computed solutions far from the true solution. A sketch of a pivoted incomplete Cholesky factorization, which works directly with a low rank factor instead, follows below.
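Continuing the Nyström sketch from above, the Sherman-Morrison-Woodbury step reduces the N x N solve to a small k x k system built from the approximate eigenpairs. Again a hedged illustration with assumed names; the toy data at the end is synthetic and assumes nystrom_approx from the earlier sketch is in scope.

```python
import numpy as np

def nystrom_woodbury_solve(U_big, lam_big, y, gamma):
    """Approximately solve (Omega + I/gamma) alpha = y, with
    Omega ~ U_big diag(lam_big) U_big^T from the Nystrom approximation.
    Only a k x k system is solved instead of an N x N one."""
    k = lam_big.shape[0]
    G = np.eye(k) / gamma + lam_big[:, None] * (U_big.T @ U_big)  # (1/gamma) I + Lambda U^T U
    inner = np.linalg.solve(G, lam_big * (U_big.T @ y))           # small dense solve
    return gamma * (y - U_big @ inner)

# Synthetic usage example (assumes nystrom_approx from the earlier sketch):
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=2000)
lam_hat, phi_hat, lam_big, U_big, _ = nystrom_approx(X, M=100)
alpha = nystrom_woodbury_solve(U_big, lam_big, y, gamma=10.0)
```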
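As a numerically safer low rank alternative in the spirit of Fine & Scheinberg [79], here is a minimal sketch of a pivoted incomplete Cholesky factorization. It never forms the full kernel matrix: only its diagonal and the pivot columns are accessed, and it stops once the trace of the residual falls below a tolerance. The function names and the stopping rule are our assumptions, not the exact algorithm of [79].

```python
import numpy as np

def incomplete_cholesky(kernel_col, diag, tol=1e-6, max_rank=None):
    """Pivoted incomplete Cholesky: find G (N x k) with K ~ G G^T.
    kernel_col(i) must return column i of the kernel matrix K;
    diag holds the diagonal of K (all ones for an RBF kernel)."""
    N = diag.shape[0]
    max_rank = max_rank if max_rank is not None else N
    d = diag.astype(float).copy()          # diagonal of the residual K - G G^T
    G = np.zeros((N, max_rank))
    pivots = []
    for k in range(max_rank):
        if d.sum() < tol:                  # residual trace small enough: stop early
            break
        i = int(np.argmax(d))              # greedy pivot: largest residual diagonal
        pivots.append(i)
        col = kernel_col(i) - G[:, :k] @ G[i, :k]
        G[:, k] = col / np.sqrt(d[i])
        d -= G[:, k] ** 2                  # update residual diagonal
        d[i] = 0.0                         # pivot row is now exactly represented
    return G[:, :len(pivots)], pivots
```

With the rbf_kernel from the earlier sketch one could call this as incomplete_cholesky(lambda i: rbf_kernel(X, X[i:i+1]).ravel(), np.ones(len(X))), since the RBF kernel has unit diagonal.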