
Dynamically Determining Neighborhood Parameter for Locally Linear Embedding

Abstract:
Locally linear embedding is a very competitive kind of nonlinear dimensionality reduction, with good representational capacity for a broad range of manifolds and high computational efficiency. However, such methods assume that the data manifold is evenly distributed and therefore determine the neighborhood of every point with the same neighborhood size, so they handle poorly the unevenly distributed data found in most real problems. This paper presents a new approach that keeps the general conceptual framework of Hessian locally linear embedding, so that correctness is guaranteed under local isometry to an open connected subset, but determines the local neighborhood size of each point dynamically. The approach estimates the approximate geodesic distance between any two points by the shortest path in the local neighborhood graph, and then determines the neighborhood size of each point from the relationship between its local estimated geodesic distance matrix and its local Euclidean distance matrix. The approach has clear geometric intuition as well as better performance and stability on sparsely sampled or noise-contaminated data sets, which are often unevenly distributed. Experiments on benchmark data sets validate the proposed approach.

Key words:  manifold learning; Hessian locally linear embedding; neighborhood size; dimensionality reduction

 
1 Introduction

Many kinds of high-dimensional data, such as remote sensing and climate observations, actually lie on low-dimensional manifolds. Since the two most representative methods, ISOMAP (isometric feature mapping) and LLE (locally linear embedding), were published in Science in 2000 [1,2], finding a low-dimensional parameter space that describes such manifolds has become an active research topic. ISOMAP computes the geodesic distance between every pair of points and then applies MDS (multi-dimensional scaling), so its embedding preserves the global geometry and gives good results; many improved variants have been developed, such as kernel-based ISOMAP, supervised ISOMAP [3], and incremental ISOMAP [4]. LLE keeps the local geometry unchanged during the embedding, avoids local minima, and finally obtains a global low-dimensional embedding with good results. Its variants include HLLE (Hessian LLE), which improves LLE by using the Hessian [5], supervised LLE, which exploits class information, incremental LLE [6], and Fisher-improved LLE [7]. Domestic researchers have also carried out deeper theoretical studies and practical applications [8], for example an existence proof that ISOMAP manifolds admit isometric mappings onto a continuous low-dimensional parameter space [9], and an investigation of the link between high-dimensional observations and the low-dimensional parameter space based on principal spread directions and magnification factors [10]. The basic assumptions of ISOMAP, a globally isometric mapping and a convex parameter space, are hard to satisfy in many cases. HLLE only requires a locally isometric mapping onto an open connected parameter space, so it applies more widely, but, like ISOMAP, it depends heavily on whether the local neighborhood correctly reflects the intrinsic structure of the manifold. The existing k-nearest-neighbor rule easily distorts the neighborhood structure of noisy or sparse data and may cause short circuits [11]. A short circuit occurs when folded sheets of the manifold come close to each other, so that some points take neighbors from a different sheet that are not their nearest neighbors on the manifold; this often causes large variations in performance and calls for neighborhood optimization. Existing neighborhood optimization methods include repeatedly extracting minimum spanning trees from the complete graph to construct a connected neighborhood graph [12], which guarantees that no relative position between data points is lost after dimensionality reduction, and redefining the distance with class information and then using the new distance to choose neighbors [3]; the drawback of the latter is that it is meaningless for data without class information. There is also research on selecting the optimal neighborhood size through residuals and the reconstruction of the linear coefficients [13-15], but once determined, the neighborhood size is still the same for every point. Another method chooses an initial neighborhood, uses PCA (principal component analysis) to construct the principal subspace of this neighborhood, and deletes neighborhood points that deviate from it [16]; when the neighborhood is essentially nonlinear this method may not apply, and its many parameters make it harder to use.
 
Our previous work used clustering techniques to cluster the data automatically and then improved the neighborhoods in a supervised way [17], and used graph algebra to optimize the neighborhoods [18], but the neighborhood size was still globally uniform. HLLE requires each local region to be nearly linear; when the data manifold is non-uniformly distributed, a single global neighborhood size can hardly satisfy this requirement. If the neighborhood parameter is too large, small-scale structures of the manifold are smoothed away and short circuits become unavoidable; if it is too small, the manifold is easily split into pieces [19]. We therefore proposed recursively decomposing a non-uniformly distributed manifold into approximately uniformly distributed sub-manifolds, computing a neighborhood size for each sub-manifold automatically, and thereby improving LLE [20]. However, that method must compute the geodesic distance between all pairs of points, so its time complexity is too high, close to O(|X|³), and the performance of LLE is inferior to HLLE. In this paper we instead compute the approximate geodesic distances only between each point and the points in its vicinity, use them to determine the neighborhood size of that point, and then propose VK-HLLE (variable-k Hessian locally linear embedding), a Hessian locally linear embedding algorithm with dynamically changing neighborhood sizes, which improves the performance significantly without increasing the time complexity.
 
2 Hessian Locally Linear Embedding
Assume there is a parameter space Θ ⊂ R^d and a smooth map φ: Θ → R^n, where the embedding space R^n satisfies n > d; M = φ(Θ) is called a manifold, and the goal of manifold learning is to recover the parameter space Θ from the observed data. ISOMAP achieves manifold learning by isometric feature mapping, under two basic assumptions: ① Isometry: the geodesic distance is invariant under the isometric embedding. For any two points on the manifold, the geodesic distance equals the Euclidean distance of their parameters, G(x, y) = |α − β| for x, y ∈ M with x = φ(α), y = φ(β), α, β ∈ Θ. ② Convexity: the parameter space Θ is convex, that is, for any α, β ∈ Θ, the segment {(1 − t)α + tβ : t ∈ (0, 1)} still belongs to Θ. ISOMAP exploits the isometric invariance to estimate geodesic distances from the observed data without any other knowledge: when two points are very close, the geodesic distance is taken to be the Euclidean distance, while the geodesic distance between distant points is accumulated along neighboring points. When the observation set is sufficiently dense and the intrinsic parameter space is convex, ISOMAP can successfully recover the parameter space. The problem ISOMAP faces is that in many cases the parameter space is not convex, and then the correct result cannot be obtained.
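To make the accumulation step concrete, the following is a minimal sketch (our illustration, not the original authors' code) of estimating geodesic distances from a k-nearest-neighbor graph, assuming scikit-learn and SciPy are available:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def estimate_geodesic_distances(X, k):
    """Approximate pairwise geodesic distances on the manifold sampled by X.

    Close points: geodesic distance ~ Euclidean distance (edges of the k-NN graph).
    Distant points: geodesic distance accumulated along shortest paths in the graph.
    """
    # Weighted k-nearest-neighbor graph; edge weights are Euclidean distances.
    knn = kneighbors_graph(X, n_neighbors=k, mode='distance')
    # Shortest-path distances over the (symmetrized) neighborhood graph.
    return shortest_path(knn, method='D', directed=False)
```

On a sufficiently dense sample this yields the kind of distance matrix ISOMAP feeds to MDS; disconnected components show up as infinite entries.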
 
HLLE achieves manifold learning through local linearity. It only requires the parameter space to be an open connected subset onto which the manifold is locally isometric; convexity is not needed. Its theoretical basis is the Hessian transform on the tangent spaces of the manifold. Assume the manifold M ⊂ R^n is smooth. Every point m ∈ M has a tangent space Γm(M); introducing the Euclidean inner product into this space creates a local coordinate system with origin O ∈ Γm(M). Let Nm be a neighborhood of m ∈ M; every m′ ∈ Nm has a unique closest point v′ ∈ Γm(M), and the mapping m′ → v′ is smooth, so Nm has a local coordinate system. Let f: M → R be C² smooth near m, and let g: U → R, with U ⊂ R^d a neighborhood of the origin O, be defined by g(x) = f(m′), where x is the local coordinate of m′. Because m′ → x is smooth, g is C² smooth, and the Hessian transform of f at m is

(H_f(m))_{i,j} = ∂²g(x) / (∂x_i ∂x_j) |_{x=0}.

The quadratic functional

H(f) = ∫_M ||H_f(m)||²_F dm

defines the average bending rate of f: M → R over M, where ||·||_F is the Frobenius norm. It can be shown that if the parameter space Θ is an open connected subset, then H(f) has a (d + 1)-dimensional null space, and an appropriate basis of this null space yields the coordinates of the parameter space; this is the core of HLLE. Three observations follow: ① the framework of HLLE is consistent with that of LLE; ② HLLE needs to estimate second-order partial derivatives for every data point, which is not a small amount of computation when the dimension of the observed data is very high; ③ HLLE requires the neighborhood of every point to be linear, so when a neighborhood is highly curved it easily suffers from the short-circuit problem. This last point is the problem to be solved in this paper.
 
3 Dynamically Determining Neighborhood Parameters
Existing locally linear embedding algorithms use a single global neighborhood parameter and therefore cannot handle the many non-uniform manifolds found in practice. According to the core idea of locally linear embedding, as long as the neighborhood points lie in a linear subspace, the neighborhood should contain as many points as possible, that is, a large neighborhood parameter should be taken, as for point y in Fig. 1. Conversely, when different folded sheets of the manifold come close to each other, a smaller neighborhood parameter is needed, as for point x in Fig. 1(b); otherwise the short-circuit problem arises, as for point x in Fig. 1(a), because on the manifold u is not a nearest neighbor of x, while v is actually nearer to x along the manifold. Obviously, when the data manifold is non-uniformly distributed, the two cases above conflict, and the only solution is to determine the neighborhood dynamically according to the structure of the manifold: a data point located on a strongly curved part of the manifold, such as point x in Fig. 1(b), should take a smaller neighborhood parameter, while other points, such as point y, can take a larger one. The key, therefore, is how to measure the local curvature of the data manifold and how to relate it to the neighborhood size.
 

 
Our approach uses the ratio of Euclidean distance to geodesic distance to determine the neighborhood size of each point dynamically; its geometric meaning is shown in Fig. 2. The geodesic distance between A and B is lAB, the length of the curve AEB, and their Euclidean distance is dAB, the length of the segment AB. Obviously dAB / lAB < dCD / lCD, and the curve AEB is more strongly curved than the curve CFD. Therefore, the smaller the ratio of Euclidean distance to geodesic distance, the more curved the manifold is locally between the two points and the smaller the neighborhood parameter should be, and vice versa. Determining the neighborhood parameter in this way requires geodesic distances; computing them directly over all input data has a time complexity close to O(|X|³), which is too high. We therefore change the strategy and compute only the approximate geodesic distances between each point and the points in its vicinity, and use them to determine the neighborhood parameter of that point. This leads to VK-HLLE, a Hessian locally linear embedding algorithm with dynamically changing neighborhood parameters, which improves the performance significantly without increasing the time complexity.
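As a toy numerical illustration of this ratio (not part of the algorithm itself), consider chords of a unit circle: the larger the arc between two points, the more curved the path between them and the smaller the chord-to-arc ratio, mirroring dAB / lAB < dCD / lCD in Fig. 2:

```python
import numpy as np

# Chord/arc ratio on a unit circle: an arc subtending angle theta has
# arc length = theta and chord length = 2*sin(theta/2).
for theta in (0.1, 0.5, 1.0, 2.0, 3.0):
    chord = 2 * np.sin(theta / 2)
    print(f"theta={theta:.1f}  chord/arc={chord / theta:.3f}")
# The ratio falls from ~1.000 toward ~0.66 as the arc becomes more curved.
```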
 

Algorithm 1. Compute_Nbr_Sizes(X, k): determine the neighborhood parameter of each point.
/* Input: X is the set of high-dimensional observations, k is the initial neighborhood size; Output: the neighborhood parameter ki of each point xi ∈ X */
1)      For every point xi ∈ X, compute its k-neighborhood by Euclidean distance; these points constitute the local data set Xi.
2)      Use the ISOMAP method to compute the local geodesic distance between any two points of the local data set Xi, in two steps:
• According to Xi and k, determine the k-neighborhood of each point and construct the weighted graph G = (V, E): V corresponds to the points of Xi, E is the set of edges connecting pairs of points in V, and (xi, xj) ∈ E if xj is among the k nearest neighbors of xi, with edge weight equal to the Euclidean distance de(xi, xj).
• Estimate the geodesic distance dg(xi, xj) between all points of the local manifold formed by Xi as the shortest-path distance in G: initialize dg(xi, xj) = de(xi, xj) if (xi, xj) ∈ E and dg(xi, xj) = ∞ otherwise, then iterate over all t, updating dg(xi, xj) = min{dg(xi, xj), dg(xi, xt) + dg(xt, xj)}.
3)      Compute the ratio of the sum of Euclidean distances to the sum of local geodesic distances over all pairs of points in Xi, and take it as the local measure λi of xi: λi = Σ de(xj, xl) / Σ dg(xj, xl), where both sums run over all point pairs xj, xl ∈ Xi.
4)      Compute the neighborhood parameter ki of xi. The basic idea is that a point whose λi equals the mean of λ over all data points takes the initial neighborhood parameter k, and the other points are adjusted around this center k according to their λi.
 
Step 1 of the algorithm uses the initial Euclidean distance de to determine the initial neighborhood, just as all locally linear embedding algorithms do. Step 2 computes geodesic distances only between the points of a k-neighborhood, and k is a constant, so its time complexity is O(N); the complexities of Steps 3 and 4 are also O(N). The algorithm therefore adds only O(N) to the time complexity. Moreover, the dynamically changing neighborhood parameters optimize the neighborhood structure and thereby speed up the subsequent embedding, so the overall time complexity does not increase, as the later experiments confirm. In addition, the initial neighborhood size k may affect the local geodesic distance estimates: if k is made too small, the neighborhood graph easily becomes disconnected, which biases the local geodesic distance estimation and makes the computed local neighborhood parameters inaccurate. In that case the method of [12] can be used to construct a connected neighborhood graph, so that the algorithm can run successfully in any case. With Compute_Nbr_Sizes(X, k) in hand, the improved algorithm VK-HLLE keeps the clarity of HLLE. The complete VK-HLLE algorithm is as follows.
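The following Python sketch outlines Algorithm 1 under stated assumptions: it uses scikit-learn and SciPy for the local graphs and shortest paths, and the final rescaling rule ki = round(k·λi/mean(λ)) is our own reading of Step 4, since the paper only states that points with the average λ keep k while the others are adjusted around it.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors, kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def compute_nbr_sizes(X, k):
    """Sketch of Compute_Nbr_Sizes(X, k): one neighborhood size per point."""
    n = X.shape[0]
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)             # Step 1: k-neighborhood (idx[:, 0] is the point itself)
    lam = np.empty(n)
    for i in range(n):
        Xi = X[idx[i]]                      # local data set X_i
        # Step 2: local geodesic distances via shortest paths on the local k-NN graph
        g = kneighbors_graph(Xi, n_neighbors=min(k, len(Xi) - 1), mode='distance')
        dg = shortest_path(g, directed=False)
        de = np.linalg.norm(Xi[:, None] - Xi[None, :], axis=-1)
        finite = np.isfinite(dg)
        # Step 3: ratio of summed Euclidean to summed local geodesic distances
        lam[i] = de[finite].sum() / dg[finite].sum()
    # Step 4 (assumed rescaling rule): points with average lambda keep k,
    # flatter regions (larger lambda) get more neighbors, curved regions fewer.
    ki = np.clip(np.rint(k * lam / lam.mean()).astype(int), 2, n - 1)
    return ki
```

The clipping merely keeps every ki in a valid range; the exact adjustment rule used by the authors may differ.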
 
Algorithm 2. VK-HLLE(X, k, d).
/* Input: X is the set of high-dimensional observations, k is the initial neighborhood size, d is the dimension of the low-dimensional parameter space; Output: the low-dimensional parameter space W */
1) Use Compute_Nbr_Sizes(X, k) to compute the neighborhood parameter ki of each point xi, and correct the k-neighborhood of xi into a ki-neighborhood according to the Euclidean distances and ki. Express the ki-neighborhood, centered on its mean, as a ki × n matrix Mi whose rows are the neighborhood points.
2) Apply singular value decomposition to each neighborhood matrix Mi and take the first d singular vectors as the tangent-space coordinates V of the neighborhood points.
3) Build the Hessian estimator in the tangent space. When d = 2, form from the tangent coordinates the matrix Xi = [1, V·,1, V·,2, (V·,1)², (V·,2)², V·,1 ⊙ V·,2], where V·,1 denotes the first tangent coordinate of all points in the neighborhood and ⊙ is the elementwise product. For d > 2, form in the same way a matrix with 1 + d + d(d + 1)/2 columns. Then apply Gram-Schmidt orthogonalization to Xi to obtain a new orthogonal matrix, and take the transpose of its last d(d + 1)/2 columns as the local Hessian matrix Hi.
4) Construct the quadratic form H = (Hij), with Hij = Σl Σr (Hl)r,i (Hl)r,j, which characterizes H(f), and compute the (d + 1)-dimensional subspace corresponding to its d + 1 smallest eigenvalues. The eigenvector of eigenvalue 0 corresponds to the constant function; the next d eigenvectors span a d-dimensional space, and choosing an orthogonal basis of this space and transforming it recovers the parameter space W.
 
The first step of VK-HLLE uses Compute_Nbr_Sizes(X, k) to compute, for every point xi, the new neighborhood parameter ki and a new neighborhood; the remaining steps are the same as in HLLE, so the detailed mathematical derivation of these steps can be found in the HLLE paper [5]. Like HLLE, VK-HLLE only needs to solve a sparse eigenproblem assembled from N local ki × ki blocks, whereas ISOMAP must solve one dense N × N eigenproblem, so when N is large VK-HLLE has a clear time advantage over ISOMAP. The experimental results below show that the dimensionality reduction performance of VK-HLLE is better than that of HLLE, ISOMAP, and LLE.
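For concreteness, here is a sketch of the per-neighborhood tangent and Hessian estimation of Steps 2) and 3), following the construction described in the HLLE paper [5]; the variable names and the use of QR for the Gram-Schmidt step are ours.

```python
import numpy as np

def local_hessian_estimator(Mi, d):
    """Tangent coordinates and Hessian estimator for one centered neighborhood.

    Mi : (ki, n) matrix of neighborhood points with their mean subtracted;
         ki must be at least 1 + d + d(d+1)/2 for the QR step below.
    d  : target dimension. Returns Hi with d(d+1)/2 rows (Step 3 of Algorithm 2).
    """
    # Step 2: SVD of the centered neighborhood; tangent coordinates of the points.
    U, _, _ = np.linalg.svd(Mi, full_matrices=False)
    V = U[:, :d]                                    # (ki, d) tangent coordinates
    # Step 3: columns [1, linear terms, quadratic monomials], 1 + d + d(d+1)/2 in total.
    cols = [np.ones(len(Mi))] + [V[:, a] for a in range(d)]
    for a in range(d):
        for b in range(a, d):
            cols.append(V[:, a] * V[:, b])
    Xi = np.column_stack(cols)
    # Gram-Schmidt orthogonalization (via QR); the last d(d+1)/2 columns give Hi.
    Q, _ = np.linalg.qr(Xi)
    Hi = Q[:, -(d * (d + 1) // 2):].T
    return Hi
```

The quadratic form H in Step 4) is then accumulated from these Hi over all neighborhoods, and its null-space eigenvectors give the embedding.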
 
4 Experimental Analyses
The experiments compare the embedding performance and time complexity of VK-HLLE with those of HLLE, LLE, and ISOMAP. All four methods are implemented in Matlab: HLLE, LLE, and ISOMAP use the original authors' Matlab code with the parameters those authors provided, which should be well-chosen values, and VK-HLLE is implemented by us. LLE and HLLE use neighborhood size k = 12, and ISOMAP uses k = 7. To keep the comparison strict, the initial neighborhood parameter k of VK-HLLE is the same as that of LLE and HLLE. The experimental data are sampled from the Swiss roll surface, the standard test data adopted by HLLE, LLE, and ISOMAP. In the following experiments, the data are points sampled from the Swiss roll surface at several scales, with a small rectangle removed from the center so that the parameter space is non-convex.
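A sketch of how such a test set can be generated (our own construction, not the authors' code): sample a Swiss roll and drop the points whose intrinsic coordinates fall inside a central rectangle, which makes the parameter space non-convex.

```python
import numpy as np

def swiss_roll_with_hole(n=800, seed=0):
    """Swiss roll sample with a central rectangle removed from the parameter space."""
    rng = np.random.default_rng(seed)
    t = 1.5 * np.pi * (1 + 2 * rng.random(n))   # angular parameter
    h = 21 * rng.random(n)                      # height parameter
    # Remove a small rectangle from the middle of the (t, h) parameter space.
    keep = ~((t > 9) & (t < 12) & (h > 8) & (h < 13))
    t, h = t[keep], h[keep]
    X = np.column_stack([t * np.cos(t), h, t * np.sin(t)])  # 3-D observations
    params = np.column_stack([t, h])                        # ground-truth 2-D parameters
    return X, params
```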
 
4.1 Performance Analysis
Experiment 1 compares the performance of the four methods on dense, noise-free data with a non-convex parameter space. We sampled several data sets of 800 points each from the Swiss roll surface and ran the four methods. HLLE can, in some cases, embed the data almost perfectly into the two-dimensional space, but it is not stable enough. ISOMAP is stable, but it always strongly expands the removed region and distorts the remaining data points. LLE performs worst; its results are incorrect in the vast majority of cases. VK-HLLE is stable and in most cases embeds the data almost perfectly into the two-dimensional space, with the small rectangle removed from the center correctly reflected in the two-dimensional embedding. As Fig. 3 shows, VK-HLLE has the best performance.
 

Experiment 2 compares the performance of the four methods on sparse data. In practice it is often difficult to obtain a sufficient number of samples, and many existing algorithms cope with this poorly. We randomly sampled a sparse data set of 400 points from the Swiss roll surface. Figure 4 shows clearly that HLLE and LLE produce confused embeddings and that ISOMAP strongly expands the removed region and distorts the remaining data points, while VK-HLLE embeds the data almost perfectly into the two-dimensional space.
 
Experiment 3 compares the performance of the four methods on dense but noisy data. We randomly sampled 800 points from the Swiss roll surface and superimposed Gaussian noise with mean 0 and variance 0.4. The results in Fig. 5 show clearly that HLLE and LLE do not perform well and that ISOMAP expands the removed region and distorts the rest of the data points, while VK-HLLE embeds the data in the two-dimensional space better. After repeated experiments we also found that VK-HLLE, like the other methods, is affected by the noise and is not stable enough; in some cases it cannot embed the data correctly, because the noise biases the local geodesic distance estimation and thereby the final embedding.

                    Fig.4 Embedding results on sparse data sets

Experiment 4 analyzes the sensitivity of HLLE and VK-HLLE to the neighborhood size. We randomly sampled 800 data points from the Swiss roll surface and superimposed Gaussian noise with mean 0 and variance 0.3. Repeated experiments show that VK-HLLE is more robust to the neighborhood size and is affected by it much less than HLLE; Fig. 6 shows one example.

The four experiments consistently demonstrate the validity of VK-HLLE: whether the parameter space is non-convex or the data are noisy or sparse, VK-HLLE is consistently superior to HLLE, LLE, and ISOMAP, and it is more robust than HLLE with respect to the neighborhood size.
 
4.2   Time Analysis
We sampled five data sets of increasing size, from 500 to 2 500 points, from the Swiss roll surface, drawing each scale randomly 5 times; LLE, HLLE, ISOMAP, and VK-HLLE each report the average running time over the 5 samples at every scale. Table 1 clearly shows that the running times of VK-HLLE and HLLE are very close, and their difference tends to decline as the data size grows, the more obviously so at larger scales. The main reasons are that computing the neighborhood parameters adds very little time and that the optimized neighborhoods accelerate the subsequent embedding [17]. LLE performs relatively poorly but runs fastest, while ISOMAP is very sensitive to the data size, so its running time grows the most and it may not be suitable for large-scale data.

    Table 1 Average running time (s) of the four approaches on the five data sets of different sizes
 
5 Conclusions
This paper proposed VK-HLLE, a new locally linear embedding method that determines the neighborhood parameter dynamically. It adopts the conceptual framework of Hessian locally linear embedding, but uses the local neighborhood of each point to estimate the approximate geodesic distance between any two points within that neighborhood, determines the neighborhood size of the point dynamically from the relationship between the approximate geodesic distances and the Euclidean distances, and then constructs a new local neighborhood of that size. The algorithm has a clear geometric meaning and is more robust on the non-uniformly distributed manifolds that abound in reality, especially when the observed data are sparse or contaminated by noise. Compared with HLLE it does not increase the time complexity, and on large-scale data the extra cost even shows a decreasing trend; compared with ISOMAP its time advantage is even more obvious. VK-HLLE shares a weakness with HLLE: both are sensitive to the dimension of the observed data and are not suitable for observations with thousands of dimensions, for which extra measures must be taken. In addition, their robustness to noise remains to be improved further.
 
References:
[1] Tenenbaum JB, de Silva V, Langford JC. A global geometric framework for nonlinear dimensionality reduction. Science, 2000,290(5500):2319−2323.
[2] Roweis ST, Saul LK. Nonlinear dimensionality reduction by locally linear embedding. Science, 2000,290(5500):2323−2326.
[3] Geng X, Zhan DC, Zhou ZH. Supervised nonlinear dimensionality reduction for visualization and classification. IEEE Trans. on Systems, Man and Cybernetics, 2005,35(6):1098−1107.
[4] Law MHC, Jain AK. Incremental nonlinear dimensionality reduction by manifold learning. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2006,28(3):377−391.
[5] Donoho DL, Grimes C. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. PNAS, 2003,100(10):5591−5596.
[6] Kouropteva O, Okun O, Pietikainen M. Incremental locally linear embedding. Pattern Recognition, 2005,38(10):1764−1767.
[7] de Ridder D, Loog M, Reinders MJT. Local Fisher embedding. In: Proc. of the 17th Int'l Conf. on Pattern Recognition, Vol.2. 2004. 295−298. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1334176
[8] Xu R, Jiang F, Yao HX. Overview of manifold learning. CAAI Trans. on Intelligent Systems, 2006,1(1):44−51 (in Chinese with English abstract).
[9] Zhao LW, Luo SW, Zhao YC, Liu YH. Study on the low-dimensional embedding and the embedding dimensionality of manifold of high-dimensional data. Journal of Software, 2005,16(8):1423−1430 (in Chinese with English abstract). http://www.jos.org.cn/1000-9825/16/1423.htm
[10] He L, Zhang JP, Zhou ZH. Investigating manifold learning algorithms based on magnification factors and principal spread directions. Chinese Journal of Computers, 2005,28(12):2000−2009 (in Chinese with English abstract).
[11] Balasubramanian M, Schwartz EL. The ISOMAP algorithm and topological stability. Science, 2002,295(5552):7.
[12] Yang L. Building k edge-disjoint spanning trees of minimum total length for isometric data embedding. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2005,27(10):1680−1683.
[13] Saxena A, Gupta A, Mukerjee A. Non-linear dimensionality reduction by locally linear Isomaps. LNCS 3316, Springer-Verlag, 2004. 1038−1043. http://www.springerlink.com/content/14754g3k7gp4lt2y/
[14] Kouropteva O, Okun O, Pietikainen M. Selection of the optimal parameter value for the locally linear embedding algorithm. In: Proc. of the 1st Int'l Conf. on Fuzzy Systems and Knowledge Discovery. Singapore, 2002. 359−363. http://citeseer.ist.psu.edu/kouropteva02selection.html
[15] Samko O, Marshall AD, Rosin PL. Selection of the optimal parameter value for the ISOMAP algorithm. Pattern Recognition Letters, 2006,27(9):968−979.
[16] Hou YX, Wu JY, He PL. Locally adaptive nonlinear dimensionality reduction. Computer Applications, 2006,26(4):895−897 (in Chinese with English abstract).
[17] Wen GH, Jiang LJ, Wen J, Shadbolt NR. Clustering-based nonlinear dimensionality reduction on manifold. LNAI 4099, Springer-Verlag, 2006. 444−453. http://www.springerlink.com/content/k036115h1w031077/
[18] Wen GH, Jiang LJ, Shadbolt NR. Using graph algebra to optimize neighborhood for isometric mapping. In: Proc. of the 20th Int'l Joint Conf. on Artificial Intelligence (IJCAI 2007). 2007. 2398−2403. http://www.ijcai.org/papers07/Papers/IJCAI07-386.pdf
[19] Yang L. Distance-preserving projection of high-dimensional data for nonlinear dimensionality reduction. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2004,26(9):1243−1247.
[20] Wen GH, Jiang LJ, Wen J, Shadbolt NR. Performing locally linear embedding with adaptable neighborhood size on manifold. LNAI 4099, Springer-Verlag, 2006. 985−989. http://www.springerlink.com/content/958007m6vn1920l7/