From 43d2188068dd1d89c5e27b04faf46d41920d730d Mon Sep 17 00:00:00 2001
From: Nuno Miguel Nobre
Date: Wed, 14 Jun 2023 21:53:16 +0100
Subject: [PATCH] Fix typos in the user manual for the dD spatial searching pkg

---
 .../doc/Spatial_searching/Spatial_searching.txt | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/Spatial_searching/doc/Spatial_searching/Spatial_searching.txt b/Spatial_searching/doc/Spatial_searching/Spatial_searching.txt
index b935e352414..afa3a3eb0ca 100644
--- a/Spatial_searching/doc/Spatial_searching/Spatial_searching.txt
+++ b/Spatial_searching/doc/Spatial_searching/Spatial_searching.txt
@@ -62,7 +62,7 @@ computation has to be re-invoked for a larger number of neighbors,
 thereby performing redundant computations. Therefore, Hjaltason and
 Samet \cgalCite{hs-rsd-95} introduced incremental nearest neighbor
 searching in the sense that having obtained the `k` nearest
-neighbors, the `k + 1`st neighbor can be obtained without having
+neighbors, the `k + 1`th neighbor can be obtained without having
 to calculate the `k + 1` nearest neighbor from scratch.
 
 Spatial searching typically consists of a preprocessing phase and a
@@ -400,7 +400,7 @@ splitting rule, needed to set the maximal allowed bucket size.
 
 This example program has two 2-dimensional data sets: The first one
 containing collinear points with exponential increasing distances and the second
-one with collinear points in the firstdimension and one point with a distance
+one with collinear points in the first dimension and one point with a distance
 exceeding the spread of the other points in the second dimension.
 These are the worst cases for the midpoint/median rules and can also occur in higher dimensions.
 
@@ -426,13 +426,13 @@ how to perform parallel queries:
 
 \section Performance Performance
 
-\subsection OrthogonalPerfomance Performance of the Orthogonal Search
+\subsection OrthogonalPerformance Performance of the Orthogonal Search
 
 We took the gargoyle data set (Surface) from aim\@shape, and generated the same number of
 random points in the bbox of the gargoyle (Random). We then consider three scenarios as data/queries.
 The data set contains 800K points. For each query point we compute the K=10,20,30 closest points,
 with the default splitter and for the bucket size 10 (default) and 20.
-The results were produced with the release 5.1 of \cgal, on an Intel i7 2.3 Ghz
+The results were produced with the release 5.1 of \cgal, on an Intel i7 2.3 GHz
 laptop with 16 GB RAM, compiled with CLang++ 6 with the O3 option.
 The values are the average of ten tests each. We show timings in seconds
 for both the building of the tree and the queries.