mirror of https://github.com/CGAL/cgal
issue #7454 Consistency of BigO notations
Create `cgalBigO` macro and use it. (The macro `cgalBigOLarge` is for special situations where we need bigger round brackets.)
This commit is contained in:
parent
f7a78677fc
commit
b3af96caa1
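For context, `\cgalBigO` is a Doxygen command, so a definition along the following lines would reproduce the rendering of the replaced `\f$O(...)\f$` markup. This sketch is an assumption for illustration only; the actual alias definitions live in CGAL's Doxygen configuration and are not part of this diff, which only switches call sites to the macro.

```text
# Hypothetical Doxygen alias definitions (NOT shown in this commit);
# the real ones are in CGAL's documentation Doxyfile.
ALIASES += "cgalBigO{1}=\f$O(\1)\f$"
ALIASES += "cgalBigOLarge{1}=\f$O\left(\1\right)\f$"
```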
@@ -143,7 +143,7 @@ namespace CGAL {
/// An explicit call to `build()` must be made to ensure that the next call to
/// a query function will not trigger the construction of the data structure.
/// A call to `AABBTraits::set_shared_data(t...)` is made using the internally stored traits.
-/// This procedure has a complexity of \f$O(n log(n))\f$, where \f$n\f$ is the number of
+/// This procedure has a complexity of \cgalBigO{n log(n)}, where \f$n\f$ is the number of
/// primitives of the tree.
template<typename ... T>
void build(T&& ...);

@@ -80,7 +80,7 @@ use binary search.
`Alpha_shape_2::number_of_solid_components()` performs a graph traversal and takes time
linear in the number of faces of the underlying triangulation.
`Alpha_shape_2::find_optimal_alpha()` uses binary search and takes time
-\f$ O(n \log n)\f$, where \f$ n\f$ is the number of points.
+\cgalBigO{n \log n}, where \f$ n\f$ is the number of points.

*/
template< typename Dt, typename ExactAlphaComparisonTag >

@@ -77,7 +77,7 @@ use binary search.
`Alpha_shape_3::number_of_solid_components()` performs a graph traversal and takes time
linear in the number of cells of the underlying triangulation.
`Alpha_shape_3::find_optimal_alpha()` uses binary search and takes time
-\f$ O(n \log n)\f$, where \f$ n\f$ is the number of points.
+\cgalBigO{n \log n}, where \f$ n\f$ is the number of points.

*/
template< typename Dt, typename ExactAlphaComparisonTag >

@@ -1223,10 +1223,10 @@ halfedge \f$e_{\mathrm{pred}}\f$ directed toward \f$v\f$, such that
\f$c\f$ is located between the curves associated with
\f$e_{\mathrm{pred}}\f$ and the next halfedge in the clockwise order
in the circular list of halfedges around \f$v\f$; see
-\cgalFigureRef{aos_fig-insert}. This search may take \f$O(d)\f$ time,
+\cgalFigureRef{aos_fig-insert}. This search may take \cgalBigO{d} time,
where \f$d\f$ is the degree of the vertex \f$v\f$. \cgalFootnote{We
can store the handles to the halfedges incident to \f$v\f$ in an efficient
-search structure to obtain \f$O(\log d)\f$ access time. However, as
+search structure to obtain \cgalBigO{\log d} access time. However, as
\f$d\f$ is usually very small, this may lead to a waste of storage
space without a meaningful improvement in running time in practice.}
However, if the halfedge \f$e_{\mathrm{pred}}\f$ is known in advance,

@@ -1488,9 +1488,9 @@ keep up-to-date as this arrangement changes.
As mentioned above, the triangulation strategy is provided only for
educational purposes, and thus we do not elaborate on this strategy.
The data structure needed by the landmark and the trapezoidal map RIC
-strategies can be constructed in \f$O(N \log N)\f$ time, where \f$N\f$
+strategies can be constructed in \cgalBigO{N \log N} time, where \f$N\f$
is the overall number of edges in the arrangement, but the constant
-hidden in the \f$O()\f$ notation for the trapezoidal map RIC strategy
+hidden in the \cgalBigO{ } notation for the trapezoidal map RIC strategy
is much larger. Thus, construction needed by the landmark algorithm is
in practice significantly faster than the construction needed by the
trapezoidal map RIC strategy. In addition, although both resulting

@@ -1647,7 +1647,7 @@ Section \ref arr_ssecpl. The output pairs are sorted in increasing
\f$xy\f$-lexicographical order of the query point.

The batched point-location operation is carried out by sweeping the
-arrangement. Thus, it takes \f$O((m+N)\log{(m+N)})\f$ time, where
+arrangement. Thus, it takes \cgalBigO{(m+N)\log{(m+N)}} time, where
\f$N\f$ is the number of edges in the arrangement. Issuing separate
queries exploiting a point-location strategy with logarithmic query
time per query, such as the trapezoidal map RIC strategy (see Section

@@ -2037,11 +2037,11 @@ so it must be construct from scratch.

In the first case, we sweep over the input curves, compute their
intersection points, and construct the \dcel that represents their
-arrangement. This process is performed in \f$O\left((n + k)\log
-n\right)\f$ time, where \f$k\f$ is the total number of intersection
+arrangement. This process is performed in \cgalBigO{left((n + k)\log
+n\right} time, where \f$k\f$ is the total number of intersection
points. The running time is asymptotically better than the time needed
for incremental insertion if the arrangement is relatively sparse
-(when \f$k\f$ is \f$O(\frac{n^2}{\log n}\f$)), but it is recommended
+(when \f$k\f$ is \cgalBigO{\frac{n^2}{\log n}}), but it is recommended
that this aggregate construction process be used even for dense
arrangements, since the plane-sweep algorithm performs fewer geometric
operations compared to the incremental insertion algorithms, and hence

@@ -4346,7 +4346,7 @@ a point with respect to an \f$x\f$-monotone polyline, we use binary
search to locate the relevant segment that contains the point in its
\f$x\f$-range. Then, we compute the position of the point with respect
to this segment. Thus, operations on \f$x\f$-monotone polylines of
-size \f$m\f$ typically take \f$O(\log m)\f$ time.
+size \f$m\f$ typically take \cgalBigO{\log m} time.

You are free to choose the underlying segment traits class. Your
decision could be based, for example, on the number of expected

@@ -12,9 +12,9 @@ Seidel \cgalCite{s-sfira-91} (see also [\cgalCite{bkos-cgaa-00} Chapter 6).
It subdivides each arrangement face to pseudo-trapezoidal cells, each
of constant complexity, and constructs and maintains a linear-size search
structure on top of these cells, such that each query can be answered
-in \f$ O(\log n)\f$ time, where \f$ n\f$ is the complexity of the arrangement.
+in \cgalBigO{\log n} time, where \f$ n\f$ is the complexity of the arrangement.

-Constructing the search structures takes \f$ O(n \log n)\f$ expected time
+Constructing the search structures takes \cgalBigO{n \log n} expected time
and may require a small number of rebuilds \cgalCite{hkh-iiplgtds-12}. Therefore
attaching a trapezoidal point-location object to an existing arrangement
may incur some overhead in running times. In addition, the point-location

@@ -2419,7 +2419,7 @@ protected:
/*! Obtain the index of the subcurve in the polycurve that contains the
 * point q in its x-range. The function performs a binary search, so if the
 * point q is in the x-range of the polycurve with n subcurves, the subcurve
- * containing it can be located in O(log n) operations.
+ * containing it can be located in \cgalBigO{log n} operations.
 * \param cv The polycurve curve.
 * \param q The point.
 * \return An index i such that q is in the x-range of cv[i].

@@ -451,10 +451,10 @@ To fix the problem, we modify the weights \f$w_i\f$ as
</center>

After the above normalization, this gives us the precise algorithm to compute Wachspress coordinates
-but with \f$O(n^2)\f$ performance only. The max speed \f$O(n)\f$ algorithm uses the standard
+but with \cgalBigO{n^2} performance only. The max speed \cgalBigO{n} algorithm uses the standard
weights \f$w_i\f$. Note that mathematically this modification does not change the coordinates. One should
be cautious when using the unnormalized Wachspress weights. In that case, you must choose the
-\f$O(n)\f$ type.
+\cgalBigO{n} type.

It is known that for strictly convex polygons the denominator's zero set of the
Wachspress coordinates (\f$W^{wp} = 0~\f$) is a curve, which (in many cases) lies quite

@@ -507,10 +507,10 @@ To fix the problem, similarly to the previous subsection, we modify the weights
</center>

After the above normalization, this yields the precise algorithm to compute discrete harmonic coordinates
-but with \f$O(n^2)\f$ performance only. The max speed \f$O(n)\f$ algorithm uses the standard
+but with \cgalBigO{n^2} performance only. The max speed \cgalBigO{n} algorithm uses the standard
weights \f$w_i\f$. Again, mathematically this modification does not change the coordinates,
one should be cautious when using the unnormalized discrete harmonic weights. In that case,
-you must choose the \f$O(n)\f$ type.
+you must choose the \cgalBigO{n} type.

\b Warning: as for Wachspress coordinates, we do not recommend using discrete harmonic coordinates
for exterior points, because the curve \f$W^{dh} = 0\f$ may have several components,

@@ -563,7 +563,7 @@ After the normalization of these weights as before
\f$b_i = \frac{w_i}{W^{mv}}\qquad\f$ with \f$\qquad W^{mv} = \sum_{j=1}^n w_j\f$
</center>

-we obtain the max precision \f$O(n^2)\f$ algorithm. The max speed \f$O(n)\f$ algorithm computes the
+we obtain the max precision \cgalBigO{n^2} algorithm. The max speed \cgalBigO{n} algorithm computes the
weights \f$w_i\f$ using the pseudocode from <a href="https://www.inf.usi.ch/hormann/nsfworkshop/presentations/Hormann.pdf">here</a>.
These weights

@@ -575,7 +575,7 @@ with \f$\qquad t_i = \frac{\text{det}(d_i, d_{i+1})}{r_ir_{i+1} + d_id_{i+1}}\f$
are also normalized. Note that they are unstable if a query point is closer than \f$\approx 1.0e-10\f$
to the polygon boundary, similarly to Wachspress and discrete harmonic coordinates and
one should be cautious when using the unnormalized mean value weights. In that case, you must choose the
-\f$O(n)\f$ type.
+\cgalBigO{n} type.


\anchor compute_hm_coord

@@ -654,17 +654,17 @@ The resulting timings for all closed-form coordinates can be found in the figure

\cgalFigureBegin{analytic_timings, analytic_timings.png}
Time in seconds to compute \f$n\f$ coordinate values for a polygon with \f$n\f$ vertices
-at 1 million query points with the max speed \f$O(n)\f$ algorithms (dashed) and
+at 1 million query points with the max speed \cgalBigO{n} algorithms (dashed) and
the max precision \f$0(n^2)\f$ algorithms (solid) for Wachspress (blue), discrete
harmonic (red), and mean value (green) coordinates.
\cgalFigureEnd

-From the figure above we observe that the \f$O(n^2)\f$ algorithm is as fast
-as the \f$O(n)\f$ algorithm if we have a polygon with a small number of vertices.
+From the figure above we observe that the \cgalBigO{n^2} algorithm is as fast
+as the \cgalBigO{n} algorithm if we have a polygon with a small number of vertices.
But as the number of vertices is increased, the linear algorithm outperforms the squared one,
as expected. One of the reasons for this behavior is that for a small number of vertices
-the multiplications of \f$n-2\f$ elements inside the \f$O(n^2)\f$ algorithm take almost the
-same time as the corresponding divisions in the \f$O(n)\f$ algorithm. For a polygon with
+the multiplications of \f$n-2\f$ elements inside the \cgalBigO{n^2} algorithm take almost the
+same time as the corresponding divisions in the \cgalBigO{n} algorithm. For a polygon with
many vertices, these multiplications are substantially slower.

To benchmark harmonic coordinates, we used a MacBook Pro 2018 with 2.2 GHz Intel Core i7 processor (6 cores)

@@ -119,7 +119,7 @@ We implement Khachyian's algorithm for rounding
polytopes \cgalCite{cgal:k-rprnm-96}. Internally, we use
`double`-arithmetic and (initially a single)
Cholesky-decomposition. The algorithm's running time is
-\f$ {\cal O}(nd^2(\epsilon^{-1}+\ln d + \ln\ln(n)))\f$, where \f$ n=|P|\f$ and
+\cgalBigO{nd^2(\epsilon^{-1}+\ln d + \ln\ln(n))}, where \f$ n=|P|\f$ and
\f$ 1+\epsilon\f$ is the desired approximation ratio.

\cgalHeading{Example}

@@ -76,7 +76,7 @@ We implement two algorithms, the LP-algorithm and a
heuristic \cgalCite{msw-sblp-92}. As described in the documentation of
concept `MinSphereOfSpheresTraits`, each has its advantages and
disadvantages: Our implementation of the LP-algorithm has maximal
-expected running time \f$ O(2^d n)\f$, while the heuristic comes without
+expected running time \cgalBigO{2^d n}, while the heuristic comes without
any complexity guarantee. In particular, the LP-algorithm runs in
linear time for fixed dimension \f$ d\f$. (These running times hold for the
arithmetic model, so they count the number of operations on

@@ -245,7 +245,7 @@ must be a model for `RectangularPCenterTraits_2`.
\cgalHeading{Implementation}

The runtime is linear for \f$ p \in \{2,\,3\}\f$ and
-\f$ \mathcal{O}(n \cdot \log n)\f$ for \f$ p = 4\f$ where \f$ n\f$ is the number of
+\cgalBigO{n \cdot \log n} for \f$ p = 4\f$ where \f$ n\f$ is the number of
input points. These runtimes are worst case optimal. The \f$ 3\f$-center
algorithm uses a prune-and-search technique described in
\cgalCite{cgal:h-slacr-99}. The \f$ 4\f$-center implementation uses sorted matrix

@@ -79,7 +79,7 @@ The recommended choice is the first, which is a synonym to the one
of the other two methods which we consider "the best in practice."
In case of `CGAL::LP_algorithm`, the minsphere will be computed
using the LP-algorithm \cgalCite{msw-sblp-92}, which in our
-implementation has maximal expected running time \f$ O(2^d n)\f$ (in the
+implementation has maximal expected running time \cgalBigO{2^d n} (in the
number of operations on the number type `FT`). In case of
`CGAL::Farthest_first_heuristic`, a simple heuristic will be
used instead which seems to work fine in practice, but comes without

@@ -350,12 +350,12 @@ parameter the function switches from the streamed segment-tree
algorithm to the two-way-scan algorithm, see \cgalCite{cgal:ze-fsbi-02}
for the details.

-The streamed segment-tree algorithm needs \f$ O(n \log^d (n) + k)\f$
-worst-case running time and \f$ O(n)\f$ space, where \f$ n\f$ is the number of
+The streamed segment-tree algorithm needs \cgalBigO{n \log^d (n) + k}
+worst-case running time and \cgalBigO{n} space, where \f$ n\f$ is the number of
boxes in both input sequences, \f$ d\f$ the (constant) dimension of the
boxes, and \f$ k\f$ the output complexity, i.e., the number of pairwise
-intersections of the boxes. The two-way-scan algorithm needs \f$ O(n \log
-(n) + l)\f$ worst-case running time and \f$ O(n)\f$ space, where \f$ l\f$ is the
+intersections of the boxes. The two-way-scan algorithm needs \cgalBigO{n \log
+(n) + l} worst-case running time and \cgalBigO{n} space, where \f$ l\f$ is the
number of pairwise overlapping intervals in one dimensions (the
dimension where the algorithm is used instead of the segment tree).
Note that \f$ l\f$ is not necessarily related to \f$ k\f$ and using the

@@ -77,7 +77,7 @@ namespace CGAL {
\cgalHeading{Implementation}

The algorithm is trivially testing all pairs and runs therefore in time
-\f$ O(nm)\f$ where \f$ n\f$ is the size of the first sequence and \f$ m\f$ is the
+\cgalBigO{nm} where \f$ n\f$ is the size of the first sequence and \f$ m\f$ is the
size of the second sequence.
*/

@@ -219,12 +219,12 @@ void box_intersection_all_pairs_d(
algorithm to the two-way-scan algorithm, see \cgalCite{cgal:ze-fsbi-02}
for the details.

-The streamed segment-tree algorithm needs \f$ O(n \log^d (n) + k)\f$
-worst-case running time and \f$ O(n)\f$ space, where \f$ n\f$ is the number of
+The streamed segment-tree algorithm needs \cgalBigO{n \log^d (n) + k}
+worst-case running time and \cgalBigO{n} space, where \f$ n\f$ is the number of
boxes in both input sequences, \f$ d\f$ the (constant) dimension of the
boxes, and \f$ k\f$ the output complexity, i.e., the number of pairwise
-intersections of the boxes. The two-way-scan algorithm needs \f$ O(n \log
-(n) + l)\f$ worst-case running time and \f$ O(n)\f$ space, where \f$ l\f$ is the
+intersections of the boxes. The two-way-scan algorithm needs \cgalBigO{n \log
+(n) + l} worst-case running time and \cgalBigO{n} space, where \f$ l\f$ is the
number of pairwise overlapping intervals in one dimensions (the
dimension where the algorithm is used instead of the segment tree).
Note that \f$ l\f$ is not necessarily related to \f$ k\f$ and using the

@@ -397,7 +397,7 @@ namespace CGAL {
\cgalHeading{Implementation}

The algorithm is trivially testing all pairs and runs therefore in time
-\f$ O(n^2)\f$ where \f$ n\f$ is the size of the input sequence. This algorithm
+\cgalBigO{n^2} where \f$ n\f$ is the size of the input sequence. This algorithm
does not use the id-number of the boxes.

*/

@@ -297,7 +297,7 @@ Several functions allow to create specific configurations of darts into a combin

\subsection ssecadvmarks Boolean Marks

-It is often necessary to mark darts, for example to retrieve in <I>O(1)</I> if a given dart was already processed during a specific algorithm, for example, iteration over a given range. Users can also mark specific parts of a combinatorial map (for example mark all the darts belonging to objects having specific semantics). To answer these needs, a `GenericMap` has a certain number of Boolean marks (fixed by the constant \link GenericMap::NB_MARKS `NB_MARKS`\endlink). When one wants to use a Boolean mark, the following methods are available (with `cm` an instance of a combinatorial map):
+It is often necessary to mark darts, for example to retrieve in \cgalBigO{1} if a given dart was already processed during a specific algorithm, for example, iteration over a given range. Users can also mark specific parts of a combinatorial map (for example mark all the darts belonging to objects having specific semantics). To answer these needs, a `GenericMap` has a certain number of Boolean marks (fixed by the constant \link GenericMap::NB_MARKS `NB_MARKS`\endlink). When one wants to use a Boolean mark, the following methods are available (with `cm` an instance of a combinatorial map):
<ul>
<li> get a new free mark: `size_type m = cm.`\link GenericMap::get_new_mark `get_new_mark()`\endlink (throws the exception Exception_no_more_available_mark if no mark is available);
<li> set mark `m` for a given dart `d0`: `cm.`\link GenericMap::mark `mark(d0,m)`\endlink;

@@ -1154,7 +1154,7 @@ namespace CGAL {
}

/** Unmark all the darts of the map for a given mark.
- * If all the darts are marked or unmarked, this operation takes O(1)
+ * If all the darts are marked or unmarked, this operation takes \cgalBigO{1}
 * operations, otherwise it traverses all the darts of the map.
 * @param amark the given mark.
 */

@@ -160,7 +160,7 @@ In constructing Theta graphs, this functor uses the algorithm from
Chapter 4 of the book by Narasimhan and Smid \cgalCite{cgal:ns-gsn-07}.
Basically, it is a sweep line algorithm and uses a
balanced search tree to store the vertices that have already been scanned.
-It has the complexity of \f$O(n \log n)\f$, where \f$n\f$ is the number of vertices in the plane.
+It has the complexity of \cgalBigO{n \log n}, where \f$n\f$ is the number of vertices in the plane.
This complexity has been proved to be optimal.

For more details on how to use this `Construct_theta_graph_2` functor to write an application to build Theta graphs,

@@ -178,13 +178,13 @@ The functor `Construct_yao_graph_2` has a similar definition as `Construct_theta

The way of using these two template parameters is the same as that of `Construct_theta_graph_2`,
so please refer to the previous subsection for the details. We note here that construction algorithm for Yao graph
-is a slight adaptation of the algorithm for constructing Theta graph, having a complexity of \f$O(n^2)\f$.
+is a slight adaptation of the algorithm for constructing Theta graph, having a complexity of \cgalBigO{n^2}.
The increase of complexity in this adaptation is because in constructing Theta graph,
the searching of the 'closest' node by projection distance can be done by a balanced search tree,
but in constructing Yao graph, the searching of the 'closest' node by Euclidean distance cannot be
done by a balanced search tree.

-Note that an optimal algorithm for constructing Yao graph with a complexity of \f$O(n \log n)\f$ is
+Note that an optimal algorithm for constructing Yao graph with a complexity of \cgalBigO{n \log n} is
described in \cgalCite{cgal:cht-oacov-90}. However, this algorithm is much more complex to implement than
the current algorithm implemented, and it can hardly reuse the codes for constructing Theta graphs,
so it is not implemented in this package right now.

@@ -18,11 +18,11 @@ Minkowski sums of the convex pieces, and unite the pair-wise sums.

While it is desirable to have a decomposition into a minimum number of
pieces, this problem is known to be NP-hard \cgalCite{c-cpplb-84}. Our
-implementation decomposes a Nef polyhedron \f$ N\f$ into \f$ O(r^2)\f$ convex
+implementation decomposes a Nef polyhedron \f$ N\f$ into \cgalBigO{r^2} convex
pieces, where \f$ r\f$ is the number of edges that have two adjacent
facets that span an angle of more than 180 degrees with respect to the
interior of the polyhedron. Those edges are also called reflex edges.
-The bound of \f$ O(r^2)\f$ convex pieces is worst-case
+The bound of \cgalBigO{r^2} convex pieces is worst-case
optimal \cgalCite{c-cpplb-84}.

\cgalFigureBegin{figverticalDecomposition,two_cubes_all_in_one.png}

@@ -6,7 +6,7 @@
\cgalPkgPicture{Convex_decomposition_3/fig/Convex_decomposition_3-teaser.png}
\cgalPkgSummaryBegin
\cgalPkgAuthor{Peter Hachenberger}
-\cgalPkgDesc{This packages provides a function for decomposing a bounded polyhedron into convex sub-polyhedra. The decomposition yields \f$ O(r^2)\f$ convex pieces, where \f$ r\f$ is the number of edges, whose adjacent facets form an angle of more than 180 degrees with respect to the polyhedron's interior. This bound is worst-case optimal. }
+\cgalPkgDesc{This packages provides a function for decomposing a bounded polyhedron into convex sub-polyhedra. The decomposition yields \cgalBigO{r^2} convex pieces, where \f$ r\f$ is the number of edges, whose adjacent facets form an angle of more than 180 degrees with respect to the polyhedron's interior. This bound is worst-case optimal. }
\cgalPkgManuals{Chapter_Convex_Decomposition_of_Polyhedra,PkgConvexDecomposition3Ref}
\cgalPkgSummaryEnd
\cgalPkgShortInfoBegin

@@ -41,12 +41,12 @@ The function `convex_decomposition_3()` inserts additional facets
into the given `Nef_polyhedron_3` `N`, such that each bounded
marked volume (the outer volume is unbounded) is subdivided into convex
pieces. The modified polyhedron represents a decomposition into
-\f$ O(r^2)\f$ convex pieces, where \f$ r\f$ is the number of edges that have two
+\cgalBigO{r^2} convex pieces, where \f$ r\f$ is the number of edges that have two
adjacent facets that span an angle of more than 180 degrees with
respect to the interior of the polyhedron.

The worst-case running time of our implementation is
-\f$ O(n^2r^4\sqrt[3]{nr^2}\log{(nr)})\f$, where \f$ n\f$ is the complexity of
+\cgalBigO{n^2r^4\sqrt[3]{nr^2}\log{(nr)}}, where \f$ n\f$ is the complexity of
the polyhedron (the complexity of a `Nef_polyhedron_3` is the sum
of its `Vertices`, `Halfedges` and `SHalfedges`) and \f$ r\f$
is the number of reflex edges.

@@ -44,7 +44,7 @@ functions that return instances of these types:
\cgalHeading{Implementation}

This function uses the algorithm of Akl and
-Toussaint \cgalCite{at-fcha-78} that requires \f$ O(n \log n)\f$ time for \f$ n\f$ input
+Toussaint \cgalCite{at-fcha-78} that requires \cgalBigO{n \log n} time for \f$ n\f$ input
points.


@@ -45,7 +45,7 @@ functions that return instances of these types:

This function implements the non-recursive variation of
Eddy's algorithm \cgalCite{e-nchap-77} described in \cgalCite{b-chfsp-78}.
-This algorithm requires \f$ O(n h)\f$ time
+This algorithm requires \cgalBigO{n h} time
in the worst case for \f$ n\f$ input points with \f$ h\f$ extreme points.
*/
template <class InputIterator, class OutputIterator, class Traits>

@@ -47,7 +47,7 @@ This function implements Eddy's algorithm
\cgalCite{e-nchap-77}, which is the two-dimensional version of the quickhull
algorithm \cgalCite{bdh-qach-96}.

-This algorithm requires \f$ O(n h)\f$ time
+This algorithm requires \cgalBigO{n h} time
in the worst case for \f$ n\f$ input points with \f$ h\f$ extreme points.

*/

@@ -44,7 +44,7 @@ functions that return instances of these types:

This function implements Andrew's variant of the Graham
scan algorithm \cgalCite{a-aeach-79} and follows the presentation of Mehlhorn
-\cgalCite{m-mdscg-84}. This algorithm requires \f$ O(n \log n)\f$ time
+\cgalCite{m-mdscg-84}. This algorithm requires \cgalBigO{n \log n} time
in the worst case for \f$ n\f$ input points.


@@ -101,7 +101,7 @@ functions that return instances of these types:

\cgalHeading{Implementation}

-This algorithm requires \f$ O(n)\f$ time in the worst case for
+This algorithm requires \cgalBigO{n} time in the worst case for
\f$ n\f$ input points.

\cgalHeading{Example}

@@ -44,7 +44,7 @@ functions that return instances of these types:
\cgalHeading{Implementation}

This function uses the Jarvis march (gift-wrapping)
-algorithm \cgalCite{j-ichfs-73}. This algorithm requires \f$ O(n h)\f$ time
+algorithm \cgalCite{j-ichfs-73}. This algorithm requires \cgalBigO{n h} time
in the worst case for \f$ n\f$ input points with \f$ h\f$ extreme points.

*/

@@ -97,7 +97,7 @@ functions that return instances of these types:
\cgalHeading{Implementation}

The function uses the Jarvis march (gift-wrapping) algorithm \cgalCite{j-ichfs-73}.
-This algorithm requires \f$ O(n h)\f$ time in the worst
+This algorithm requires \cgalBigO{n h} time in the worst
case for \f$ n\f$ input points with \f$ h\f$ extreme points

\pre `start_p` and `stop_p` are extreme points with respect to

@@ -47,9 +47,9 @@ functions that return instances of these types:
One of two algorithms is used,
depending on the type of iterator used to specify the input points. For
input iterators, the algorithm used is that of Bykat \cgalCite{b-chfsp-78}, which
-has a worst-case running time of \f$ O(n h)\f$, where \f$ n\f$ is the number of input
+has a worst-case running time of \cgalBigO{n h}, where \f$ n\f$ is the number of input
points and \f$ h\f$ is the number of extreme points. For all other types of
-iterators, the \f$ O(n \log n)\f$ algorithm of of Akl and Toussaint
+iterators, the \cgalBigO{n \log n} algorithm of of Akl and Toussaint
\cgalCite{at-fcha-78} is used.


@@ -128,7 +128,7 @@ functions that return instances of these types:

This function uses Andrew's variant of Graham's scan algorithm
\cgalCite{a-aeach-79}, \cgalCite{m-mdscg-84}. The algorithm has worst-case running time
-of \f$ O(n \log n)\f$ for \f$ n\f$ input points.
+of \cgalBigO{n \log n} for \f$ n\f$ input points.


*/

@@ -192,7 +192,7 @@ functions that return instances of these types:

This function uses Andrew's
variant of Graham's scan algorithm \cgalCite{a-aeach-79}, \cgalCite{m-mdscg-84}. The algorithm
-has worst-case running time of \f$ O(n \log n)\f$ for \f$ n\f$ input points.
+has worst-case running time of \cgalBigO{n \log n} for \f$ n\f$ input points.

*/
template <class InputIterator, class OutputIterator>

@@ -32,7 +32,7 @@ functions that return instances of these types:

\cgalHeading{Implementation}

-The algorithm requires \f$ O(n)\f$ time for a set of \f$ n\f$ input points.
+The algorithm requires \cgalBigO{n} time for a set of \f$ n\f$ input points.


@@ -80,7 +80,7 @@ functions that return instances of these types:

\cgalHeading{Implementation}

-The algorithm requires \f$ O(n)\f$ time for a set of \f$ n\f$ input points.
+The algorithm requires \cgalBigO{n} time for a set of \f$ n\f$ input points.


@@ -52,17 +52,17 @@ class need not be specified and defaults to types and operations defined
 in the kernel in which the input point type is defined.

 Given a sequence of \f$ n\f$ input points with \f$ h\f$ extreme points,
-the function `convex_hull_2()` uses either the output-sensitive \f$ O(n h)\f$ algorithm of Bykat \cgalCite{b-chfsp-78}
-(a non-recursive version of the quickhull \cgalCite{bdh-qach-96} algorithm) or the algorithm of Akl and Toussaint, which requires \f$ O(n \log n)\f$ time
+the function `convex_hull_2()` uses either the output-sensitive \cgalBigO{n h} algorithm of Bykat \cgalCite{b-chfsp-78}
+(a non-recursive version of the quickhull \cgalCite{bdh-qach-96} algorithm) or the algorithm of Akl and Toussaint, which requires \cgalBigO{n \log n} time
 in the worst case. The algorithm chosen depends on the kind of
 iterator used to specify the input points. These two algorithms are
 also available via the functions `ch_bykat()` and `ch_akl_toussaint()`,
 respectively. Also available are
-the \f$ O(n \log n)\f$ Graham-Andrew scan algorithm \cgalCite{a-aeach-79}, \cgalCite{m-mdscg-84}
+the \cgalBigO{n \log n} Graham-Andrew scan algorithm \cgalCite{a-aeach-79}, \cgalCite{m-mdscg-84}
 (`ch_graham_andrew()`),
-the \f$ O(n h)\f$ Jarvis march algorithm \cgalCite{j-ichfs-73}
+the \cgalBigO{n h} Jarvis march algorithm \cgalCite{j-ichfs-73}
 (`ch_jarvis()`),
-and Eddy's \f$ O(n h)\f$ algorithm \cgalCite{e-nchap-77}
+and Eddy's \cgalBigO{n h} algorithm \cgalCite{e-nchap-77}
 (`ch_eddy()`), which corresponds to the
 two-dimensional version of the quickhull algorithm.
 The linear-time algorithm of Melkman for producing the convex hull of

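The Graham-Andrew scan named in this hunk is the monotone-chain variant of Graham's scan. The following is a plain-Python sketch of that \cgalBigO{n \log n} algorithm for illustration only; the function name is made up here and this is not CGAL's `ch_graham_andrew()` implementation.

```python
def ch_graham_andrew_sketch(points):
    """Andrew's monotone-chain variant of Graham's scan (sketch).

    Sorts the points, then builds the lower and upper hull chains with
    a stack, popping non-left turns. Returns hull vertices counterclockwise.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain

    lower = half(pts)
    upper = half(reversed(pts))
    # Drop the last point of each chain: it repeats the other chain's start.
    return lower[:-1] + upper[:-1]

print(ch_graham_andrew_sketch([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```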
@@ -105,7 +105,7 @@ provide the computation of the counterclockwise
 sequence of extreme points on the lower hull and upper hull,
 respectively. The algorithm used in these functions is
 Andrew's variant of Graham's scan algorithm \cgalCite{a-aeach-79}, \cgalCite{m-mdscg-84},
-which has worst-case running time of \f$ O(n \log n)\f$.
+which has worst-case running time of \cgalBigO{n \log n}.

 There are also functions available for computing certain subsequences
 of the sequence of extreme points on the convex hull. The function

@@ -16,7 +16,7 @@ vertices of the convex hull).
 \cgalHeading{Implementation}

 This function implements the tests described in \cgalCite{mnssssu-cgpvg-96} to
-determine convexity and requires \f$ O(e + f)\f$ time for a polyhedron with
+determine convexity and requires \cgalBigO{e + f} time for a polyhedron with
 \f$ e\f$ edges and \f$ f\f$ faces.

@@ -69,7 +69,7 @@ The time and space requirements are input dependent. Let \f$C_1\f$, \f$C_2\f$, \
 let \f$ k_i\f$ be the number of facets of \f$ C_i\f$ that are visible from \f$ x\f$
 and that are not already facets of \f$ C_{i-1}\f$.

-Then the time for inserting \f$ x\f$ is \f$ O(dim \sum_i k_i)\f$ and
+Then the time for inserting \f$ x\f$ is \cgalBigO{dim \sum_i k_i} and
 the number of new simplices constructed during the insertion of \f$x\f$
 is the number of facets of the hull which were not already facets
 of the hull before the insertion.

@@ -169,7 +169,7 @@ complexity are known. Also, the theoretic interest in efficiency for
 realistic inputs, as opposed to worst-case situations, is
 growing \cgalCite{v-ffrim-97}.
 For practical purposes, insight into the constant factors hidden in the
-\f$ O\f$-notation is necessary, especially if there are several competing
+\cgalBigO{ }-notation is necessary, especially if there are several competing
 algorithms.

 Therefore, different implementations should be supplied if there is

@@ -7,7 +7,7 @@
 \authors Stefan Schirra

 The layer of geometry kernels provides
-basic geometric entities of constant size\cgalFootnote{In dimension \f$ d\f$, an entity of size \f$ O(d)\f$ is considered to be of constant size.} and
+basic geometric entities of constant size\cgalFootnote{In dimension \f$ d\f$, an entity of size \cgalBigO{d} is considered to be of constant size.} and
 primitive operations on them. Each entity is provided as both a
 stand-alone class, which is parameterized by a kernel class, and as a
 type in the kernel class. Each operation in the kernel is provided via

@@ -188,7 +188,9 @@ ALIASES = "cgal=%CGAL" \
 "cgalParamNEnd=</ul> \htmlonly[block] </div> \endhtmlonly </td><td></td></tr>" \
 "cgalParamSectionBegin{1}=\cgalParamNBegin{\1}" \
 "cgalParamSectionEnd=\cgalParamNEnd" \
-"cgalParamPrecondition{1}=<li><b>Precondition: </b>\1</li>"
+"cgalParamPrecondition{1}=<li><b>Precondition: </b>\1</li>" \
+"cgalBigO{1}=\f$O(\1)\f$" \
+"cgalBigOLarge{1}=\f$O\left(\1\right)\f$"

 # Doxygen selects the parser to use depending on the extension of the files it
 # parses. With this tag you can assign which parser to use for a given

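The two new aliases are plain one-argument Doxygen text substitutions: `\cgalBigO{x}` renders as `\f$O(x)\f$` and `\cgalBigOLarge{x}` as `\f$O\left(x\right)\f$`. As a sanity check of what they expand to, here is a small Python sketch of that substitution; it is an approximation (the regex only handles brace-free arguments), not Doxygen's real alias parser.

```python
import re

# The substitutions defined by the new ALIASES entries above.
ALIASES = {
    "cgalBigO": r"\f$O(%s)\f$",
    "cgalBigOLarge": r"\f$O\left(%s\right)\f$",
}

def expand(text):
    """Expand \\cgalBigO{...} / \\cgalBigOLarge{...} occurrences.

    Limitation of this sketch: the argument must not itself contain braces.
    """
    def sub(m):
        return ALIASES[m.group(1)] % m.group(2)
    return re.sub(r"\\(cgalBigO|cgalBigOLarge)\{([^{}]*)\}", sub, text)

print(expand(r"takes \cgalBigO{n \log n} time"))
# takes \f$O(n \log n)\f$ time
```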
@@ -197,7 +197,9 @@ ALIASES = "cgal=%CGAL" \
 "cgalParamNEnd=</ul> \htmlonly[block] </div> \endhtmlonly </td><td></td></tr>" \
 "cgalParamSectionBegin{1}=\cgalParamNBegin{\1}" \
 "cgalParamSectionEnd=\cgalParamNEnd" \
-"cgalParamPrecondition{1}=<li><b>Precondition: </b>\1</li>"
+"cgalParamPrecondition{1}=<li><b>Precondition: </b>\1</li>" \
+"cgalBigO{1}=\f$O(\1)\f$" \
+"cgalBigOLarge{1}=\f$O\left(\1\right)\f$"

 # Doxygen selects the parser to use depending on the extension of the files it
 # parses. With this tag you can assign which parser to use for a given

@@ -296,7 +296,7 @@ Several functions allow to create specific configurations of darts into a genera

 \subsection ssecadvmarksgmap Boolean Marks

-It is often necessary to mark darts, for example to retrieve in <I>O(1)</I> if a given dart was already processed during a specific algorithm, for example, iteration over a given range. Users can also mark specific parts of a generalized map (for example mark all the darts belonging to objects having specific semantics). To answer these needs, a `GeneralizedMap` has a certain number of Boolean marks (fixed by the constant \link GenericMap::NB_MARKS `NB_MARKS`\endlink). When one wants to use a Boolean mark, the following methods are available (with `gm` an instance of a generalized map):
+It is often necessary to mark darts, for example to retrieve in \cgalBigO{1} if a given dart was already processed during a specific algorithm, for example, iteration over a given range. Users can also mark specific parts of a generalized map (for example mark all the darts belonging to objects having specific semantics). To answer these needs, a `GeneralizedMap` has a certain number of Boolean marks (fixed by the constant \link GenericMap::NB_MARKS `NB_MARKS`\endlink). When one wants to use a Boolean mark, the following methods are available (with `gm` an instance of a generalized map):
 <ul>
 <li> get a new free mark: `size_type m = gm.`\link GenericMap::get_new_mark `get_new_mark()`\endlink (throws the exception Exception_no_more_available_mark if no mark is available);
 <li> set mark `m` for a given dart `d0`: `gm.`\link GenericMap::mark `mark(d0,m)`\endlink;

@@ -1038,7 +1038,7 @@ namespace CGAL {
 }

 /** Unmark all the darts of the map for a given mark.
- * If all the darts are marked or unmarked, this operation takes O(1)
+ * If all the darts are marked or unmarked, this operation takes \cgalBigO{1}
  * operations, otherwise it traverses all the darts of the map.
  * @param amark the given mark.
  */

@@ -28,8 +28,8 @@ The generated polygon will have an average number of vertices \f$ n^\frac{1}{3}(

 The implementation is based on an incremental construction of a convex hull. At each step, we choose a number of points to pick uniformly at random in the disc. Then, a subset of these points, that won't change the convex hull, is evaluated using a Binomial law.
 As these points won't be generated, the time and size complexities are reduced \cgalCite{Devillers2014Generator}.
-A tradeoff between time and memory is provided with the option `fast`, true by default. Using the `fast` option, both time and size expected complexities are \f$O\left(n^\frac{1}{3}\log^\frac{2}{3}n \right)\f$.
-If this option is disabled, the expected size complexity becomes \f$O\left(n^\frac{1}{3}\right)\f$ but the expected time complexity becomes \f$O\left(n^\frac{1}{3}\log^2 n \right)\f$.
+A tradeoff between time and memory is provided with the option `fast`, true by default. Using the `fast` option, both time and size expected complexities are \cgalBigOLarge{n^\frac{1}{3}\log^\frac{2}{3}n}.
+If this option is disabled, the expected size complexity becomes \cgalBigOLarge{n^\frac{1}{3}} but the expected time complexity becomes \cgalBigOLarge{n^\frac{1}{3}\log^2 n}.

 \cgalHeading{Example}

@@ -31,8 +31,8 @@ R >` for some representation class `R`,
 \cgalHeading{Implementation}

 The implementation uses the centroid method
-described in \cgalCite{cgal:s-zkm-96} and has a worst case running time of \f$ O(r
-\cdot n + n \cdot \log n)\f$, where \f$ r\f$ is the time needed by `pg`
+described in \cgalCite{cgal:s-zkm-96} and has a worst case running time of \cgalBigO{r
+\cdot n + n \cdot \log n}, where \f$ r\f$ is the time needed by `pg`
 to generate a random point.

 \cgalHeading{Example}

@@ -37,11 +37,11 @@ The default traits class `Default_traits` is the kernel in which
 The implementation is based on the method of eliminating self-intersections in
 a polygon by using so-called "2-opt" moves. Such a move eliminates an
 intersection between two edges by reversing the order of the vertices between
-the edges. No more than \f$ O(n^3)\f$ such moves are required to simplify a polygon
+the edges. No more than \cgalBigO{n^3} such moves are required to simplify a polygon
 defined on \f$ n\f$ points \cgalCite{ls-utstp-82}.
 Intersecting edges are detected using a simple sweep through the vertices
 and then one intersection is chosen at random to eliminate after each sweep.
-The worse-case running time is therefore \f$ O(n^4 \log n)\f$.
+The worse-case running time is therefore \cgalBigO{n^4 \log n}.

 \cgalHeading{Example}

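A single "2-opt" move as described in this hunk is simple to state in code: the crossing between edges `(i, i+1)` and `(j, j+1)` disappears when the vertex order between them is reversed. The helper below is a hypothetical illustration (indices `i < j` assumed), not the CGAL implementation.

```python
def two_opt_move(poly, i, j):
    """Apply one 2-opt move: reverse the vertices strictly between
    edge (i, i+1) and edge (j, j+1). Assumes 0 <= i < j < len(poly)."""
    return poly[:i + 1] + poly[i + 1:j + 1][::-1] + poly[j + 1:]

# Self-intersecting quadrilateral: edge (0,0)-(1,1) crosses edge (1,0)-(0,1).
poly = [(0, 0), (1, 1), (1, 0), (0, 1)]
print(two_opt_move(poly, 0, 2))  # [(0, 0), (1, 0), (1, 1), (0, 1)]
```

After the move the polygon is a simple square, which is exactly the elimination step the sweep-and-repair loop in the text performs repeatedly.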
@@ -24,7 +24,7 @@ removal.
 \cgalHeading{Implementation}

 Currently, `HalfedgeDS_default` is derived from `CGAL::HalfedgeDS_list<Traits>`.
-The copy constructor and the assignment operator need \f$ O(n)\f$ time with
+The copy constructor and the assignment operator need \cgalBigO{n} time with
 \f$ n\f$ the total number of vertices, halfedges, and faces.

 */

@@ -20,7 +20,7 @@ iterators that supports removal.
 \cgalHeading{Implementation}

 `HalfedgeDS_list` uses internally the `CGAL::In_place_list` container class.
-The copy constructor and the assignment operator need \f$ O(n)\f$ time with
+The copy constructor and the assignment operator need \cgalBigO{n} time with
 \f$ n\f$ the total number of vertices, halfedges, and faces.

 `CGAL_ALLOCATOR(int)` is used as default argument for the

@@ -223,8 +223,8 @@ The algorithm is as follows:

 The time complexity of the algorithm is determined primarily by the
 choice of linear solver. In the current implementation, Cholesky
-prefactorization is roughly \f$ O(N^{1.5})\f$ and computation of distances is
-roughly \f$ O(N)\f$, where \f$ N\f$ is the number of vertices in the triangulation.
+prefactorization is roughly \cgalBigO{N^{1.5}} and computation of distances is
+roughly \cgalBigO{N}, where \f$ N\f$ is the number of vertices in the triangulation.
 The algorithm uses two \f$ N \times N\f$ matrices, both with the same pattern of
 non-zeros (in average 7 non-zeros
 per row/column). The cost of computation is independent of the size

@@ -14,8 +14,8 @@ that do not contain any point of the point set.
 \cgalHeading{Implementation}

 The algorithm is an implementation of \cgalCite{o-naler-90}. The runtime of an
-insertion or a removal is \f$ O(\log n)\f$. A query takes \f$ O(n^2)\f$ worst
-case time and \f$ O(n \log n)\f$ expected time. The working storage is \f$
+insertion or a removal is \cgalBigO{\log n}. A query takes \cgalBigO{n^2} worst
+case time and \cgalBigO{n \log n} expected time. The working storage is \f$
 O(n)\f$.

 */

@@ -32,8 +32,8 @@ convex polygon (oriented clock- or counterclockwise).
 \cgalHeading{Implementation}

 The implementation uses monotone matrix search
-\cgalCite{akmsw-gamsa-87} and has a worst case running time of \f$ O(k
-\cdot n + n \cdot \log n)\f$, where \f$ n\f$ is the number of vertices in
+\cgalCite{akmsw-gamsa-87} and has a worst case running time of \cgalBigO{k
+\cdot n + n \cdot \log n}, where \f$ n\f$ is the number of vertices in
 \f$ P\f$.

 */

@@ -89,8 +89,8 @@ where `K` is a model of `Kernel`.
 \cgalHeading{Implementation}

 The implementation uses monotone matrix search
-\cgalCite{akmsw-gamsa-87} and has a worst case running time of \f$ O(k
-\cdot n + n \cdot \log n)\f$, where \f$ n\f$ is the number of vertices in
+\cgalCite{akmsw-gamsa-87} and has a worst case running time of \cgalBigO{k
+\cdot n + n \cdot \log n}, where \f$ n\f$ is the number of vertices in
 \f$ P\f$.

 \cgalHeading{Example}

@@ -158,8 +158,8 @@ defined that computes the squareroot of a number.
 \cgalHeading{Implementation}

 The implementation uses monotone matrix search
-\cgalCite{akmsw-gamsa-87} and has a worst case running time of \f$ O(k
-\cdot n + n \cdot \log n)\f$, where \f$ n\f$ is the number of vertices in
+\cgalCite{akmsw-gamsa-87} and has a worst case running time of \cgalBigO{k
+\cdot n + n \cdot \log n}, where \f$ n\f$ is the number of vertices in
 \f$ P\f$.

 \cgalHeading{Example}

@@ -46,7 +46,7 @@ namespace CGAL {

 The algorithm checks all the empty rectangles that are bounded by either
 points or edges of the bounding box (other empty rectangles can be enlarged
-and remain empty). There are O(n^2) such rectangles. It is done in three
+and remain empty). There are \cgalBigO{n^2} such rectangles. It is done in three
 phases. In the first one empty rectangles that are bounded by two opposite
 edges of the bounding box are checked. In the second one, other empty
 rectangles that are bounded by one or two edges of the bounding box are

@@ -9,10 +9,10 @@ allows to find all members of a set of intervals that overlap a point.
 \cgalHeading{Implementation}

 The insertion and deletion of a segment in the interval skip list
-takes expected time \f$ O(\log^2 n)\f$, if the segment endpoints are
+takes expected time \cgalBigO{\log^2 n}, if the segment endpoints are
 chosen from a continuous distribution. A stabbing query takes expected
-time \f$ O(\log n)\f$, and finding all intervals that contain a point
-takes expected time \f$ O(\log n + k)\f$, where \f$ k\f$ is the number of
+time \cgalBigO{\log n}, and finding all intervals that contain a point
+takes expected time \cgalBigO{\log n + k}, where \f$ k\f$ is the number of
 intervals.

 The implementation is based on the code developed by Eric N. Hansen.

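For reference, the semantics of the stabbing query discussed in this hunk can be pinned down with a naive linear scan; the interval skip list returns the same answer in expected O(log n + k) instead of O(n). This helper is illustrative only, not the CGAL interface.

```python
def stabbing_query(intervals, x):
    """Return all closed intervals [a, b] that contain the point x.

    Naive O(n) reference version; it only fixes what the query means,
    not how the skip-list structure computes it efficiently.
    """
    return [(a, b) for (a, b) in intervals if a <= x <= b]

print(stabbing_query([(0, 5), (2, 3), (6, 9)], 2.5))  # [(0, 5), (2, 3)]
```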
@@ -17,10 +17,10 @@ by the constructors below.
 Affine Transformations are implemented by matrices of number type
 `RT` as a handle type. All operations like creation,
 initialization, input and output on a transformation \f$ t\f$ take time
-\f$ O(t.dimension()^2)\f$. `dimension()` takes constant time.
+\cgalBigO{t.dimension()^2}. `dimension()` takes constant time.
 The operations for inversion and composition have the cubic costs of
 the used matrix operations. The space requirement is
-\f$ O(t.dimension()^2)\f$.
+\cgalBigO{t.dimension()^2}.

 */
 template< typename Kernel >

@@ -22,9 +22,9 @@ We provide the operations of the lower dimensional interface `dx()`,

 Directions are implemented by arrays of integers as an item type. All
 operations like creation, initialization, tests, inversion, input and
-output on a direction \f$ d\f$ take time \f$ O(d.\mathit{dimension}())\f$.
+output on a direction \f$ d\f$ take time \cgalBigO{d.\mathit{dimension}()}.
 `dimension()`, coordinate access and conversion take constant
-time. The space requirement is \f$ O(d.\mathit{dimension}())\f$.
+time. The space requirement is \cgalBigO{d.\mathit{dimension}()}.

 */
 template< typename Kernel >

@@ -27,8 +27,8 @@ other.
 Hyperplanes are implemented by arrays of integers as an item type.
 All operations like creation, initialization, tests, vector
 arithmetic, input and output on a hyperplane \f$ h\f$ take time
-\f$ O(h.dimension())\f$. coordinate access and `dimension()` take
-constant time. The space requirement is \f$ O(h.dimension())\f$.
+\cgalBigO{h.dimension()}. coordinate access and `dimension()` take
+constant time. The space requirement is \cgalBigO{h.dimension()}.

 */
 template< typename Kernel >

@@ -11,10 +11,10 @@ An instance of data type `Line_d` is an oriented line in
 Lines are implemented by a pair of points as an item type. All
 operations like creation, initialization, tests, direction
 calculation, input and output on a line \f$ l\f$ take time
-\f$ O(l.dimension())\f$. `dimension()`, coordinate and point
+\cgalBigO{l.dimension()}. `dimension()`, coordinate and point
 access, and identity test take constant time. The operations for
-intersection calculation also take time \f$ O(l.dimension())\f$. The
-space requirement is \f$ O(l.dimension())\f$.
+intersection calculation also take time \cgalBigO{l.dimension()}. The
+space requirement is \cgalBigO{l.dimension()}.

 */
 template< typename Kernel >

@@ -24,9 +24,9 @@ dimensional interface `x()`, `y()`, `z()`, `hx()`,

 Points are implemented by arrays of `RT` items. All operations
 like creation, initialization, tests, point - vector arithmetic, input
-and output on a point \f$ p\f$ take time \f$ O(p.dimension())\f$.
+and output on a point \f$ p\f$ take time \cgalBigO{p.dimension()}.
 `dimension()`, coordinate access and conversions take constant
-time. The space requirement for points is \f$ O(p.dimension())\f$.
+time. The space requirement for points is \cgalBigO{p.dimension()}.

 */
 template< typename Kernel >

@@ -12,9 +12,9 @@ it goes to infinity.
 Rays are implemented by a pair of points as an item type. All
 operations like creation, initialization, tests, direction
 calculation, input and output on a ray \f$ r\f$ take time
-\f$ O(r.dimension())\f$. `dimension()`, coordinate and point
+\cgalBigO{r.dimension()}. `dimension()`, coordinate and point
 access, and identity test take constant time. The space requirement is
-\f$ O(r.dimension())\f$.
+\cgalBigO{r.dimension()}.

 */
 template< typename Kernel >

@@ -14,11 +14,11 @@ called the target point of \f$ s\f$, both points are called endpoints of
 Segments are implemented by a pair of points as an item type. All
 operations like creation, initialization, tests, the calculation of
 the direction and source - target vector, input and output on a
-segment \f$ s\f$ take time \f$ O(s.dimension())\f$. `dimension()`,
+segment \f$ s\f$ take time \cgalBigO{s.dimension()}. `dimension()`,
 coordinate and end point access, and identity test take constant time.
 The operations for intersection calculation also take time
-\f$ O(s.dimension())\f$. The space requirement is
-\f$ O(s.dimension())\f$.
+\cgalBigO{s.dimension()}. The space requirement is
+\cgalBigO{s.dimension()}.

 */
 template< typename Kernel >

@@ -17,11 +17,11 @@ orientation of the defining points, i.e., `orientation(A)`.

 Spheres are implemented by a vector of points as a handle type. All
 operations like creation, initialization, tests, input and output of a
-sphere \f$ s\f$ take time \f$ O(s.dimension()) \f$. `dimension()`,
+sphere \f$ s\f$ take time \cgalBigO{s.dimension()}. `dimension()`,
 point access take constant time. The `center()`-operation takes
-time \f$ O(d^3)\f$ on its first call and constant time thereafter. The
-sidedness and orientation tests take time \f$ O(d^3)\f$. The space
-requirement for spheres is \f$ O(s.dimension())\f$ neglecting the
+time \cgalBigO{d^3} on its first call and constant time thereafter. The
+sidedness and orientation tests take time \cgalBigO{d^3}. The space
+requirement for spheres is \cgalBigO{s.dimension()} neglecting the
 storage room of the points.

 */

@@ -26,9 +26,9 @@ lower dimensional interface `x()`, `y()`, `z()`,

 Vectors are implemented by arrays of variables of type `RT`. All
 operations like creation, initialization, tests, vector arithmetic,
-input and output on a vector \f$ v\f$ take time \f$ O(v.dimension())\f$.
+input and output on a vector \f$ v\f$ take time \cgalBigO{v.dimension()}.
 coordinate access, `dimension()` and conversions take constant
-time. The space requirement of a vector is \f$ O(v.dimension())\f$.
+time. The space requirement of a vector is \cgalBigO{v.dimension()}.

 */
 template< typename Kernel >

@@ -52,7 +52,7 @@ true.

 The implementation uses an algorithm by
 Frederickson and Johnson\cgalCite{fj-fkppc-83}, \cgalCite{fj-gsrsm-84} and runs in
-\f$ \mathcal{O}(n \cdot k + f \cdot \log (n \cdot k))\f$, where \f$ n\f$ is
+\cgalBigO{n \cdot k + f \cdot \log (n \cdot k)}, where \f$ n\f$ is
 the number of input matrices, \f$ k\f$ denotes the maximal dimension of
 any input matrix and \f$ f\f$ the time needed for one feasibility test.

@@ -5,8 +5,8 @@ namespace CGAL {

 The `Greene_convex_decomposition_2` class implements the approximation algorithm of
 Greene for the decomposition of an input polygon into convex
-sub-polygons \cgalCite{g-dpcp-83}. This algorithm takes \f$ O(n \log n)\f$
-time and \f$ O(n)\f$ space, where \f$ n\f$ is the size of the input polygon,
+sub-polygons \cgalCite{g-dpcp-83}. This algorithm takes \cgalBigO{n \log n}
+time and \cgalBigO{n} space, where \f$ n\f$ is the size of the input polygon,
 and outputs a decomposition whose size is guaranteed to be no more
 than four times the size of the optimal decomposition.

@@ -38,7 +38,7 @@ and Mehlhorn for decomposing a polygon into convex
 sub-polygons \cgalCite{hm-ftsp-83}. This algorithm constructs a
 triangulation of the input polygon and proceeds by removing
 unnecessary triangulation edges. Given the triangulation, the
-algorithm requires \f$ O(n)\f$ time and space to construct a convex
+algorithm requires \cgalBigO{n} time and space to construct a convex
 decomposition (where \f$ n\f$ is the size of the input polygon), whose
 size is guaranteed to be no more than four times the size of the
 optimal decomposition.

@@ -69,7 +69,7 @@ namespace CGAL {
 The `Optimal_convex_decomposition_2` class provides an implementation of Greene's
 dynamic programming algorithm for optimal decomposition of a
 polygon into convex sub-polygons \cgalCite{g-dpcp-83}. Note that
-this algorithm requires \f$ O(n^4)\f$ time and \f$ O(n^3)\f$ space in
+this algorithm requires \cgalBigO{n^4} time and \cgalBigO{n^3} space in
 the worst case, where \f$ n\f$ is the size of the input polygon.

@@ -8,8 +8,8 @@ decomposition of a polygon or a polygon with holes into pseudo trapezoids
 utilizing the CGAL::decompose() free function of the
 \ref chapterArrangement_on_surface_2 "2D Arrangements" package.

-The algorithm operates in \f$ O(n \log n)\f$ time and takes
-\f$ O(n)\f$ space at the worst case, where \f$ n\f$ is the
+The algorithm operates in \cgalBigO{n \log n} time and takes
+\cgalBigO{n} space at the worst case, where \f$ n\f$ is the
 size of the input polygon.

 \cgalModels `PolygonWithHolesConvexDecomposition_2`

@@ -12,7 +12,7 @@ connect two reflex vertices with an edge. When this is not possible any
 more, it eliminates the reflex vertices one by one by connecting them
 to other convex vertices, such that the new edge best approximates
 the angle bisector of the reflex vertex. The algorithm operates in
-\f$ O(n^2)\f$ time and takes \f$ O(n)\f$ space at the worst case, where
+\cgalBigO{n^2} time and takes \cgalBigO{n} space at the worst case, where
 \f$ n\f$ is the size of the input polygon.

 \cgalModels `PolygonConvexDecomposition_2`

@@ -53,7 +53,7 @@ they form with the \f$ x\f$-axis; see the figure above.
 The Minkowski sum can therefore be computed using an operation similar to the
 merge step of the merge-sort algorithm\cgalFootnote{See, for example,
 <a href="https://en.wikipedia.org/wiki/Merge_sort">
-https://en.wikipedia.org/wiki/Merge_sort</a>.} in \f$ O(m + n)\f$ time,
+https://en.wikipedia.org/wiki/Merge_sort</a>.} in \cgalBigO{m + n} time,
 starting from the two bottommost vertices in \f$ P\f$ and in \f$ Q\f$ and
 merging the ordered list of edges.

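The merge step described in this hunk can be sketched as follows: start at the two bottommost vertices and interleave the two edge sequences by the angle they form with the x-axis, exactly like merge sort's merge. This is a hedged plain-Python illustration for convex polygons given counterclockwise; the helper name `minkowski_sum_convex` and the coordinate-tuple representation are assumptions, not CGAL's API.

```python
import math

def minkowski_sum_convex(P, Q):
    """O(m + n) Minkowski sum of two convex CCW polygons (sketch)."""
    def bottommost(poly):
        # Rotate the vertex list so the bottommost (then leftmost) vertex is first.
        i = min(range(len(poly)), key=lambda k: (poly[k][1], poly[k][0]))
        return poly[i:] + poly[:i]

    def edges(poly):
        return [(poly[(i + 1) % len(poly)][0] - poly[i][0],
                 poly[(i + 1) % len(poly)][1] - poly[i][1])
                for i in range(len(poly))]

    def ang(e):
        # Edge angle in [0, 2*pi); monotone along a CCW hull started at the bottom.
        return math.atan2(e[1], e[0]) % (2 * math.pi)

    P, Q = bottommost(P), bottommost(Q)
    ep, eq = edges(P), edges(Q)
    # Merge the two sorted edge sequences by angle (merge sort's merge step).
    merged, i, j = [], 0, 0
    while i < len(ep) or j < len(eq):
        if j == len(eq) or (i < len(ep) and ang(ep[i]) <= ang(eq[j])):
            merged.append(ep[i]); i += 1
        else:
            merged.append(eq[j]); j += 1
    # Walk the merged edges from the sum of the two bottommost vertices.
    x, y = P[0][0] + Q[0][0], P[0][1] + Q[0][1]
    out = [(x, y)]
    for dx, dy in merged[:-1]:
        x, y = x + dx, y + dy
        out.append((x, y))
    return out

P = Q = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(minkowski_sum_convex(P, Q)[0])  # (0, 0)
```

Collinear vertices are kept (the unit-square example yields eight boundary vertices); a production version would merge parallel edges.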
@@ -294,7 +294,7 @@ the dynamic-programming algorithm of Greene \cgalCite{g-dpcp-83} for
 computing an optimal decomposition of a polygon into a minimal number
 of convex sub-polygons. While this algorithm results in a small number
 of convex polygons, it consumes rather many resources, as it runs in
-\f$ O(n^4) \f$ time and \f$ O(n^3) \f$ space in the worst case, where
+\cgalBigO{n^4} time and \cgalBigO{n^3} space in the worst case, where
 \f$ n \f$ is the number of vertices in the input polygon.

 <LI>The `Hertel_Mehlhorn_convex_decomposition_2<Kernel>` class

@@ -302,7 +302,7 @@ template implements the approximation algorithm suggested by Hertel and
 Mehlhorn \cgalCite{hm-ftsp-83}, which triangulates the input polygon
 and then discards unnecessary triangulation edges. After triangulation
 (carried out by the constrained-triangulation procedure of CGAL) the
-algorithm runs in \f$ O(n) \f$ time and space, and guarantees that the
+algorithm runs in \cgalBigO{n} time and space, and guarantees that the
 number of sub-polygons it generates is not more than four times the
 optimum.

@@ -310,7 +310,7 @@ optimum.
 implementation of Greene's approximation algorithm
 \cgalCite{g-dpcp-83}, which computes a convex decomposition of the
 polygon based on its partitioning into \f$ y\f$-monotone polygons.
-This algorithm runs in \f$ O(n \log n)\f$ time and \f$ O(n)\f$ space,
+This algorithm runs in \cgalBigO{n \log n} time and \cgalBigO{n} space,
 and has the same guarantee on the quality of approximation as Hertel
 and Mehlhorn's algorithm.

@@ -318,7 +318,7 @@ and Mehlhorn's algorithm.
 template is an implementation of a decomposition algorithm introduced
 in \cgalCite{cgal:afh-pdecm-02}. It is based on the angle-bisector
 decomposition method suggested by Chazelle and Dobkin
-\cgalCite{cd-ocd-85}, which runs in \f$ O(n^2)\f$ time. In addition,
+\cgalCite{cd-ocd-85}, which runs in \cgalBigO{n^2} time. In addition,
 it applies a heuristic by Flato that reduces the number of output
 polygons in many common cases. The convex decompositions that it
 produces usually yield efficient running times for Minkowski sum

@@ -182,7 +182,7 @@ namespace CGAL {

 /// After one or more calls to `AABB_tree_with_join::insert()` the internal data
 /// structure of the tree must be reconstructed. This procedure
-/// has a complexity of \f$O(n log(n))\f$, where \f$n\f$ is the number of
+/// has a complexity of \cgalBigO{n log(n)}, where \f$n\f$ is the number of
 /// primitives of the tree. This procedure is called implicitly
 /// at the first call to a query member function. You can call
 /// AABB_tree_with_join::build() explicitly to ensure that the next call to

@@ -22,7 +22,7 @@ namespace CGAL {

 /*!
  * \class
- * The O(n^4) optimal strategy for decomposing a polygon into convex
+ * The \cgalBigO{n^4} optimal strategy for decomposing a polygon into convex
  * sub-polygons.
  */
 template <typename Kernel_,

@@ -39,7 +39,7 @@ public:

 /*!
  * \class
- * Hertel and Mehlhorn's O(n) approximation strategy for decomposing a
+ * Hertel and Mehlhorn's \cgalBigO{n} approximation strategy for decomposing a
  * polygon into convex sub-polygons.
  */
 template <typename Kernel_,

@@ -56,7 +56,7 @@ public:

 /*!
  * \class
- * Greene's O(n log(n)) approximation strategy for decomposing a polygon into
+ * Greene's \cgalBigO{n log(n)} approximation strategy for decomposing a polygon into
  * convex sub-polygons.
  */
 template <typename Kernel_,

@@ -72,7 +72,7 @@ quadratic number of pieces, which is worst-case optimal. Then up to
 \f$ m\f$ are the complexities of the two input polyhedra (the complexity of
 a `Nef_polyhedron_3` is the sum of its `Vertices`,
 `Halfedges` and `SHalfedges`). In total the operation runs in
-\f$ O(n^3m^3)\f$ time.
+\cgalBigO{n^3m^3} time.

 Since the computation of the Minkowski sum takes quite some time, we
 give the running times of some Minkowski sum computations. They were

@@ -28,7 +28,7 @@ namespace CGAL {

 The function `minkowski_sum_3()` computes the Minkowski sum of two
 given 3D Nef polyhedra \f$ N0\f$ and \f$ N1\f$. Note that the function runs in
-\f$ O(n^3m^3)\f$ time in the worst case, where \f$ n\f$ and
+\cgalBigO{n^3m^3} time in the worst case, where \f$ n\f$ and
 \f$ m\f$ are the complexities of the two input polyhedra (the complexity of
 a `Nef_polyhedron_3` is the sum of its `Vertices`,
 `Halfedges` and `SHalfedges`).

@@ -13,7 +13,7 @@ in the C++ standard. It has a default argument `CGAL_ALLOCATOR(T)`.

 `Union_find<T,A>` is implemented with union by rank and path
 compression. The running time for \f$ m\f$ set operations on \f$ n\f$ elements
-is \f$ O(n \alpha(m,n))\f$ where \f$ \alpha(m,n)\f$ is the extremely slow growing
+is \cgalBigO{n \alpha(m,n)} where \f$ \alpha(m,n)\f$ is the extremely slow growing
 inverse of Ackermann's function.

 */

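The "union by rank and path compression" scheme this hunk refers to is short enough to sketch in full. This is an illustrative Python version of the classic data structure, not CGAL's `Union_find` implementation.

```python
class UnionFind:
    """Disjoint sets with union by rank and path compression (sketch).

    With both heuristics, m operations on n elements cost O(n * alpha(m, n)),
    alpha being the inverse Ackermann function mentioned in the text.
    """
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: every visited node is pointed at its grandparent,
        # flattening the tree for later queries.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

uf = UnionFind(5)
uf.union(0, 1); uf.union(1, 2)
print(uf.find(0) == uf.find(2))  # True
print(uf.find(3) == uf.find(4))  # False
```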
@@ -28,7 +28,7 @@ of type `Data` specified in the definition of `map`.
 \cgalHeading{Implementation}

 `Unique_hash_map` is implemented via a chained hashing scheme. Access
-operations `map``[i]` take expected time \f$ O(1)\f$. The `table_size`
+operations `map``[i]` take expected time \cgalBigO{1}. The `table_size`
 parameter passed to chained hashing can be used to avoid unnecessary
 rehashing when set to the number of expected elements in the map.
 The design is derived from the \stl `hash_map` and the \leda type

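Chained hashing, as referenced in this hunk, keeps one bucket list per hash slot; access is expected constant time when `table_size` is close to the number of entries, which is exactly the tuning advice in the text. A toy sketch (not CGAL's `Unique_hash_map`, and without the rehashing a real table would do):

```python
class ChainedHashMap:
    """Minimal chained hashing scheme: a fixed array of bucket lists."""
    def __init__(self, table_size=16):
        self.buckets = [[] for _ in range(table_size)]

    def __setitem__(self, key, value):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            if k == key:          # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def __getitem__(self, key):
        bucket = self.buckets[hash(key) % len(self.buckets)]
        for k, v in bucket:
            if k == key:
                return v
        raise KeyError(key)

m = ChainedHashMap()
m["a"] = 1
m["a"] = 2
print(m["a"])  # 2
```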
@ -47,7 +47,7 @@ global macro `CGAL_PROFILE`.
|
|||
|
||||
The class `Unique_hash_map` implements an injective mapping between
|
||||
a set of unique keys and a set of data values. This is implemented using
|
||||
a chained hashing scheme and access operations take \f$ O(1)\f$ expected time.
|
||||
a chained hashing scheme and access operations take \cgalBigO{1} expected time.
|
||||
Such a mapping is useful, for example, when keys are pointers,
|
||||
handles, iterators or circulators that refer to unique memory locations.
|
||||
In this case, the default hash function is `Handle_hash_function`.
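The chained-hashing scheme behind the expected-\f$O(1)\f$ access described here can be sketched with a toy separate-chaining map. This is an assumption-laden illustration (the `ChainedMap` class is invented for the example and is not `Unique_hash_map`'s code):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <list>
#include <utility>
#include <vector>

// Toy separate-chaining hash map: each bucket holds a list of (key, value)
// pairs. When the table size is close to the number of stored elements,
// chains stay short on average, so operator[] runs in expected O(1) time.
template <class K, class V>
class ChainedMap {
    std::vector<std::list<std::pair<K, V>>> buckets_;
public:
    explicit ChainedMap(std::size_t table_size = 512) : buckets_(table_size) {}
    V& operator[](const K& key) {
        auto& chain = buckets_[std::hash<K>{}(key) % buckets_.size()];
        for (auto& kv : chain)                   // scan the (short) chain
            if (kv.first == key) return kv.second;
        chain.emplace_back(key, V());            // default-construct missing entries
        return chain.back().second;
    }
};
```

Picking the table size near the expected element count, as the `table_size` parameter above suggests, is exactly what keeps the chains at constant expected length.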
@@ -57,7 +57,7 @@ In this case, the default hash function is `Handle_hash_function`.
 \cgal also provides a class `Union_find` that implements a partition
 of values into disjoint sets. This is implemented with union by rank and
 path compression. The running time for \f$ m\f$ set operations on \f$ n\f$ elements
-is \f$ O(n\alpha(m,n))\f$ where \f$ \alpha(m,n)\f$ is the extremely slowly growing
+is \cgalBigO{n\alpha(m,n)} where \f$ \alpha(m,n)\f$ is the extremely slowly growing
 inverse of Ackermann's function.

 \section MiscellanyProtected Protected Access to Internal Representations

@@ -33,14 +33,14 @@ Operations like `empty` take constant time. The operations
 `clear`, `complement`, `interior`, `closure`,
 `boundary`, `regularization`, input and output take linear
 time. All binary set operations and comparison operations take time
-\f$ O(n \log n)\f$ where \f$ n\f$ is the size of the output plus the size of the
+\cgalBigO{n \log n} where \f$ n\f$ is the size of the output plus the size of the
 input.

 The point location and ray shooting operations are implemented in two
 flavors. The `NAIVE` operations run in linear query time without
 any preprocessing, the `DEFAULT` operations (equals `LMWT`)
 run in sub-linear query time, but preprocessing is triggered with the
-first operation. Preprocessing takes time \f$ O(N^2)\f$, the sub-linear
+first operation. Preprocessing takes time \cgalBigO{N^2}, the sub-linear
 point location time is either logarithmic when LEDA's persistent
 dictionaries are present or if not then the point location time is
 worst-case linear, but experiments show often sublinear runtimes. Ray

@@ -49,7 +49,7 @@ triangulation overlaid on the plane map representation. The cost of
 the walk is proportional to the number of triangles passed in
 direction `d` until an obstacle is met. In a minimum weight
 triangulation of the obstacles (the plane map representing the
-polyhedron) the theory provides a \f$ O(\sqrt{n})\f$ bound for the number
+polyhedron) the theory provides a \cgalBigO{\sqrt{n}} bound for the number
 of steps. Our locally minimum weight triangulation approximates the
 minimum weight triangulation only heuristically (the calculation of
 the minimum weight triangulation is conjectured to be NP hard). Thus

@@ -58,7 +58,7 @@ Operations like `empty` take constant time. The operations
 `clear`, `complement`, `interior`, `closure`,
 `boundary`, `regularization`, input and output take linear
 time. All binary set operations and comparison operations take time
-\f$ O(n \log n)\f$ where \f$ n\f$ is the size of the output plus the size of the
+\cgalBigO{n \log n} where \f$ n\f$ is the size of the output plus the size of the
 input.

 The point location and ray shooting operations are implemented in the

@@ -234,7 +234,7 @@ Because of its simplicity, an octree can be constructed faster than a kd-tree.

 %Orthtree nodes are uniform, so orthtrees will tend to have deeper hierarchies than equivalent kd-trees.
 As a result, orthtrees will generally perform worse for nearest neighbor searches.
-Both nearest neighbor algorithms have a theoretical complexity of O(log(n)),
+Both nearest neighbor algorithms have a theoretical complexity of \cgalBigO{log(n)},
 but the orthtree can generally be expected to have a higher coefficient.

 \cgalFigureBegin{Orthtree_nearest_neighbor_benchmark_fig, nearest_neighbor_benchmark.png}

@@ -22,7 +22,7 @@ type `std::iterator_traits<InputIterator>::%value_type` is defined.

 \cgalHeading{Implementation}

-This function requires \f$ O(n)\f$ time for a polygon with \f$ n\f$ vertices.
+This function requires \cgalBigO{n} time for a polygon with \f$ n\f$ vertices.

 \cgalHeading{Example}

@@ -46,7 +46,7 @@ with the representation type determined by `std::iterator_traits<InputIterator1>
 This function implements the algorithm of Hertel and Mehlhorn
 \cgalCite{hm-ftsp-83} and is based on the class
 `Constrained_triangulation_2`. Given a triangulation of
-the polygon, the function requires \f$ O(n)\f$ time and
+the polygon, the function requires \cgalBigO{n} time and
 space for a polygon with \f$ n\f$ vertices.

 \cgalHeading{Example}

@@ -116,7 +116,7 @@ with the representation type determined by `std::iterator_traits<InputIterator>:
 \cgalHeading{Implementation}

 This function implements the approximation algorithm of
-Greene \cgalCite{g-dpcp-83} and requires \f$ O(n \log n)\f$ time and \f$ O(n)\f$ space
+Greene \cgalCite{g-dpcp-83} and requires \cgalBigO{n \log n} time and \cgalBigO{n} space
 to produce a convex partitioning given a \f$ y\f$-monotone partitioning of a
 polygon with \f$ n\f$ vertices. The function `y_monotone_partition_2()`
 is used to produce the monotone partition.

@@ -184,7 +184,7 @@ with the representation type determined by `std::iterator_traits<InputIterator>:
 \cgalHeading{Implementation}

 This function implements the dynamic programming algorithm of Greene
-\cgalCite{g-dpcp-83}, which requires \f$ O(n^4)\f$ time and \f$ O(n^3)\f$ space to
+\cgalCite{g-dpcp-83}, which requires \cgalBigO{n^4} time and \cgalBigO{n^3} space to
 produce a partitioning of a polygon with \f$ n\f$ vertices.

 \cgalHeading{Example}

@@ -254,8 +254,8 @@ with the representation type determined by `std::iterator_traits<InputIterator>:
 \cgalHeading{Implementation}

 This function implements the algorithm presented by de Berg <I>et al.</I>
-\cgalCite{bkos-cgaa-97} which requires \f$ O(n \log n)\f$ time
-and \f$ O(n)\f$ space for a polygon with \f$ n\f$ vertices.
+\cgalCite{bkos-cgaa-97} which requires \cgalBigO{n \log n} time
+and \cgalBigO{n} space for a polygon with \f$ n\f$ vertices.

 \cgalHeading{Example}

@@ -41,7 +41,7 @@ with the representation type determined by `std::iterator_traits<InputIterator>:

 This function calls `partition_is_valid_2()` using the function object
 `Is_convex_2` to determine the convexity of each partition polygon.
-Thus the time required by this function is \f$ O(n \log n + e \log e)\f$ where
+Thus the time required by this function is \cgalBigO{n \log n + e \log e} where
 \f$ n\f$ is the total number of vertices in the partition polygons and \f$ e\f$ the
 total number of edges.

@@ -103,7 +103,7 @@ with the representation type determined by `std::iterator_traits<InputIterator>:

 \cgalHeading{Implementation}

-This function requires \f$ O(n \log n + e \log e + \Sigma_{i=1}^p m_i)\f$ where \f$ n\f$
+This function requires \cgalBigO{n \log n + e \log e + \Sigma_{i=1}^p m_i} where \f$ n\f$
 is the total number of vertices of the \f$ p\f$ partition polygons, \f$ e\f$ is the
 total number of edges of the partition polygons and \f$ m_i\f$ is the time required
 by `Traits::Is_valid()` to test if partition polygon \f$ p_i\f$ is valid.

@@ -161,7 +161,7 @@ with the representation type determined by `std::iterator_traits<InputIterator>:

 This function uses the function `partition_is_valid_2()` together with
 the function object `Is_y_monotone_2` to determine if each polygon
-is \f$ y\f$-monotone or not. Thus the time required is \f$ O(n \log n + e \log e)\f$
+is \f$ y\f$-monotone or not. Thus the time required is \cgalBigO{n \log n + e \log e}
 where \f$ n\f$ is the total number of vertices of the partition polygons and
 \f$ e\f$ is the total number of edges.

@@ -13,7 +13,7 @@ a convex polygon or not.

 \cgalHeading{Implementation}

-This test requires \f$ O(n)\f$ time for a polygon with \f$ n\f$ vertices.
+This test requires \cgalBigO{n} time for a polygon with \f$ n\f$ vertices.

 */
 template< typename Traits >

@@ -62,7 +62,7 @@ Function object class that indicates all sequences of points are valid.

 \cgalHeading{Implementation}

-This test requires \f$ O(1)\f$ time.
+This test requires \cgalBigO{1} time.

 */
 template< typename Traits >

@@ -110,7 +110,7 @@ a \f$ y\f$-monotone polygon or not.

 \cgalHeading{Implementation}

-This test requires \f$ O(n)\f$ time for a polygon with \f$ n\f$ vertices.
+This test requires \cgalBigO{n} time for a polygon with \f$ n\f$ vertices.

 */
 template< typename Traits >

@@ -31,8 +31,8 @@ Functions are available for partitioning planar polygons into two
 types of subpolygons (`y`-monotone polygons and convex polygons).

 The function that produces a `y`-monotone partitioning is based on the
-algorithm presented in \cgalCite{bkos-cgaa-97} which requires \f$ O(n \log n) \f$ time
-and \f$ O(n) \f$ space for a polygon with \f$ n \f$ vertices and guarantees nothing
+algorithm presented in \cgalCite{bkos-cgaa-97} which requires \cgalBigO{n \log n} time
+and \cgalBigO{n} space for a polygon with \f$ n \f$ vertices and guarantees nothing
 about the number of polygons produced with respect to the optimal number
 Three functions are provided for producing
 convex partitions. Two of these functions produce approximately optimal

@@ -41,12 +41,12 @@ defined in terms of the number of partition polygons. The two functions
 that implement approximation algorithms are guaranteed to produce no more
 than four times the optimal number of convex pieces. The optimal partitioning
 function provides an implementation of Greene's dynamic programming algorithm
-\cgalCite{g-dpcp-83}, which requires \f$ O(n^4) \f$ time and \f$ O(n^3) \f$ space to produce a
+\cgalCite{g-dpcp-83}, which requires \cgalBigO{n^4} time and \cgalBigO{n^3} space to produce a
 convex partitioning. One of the approximation algorithms is also due to
-Greene \cgalCite{g-dpcp-83} and requires \f$ O(n \log n) \f$ time and \f$ O(n) \f$ space
+Greene \cgalCite{g-dpcp-83} and requires \cgalBigO{n \log n} time and \cgalBigO{n} space
 to produce a convex partitioning given a `y`-monotone partitioning. The
 other approximation algorithm is a result of Hertel and
-Mehlhorn \cgalCite{hm-ftsp-83}, which requires \f$ O(n) \f$ time and space to produce
+Mehlhorn \cgalCite{hm-ftsp-83}, which requires \cgalBigO{n} time and space to produce
 a convex partitioning from a triangulation of a polygon.
 Each of the partitioning functions uses a traits class to supply the
 primitive types and predicates used by the algorithms.

@@ -39,7 +39,7 @@ is a polygon whose vertices \f$ v_1, \ldots, v_n\f$ can be divided into two chai
 intersects either chain at most once. For producing a \f$ y\f$-monotone partition
 of a given polygon, the sweep-line algorithm
 presented in \cgalCite{bkos-cgaa-97} is implemented by the function
-`y_monotone_partition_2()`. This algorithm runs in \f$ O(n \log n)\f$ time and requires \f$ O(n)\f$ space.
+`y_monotone_partition_2()`. This algorithm runs in \cgalBigO{n \log n} time and requires \cgalBigO{n} space.
 This algorithm does not guarantee a bound on the number of polygons
 produced with respect to the optimal number.

@@ -72,7 +72,7 @@ An optimal convex partition can be produced using the function `optimal_convex_p
 This function provides an
 implementation of Greene's dynamic programming algorithm for optimal
 partitioning \cgalCite{g-dpcp-83}.
-This algorithm requires \f$ O(n^4)\f$ time and \f$ O(n^3)\f$ space in the worst case.
+This algorithm requires \cgalBigO{n^4} time and \cgalBigO{n^3} space in the worst case.

 The function `approx_convex_partition_2()` implements the simple approximation
 algorithm of Hertel and Mehlhorn \cgalCite{hm-ftsp-83} that

@@ -81,12 +81,12 @@ throwing out unnecessary triangulation edges.
 The triangulation used in this function is one produced by the
 2-dimensional constrained triangulation
 package of \cgal. For a given triangulation, this convex partitioning
-algorithm requires \f$ O(n)\f$ time and space to construct a decomposition into
+algorithm requires \cgalBigO{n} time and space to construct a decomposition into
 no more than four times the optimal number of convex pieces.

 The sweep-line approximation algorithm of Greene \cgalCite{g-dpcp-83}, which,
 given a monotone partition of a polygon, produces a convex partition in
-\f$ O(n \log n)\f$ time and \f$ O(n)\f$ space, is implemented
+\cgalBigO{n \log n} time and \cgalBigO{n} space, is implemented
 by the function `greene_approx_convex_partition_2()`. The function
 `y_monotone_partition_2()` described in
 Section \ref secpartition_2_monotone is used to produce the monotone

@@ -28,19 +28,19 @@ CGAL::Triangulation_data_structure_2<
 \cgalHeading{Implementation}

 Insertion is implemented by inserting in the triangulation, then
-performing a sequence of Delaunay flips. The number of flips is \f$ O(d)\f$
+performing a sequence of Delaunay flips. The number of flips is \cgalBigO{d}
 if the new vertex is of degree \f$ d\f$ in the new triangulation. For
-points distributed uniformly at random, insertion takes time \f$ O(1)\f$ on
+points distributed uniformly at random, insertion takes time \cgalBigO{1} on
 average.

 Removal calls the removal in the triangulation and then
 re-triangulates the hole in such a way that the Delaunay criterion is
-satisfied. Removal of a vertex of degree \f$ d\f$ takes time \f$ O(d^2)\f$. The
-expected degree \f$ d\f$ is \f$ O(1)\f$ for a random vertex in the
+satisfied. Removal of a vertex of degree \f$ d\f$ takes time \cgalBigO{d^2}. The
+expected degree \f$ d\f$ is \cgalBigO{1} for a random vertex in the
 triangulation.

 After a point location step, the nearest neighbor is found in time
-\f$ O(n)\f$ in the worst case, but in expected time \f$ O(1)\f$ on average for
+\cgalBigO{n} in the worst case, but in expected time \cgalBigO{1} on average for
 vertices distributed uniformly at random and any query point.

 \sa `CGAL::Periodic_2_triangulation_2<Traits,Tds>`

@@ -55,7 +55,7 @@ optional parameter is given).

 Insertion of a point is done by locating a face that contains the
 point, and then splitting this face. Apart from the location,
-insertion takes a time \f$ O(1)\f$.
+insertion takes a time \cgalBigO{1}.

 Removal of a vertex is more difficult than in the Euclidean space,
 since the star of a vertex may not be disjoint from the star of a

@@ -231,7 +231,7 @@ bool is_convex_2(ForwardIterator first,
 ///
 /// The simplicity test is implemented by means of a plane sweep algorithm.
 /// The algorithm is quite robust when used with inexact number types.
-/// The running time is `O(n log n)`, where n is the number of vertices of the
+/// The running time is \cgalBigO{n log n}, where n is the number of vertices of the
 /// polygon.
 ///
 /// \sa `PolygonTraits_2`

@@ -543,8 +543,8 @@ This can allow a user to stop the algorithm if a timeout needs to be implemented

 The hole filling algorithm has a complexity which depends on the
 number of vertices. While \cgalCite{liepa2003filling} has a running
-time of \f$ O(n^3)\f$ , \cgalCite{zou2013algorithm} in most cases has
-running time of \f$ O(n \log n)\f$. We benchmarked the function
+time of \cgalBigO{n^3} , \cgalCite{zou2013algorithm} in most cases has
+running time of \cgalBigO{n \log n}. We benchmarked the function
 `triangulate_refine_and_fair_hole()` for the two meshes below (as well as two
 more meshes with smaller holes). The machine used was a PC running
 Windows 10 with an Intel Core i7 CPU clocked at 2.70 GHz.

@@ -351,7 +351,7 @@ Similarly, if the solver knows that the program is nonnegative, it
 will be more efficient than under the general bounds
 \f$ \qpl\leq \qpx \leq \qpu\f$.
 You can argue that nonnegativity <I>is</I> something that could easily
-be checked in time \f$ O(n)\f$ beforehand, but then again nonnegative
+be checked in time \cgalBigO{n} beforehand, but then again nonnegative
 programs are so frequent that the syntactic sugar aspect becomes
 somewhat important. After all, we can save four iterators in
 specifying a nonnegative linear program in terms of the concept

@@ -82,7 +82,7 @@ container. The iterator does not have constant amortized time complexity for
 the increment and decrement operations in all cases, only when not too many
 elements have not been freed (i.e.\ when the `size()` is close to the
 `capacity()`). Iterating from `begin()` to `end()` takes
-`O(capacity())` time, not `size()`. In the case where the container
+\cgalBigO{capacity()} time, not `size()`. In the case where the container
 has a small `size()` compared to its `capacity()`, we advise to
 "defragment the memory" by copying the container if the iterator performance
 is needed.

@@ -661,7 +661,7 @@ void clear();

 /// \name Ownership testing
 /// The following functions are mostly helpful for efficient debugging, since
-/// their complexity is \f$ O(\sqrt{\mathrm{c.capacity()}})\f$.
+/// their complexity is \cgalBigO{\sqrt{\mathrm{c.capacity()}}}.
 /// @{

 /*!

@@ -681,7 +681,7 @@ bool owns_dereferenceable(const_iterator pos);
 /// @{
 /*!
 adds the items of `cc2` to the end of `cc` and `cc2` becomes empty.
-The time complexity is O(`cc`.`capacity()`-`cc`.`size()`).
+The time complexity is \cgalBigO{cc.capacity()-cc.size()}.
 \pre `cc2` must not be the same as `cc`, and the allocators of `cc` and `cc2` must be compatible: `cc.get_allocator() == cc2.get_allocator()`.
 */
 void merge(Compact_container<T, Allocator> &cc);

@@ -92,7 +92,7 @@ container. The iterator does not have constant amortized time complexity for
 the increment and decrement operations in all cases, only when not too many
 elements have not been freed (i.e.\ when the `size()` is close to the
 `capacity()`). Iterating from `begin()` to `end()` takes
-`O(capacity())` time, not `size()`. In the case where the container
+\cgalBigO{capacity()} time, not `size()`. In the case where the container
 has a small `size()` compared to its `capacity()`, we advise to
 \"defragment the memory\" by copying the container if the iterator performance
 is needed.

@@ -289,7 +289,7 @@ complexity. No exception is thrown.

 /// \name Ownership testing
 /// The following functions are mostly helpful for efficient debugging, since
-/// their complexity is \f$ O(\sqrt{\mathrm{c.capacity()}})\f$.
+/// their complexity is \cgalBigO{\sqrt{\mathrm{c.capacity()}}}.
 /// @{
 /// returns whether `pos` is in the range `[ccc.begin(), ccc.end()]` (`ccc.end()` included).
 bool owns(const_iterator pos);

@@ -302,7 +302,7 @@ complexity. No exception is thrown.
 /// @{
 /*!
 adds the items of `ccc2` to the end of `ccc` and `ccc2` becomes empty.
-The time complexity is O(`ccc`.`capacity()`-`ccc`.`size()`).
+The time complexity is \cgalBigO{ccc.capacity()-ccc.size()}.
 \pre `ccc2` must not be the same as `ccc`, and the allocators of `ccc` and `ccc2` must be compatible: `ccc.get_allocator() == ccc2.get_allocator()`.
 */
 void merge(Concurrent_compact_container<T, Allocator> &ccc2);

@@ -749,7 +749,7 @@ void reverse();
 /// @{
 /*!
 sorts the list `ipl` according to the
-`operator<` in time \f$ O(n \log n)\f$ where `n = size()`.
+`operator<` in time \cgalBigO{n \log n} where `n = size()`.
 It is stable.
 \pre a suitable `operator<` for the type `T`.
 */

@@ -73,12 +73,12 @@ less-than operator (`operator<`).

 `Multiset` uses a proprietary implementation of a red-black tree
 data-structure. The red-black tree invariants guarantee that the height of a
-tree containing \f$ n\f$ elements is \f$ O(\log{n})\f$ (more precisely, it is bounded by
+tree containing \f$ n\f$ elements is \cgalBigO{\log{n}} (more precisely, it is bounded by
 \f$ 2 \log_{2}{n}\f$). As a consequence, all methods that accept an element and need
 to locate it in the tree (namely `insert(x)`, `erase(x)`,
 `find(x)`, `count(x)`, `lower_bound(x)` , `upper_bound(x)`,
-`find_lower(x)` and `equal_range(x)`) take \f$ O(\log{n})\f$ time and
-perform \f$ O(\log{n})\f$ comparison operations.
+`find_lower(x)` and `equal_range(x)`) take \cgalBigO{\log{n}} time and
+perform \cgalBigO{\log{n}} comparison operations.

 On the other hand, the set operations that accept a position iterator (namely
 `insert_before(pos, x)`, `insert_after(pos, x)` and `erase(pos)`)

@@ -87,12 +87,12 @@ cost (see \cgalCite{gs-dfbt-78} and \cgalCite{t-dsna-83} for more details).
 More important, these set operations require <I>no</I> comparison operations.
 Therefore, it is highly recommended to maintain the set via iterators
 to the stored elements, whenever possible. The function `insert(pos, x)`
-is safer to use, but it takes amortized \f$ O(\min\{d,\log{n}\})\f$ time, where \f$ d\f$
+is safer to use, but it takes amortized \cgalBigO{\min\{d,\log{n}\}} time, where \f$ d\f$
 is the distance between the given position and the true position of `x`.
 In addition, it always performs at least two comparison operations.

 The `catenate()` and `split()` functions are also very efficient, and
-can be performed in \f$ O(\log{n})\f$ time, where \f$ n\f$ is the total number of
+can be performed in \cgalBigO{\log{n}} time, where \f$ n\f$ is the total number of
 elements in the sets, and without performing any comparison operations
 (see \cgalCite{t-dsna-83} for the details).
 Note however that the size of two sets resulting from a split operation is

@@ -544,25 +544,25 @@ public:
 //@{

 /*!
- * Default constructor. [takes O(1) operations]
+ * Default constructor. [takes \cgalBigO{1} operations]
 */
 Multiset ();

 /*!
- * Constructor with a comparison object. [takes O(1) operations]
+ * Constructor with a comparison object. [takes \cgalBigO{1} operations]
 * \param comp A comparison object to be used by the tree.
 */
 Multiset (const Compare& comp);

 /*!
- * Copy constructor. [takes O(n) operations]
+ * Copy constructor. [takes \cgalBigO{n} operations]
 * \param tree The copied tree.
 */
 Multiset (const Self& tree);

 /*!
 * Construct a tree that contains all objects in the given range.
- * [takes O(n log n) operations]
+ * [takes \cgalBigO{n log n} operations]
 * \param first An iterator for the first object in the range.
 * \param last A past-the-end iterator for the range.
 */

@@ -587,18 +587,18 @@ public:
 }

 /*!
- * Destructor. [takes O(n) operations]
+ * Destructor. [takes \cgalBigO{n} operations]
 */
 virtual ~Multiset () noexcept(!CGAL_ASSERTIONS_ENABLED);

 /*!
- * Assignment operator. [takes O(n) operations]
+ * Assignment operator. [takes \cgalBigO{n} operations]
 * \param tree The copied tree.
 */
 Self& operator= (const Self& tree);

 /*!
- * Swap two trees. [takes O(1) operations]
+ * Swap two trees. [takes \cgalBigO{1} operations]
 * \param tree The copied tree.
 */
 void swap (Self& tree);

@@ -608,13 +608,13 @@ public:
 //@{

 /*!
- * Test two trees for equality. [takes O(n) operations]
+ * Test two trees for equality. [takes \cgalBigO{n} operations]
 * \param tree The compared tree.
 */
 bool operator== (const Self& tree) const;

 /*!
- * Check if our tree is lexicographically smaller. [takes O(n) operations]
+ * Check if our tree is lexicographically smaller. [takes \cgalBigO{n} operations]
 * \param tree The compared tree.
 */
 bool operator< (const Self& tree) const;

@@ -707,8 +707,8 @@ public:
 }

 /*!
- * Get the size of the tree. [takes O(1) operations, unless the tree
- * was involved in a split operation, then it may take O(n) time.]
+ * Get the size of the tree. [takes \cgalBigO{1} operations, unless the tree
+ * was involved in a split operation, then it may take \cgalBigO{n} time.]
 * \return The number of objects stored in the tree.
 */
 size_t size () const;

@@ -725,14 +725,14 @@ public:
 /// \name Insertion functions.

 /*!
- * Insert an object into the tree. [takes O(log n) operations]
+ * Insert an object into the tree. [takes \cgalBigO{log n} operations]
 * \param object The object to be inserted.
 * \return An iterator pointing to the inserted object.
 */
 iterator insert (const Type& object);

 /*!
- * Insert a range of k objects into the tree. [takes O(k log n) operations]
+ * Insert a range of k objects into the tree. [takes \cgalBigO{k log n} operations]
 * \param first An iterator for the first object in the range.
 * \param last A past-the-end iterator for the range.
 */

@@ -751,7 +751,7 @@ public:

 /*!
 * Insert an object to the tree, with a given hint to its position.
- * [takes O(log n) operations at worst-case, but only O(1) amortized]
+ * [takes \cgalBigO{log n} operations at worst-case, but only \cgalBigO{1} amortized]
 * \param position A hint for the position of the object.
 * \param object The object to be inserted.
 * \return An iterator pointing to the inserted object.
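The hinted-insertion contract quoted in this hunk (worst-case \f$O(\log n)\f$, amortized \f$O(1)\f$ with a good hint) has a direct standard-library counterpart in `std::multiset::insert(hint, value)`, which can stand in for a runnable illustration; the helper function is invented for the example:

```cpp
#include <cassert>
#include <set>

// Hinted insertion mirrors the contract documented above: when the hint
// points at the position just after where the element belongs, insertion is
// amortized constant time; a wrong hint merely falls back to O(log n).
std::multiset<int> make_sorted_run(int n) {
    std::multiset<int> s;
    for (int i = 0; i < n; ++i)
        s.insert(s.end(), i);  // each key lands right before end(): amortized O(1)
    return s;
}
```

This is why bulk insertion of already-sorted data into a balanced search tree is effectively linear rather than \f$O(n \log n)\f$.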

@@ -761,7 +761,7 @@ public:

 /*!
 * Insert an object to the tree, as the successor the given object.
- * [takes O(log n) operations at worst-case, but only O(1) amortized]
+ * [takes \cgalBigO{log n} operations at worst-case, but only \cgalBigO{1} amortized]
 * \param position Points to the object after which the new object should
 * be inserted (or an invalid iterator to insert the object
 * as the tree minimum).

@@ -774,7 +774,7 @@ public:

 /*!
 * Insert an object to the tree, as the predecessor the given object.
- * [takes O(log n) operations at worst-case, but only O(1) amortized]
+ * [takes \cgalBigO{log n} operations at worst-case, but only \cgalBigO{1} amortized]
 * \param position Points to the object before which the new object should
 * be inserted (or an invalid iterator to insert the object
 * as the tree maximum).

@@ -789,7 +789,7 @@ public:
 //@{

 /*!
- * Erase objects from the tree. [takes O(log n) operations]
+ * Erase objects from the tree. [takes \cgalBigO{log n} operations]
 * \param object The object to be removed.
 * \return The number of objects removed from the tree.
 * Note that all iterators to the erased objects become invalid.

@@ -798,7 +798,7 @@ public:

 /*!
 * Remove the object pointed by the given iterator.
- * [takes O(log n) operations at worst-case, but only O(1) amortized]
+ * [takes \cgalBigO{log n} operations at worst-case, but only \cgalBigO{1} amortized]
 * \param position An iterator pointing the object to be erased.
 * \pre The iterator must be a valid.
 * Note that all iterators to the erased object become invalid.

@@ -806,7 +806,7 @@ public:
 void erase (iterator position);

 /*!
- * Clear the contents of the tree. [takes O(n) operations]
+ * Clear the contents of the tree. [takes \cgalBigO{n} operations]
 */
 void clear ();

@@ -817,7 +817,7 @@ public:

 /*!
 * Search the tree for the given key (non-const version).
- * [takes O(log n) operations]
+ * [takes \cgalBigO{log n} operations]
 * \param key The query key.
 * \param comp_key A comparison functor for comparing keys and objects.
 * \return A iterator pointing to the first equivalent object in the tree,

@@ -843,7 +843,7 @@ public:

 /*!
 * Search the tree for the given key (const version).
- * [takes O(log n) operations]
+ * [takes \cgalBigO{log n} operations]
 * \param key The query key.
 * \param comp_key A comparison functor for comparing keys and objects.
 * \return A iterator pointing to the first equivalent object in the tree,

@@ -869,7 +869,7 @@ public:

 /*!
 * Count the number of object in the tree equivalent to a given key.
- * [takes O(log n + d) operations]
+ * [takes \cgalBigO{log n + d} operations]
 * \param key The query key.
 * \param comp_key A comparison functor for comparing keys and objects.
 * \return The number of equivalent objects.

@@ -905,7 +905,7 @@ public:

 /*!
 * Get the first element whose key is not less than a given key
- * (non-const version). [takes O(log n) operations]
+ * (non-const version). [takes \cgalBigO{log n} operations]
 * \param key The query key.
 * \param comp_key A comparison functor for comparing keys and objects.
 * \return The lower bound of the key, or end() if the key is not found

@@ -931,7 +931,7 @@ public:

 /*!
 * Get the first element whose key is not less than a given key
- * (non-const version). [takes O(log n) operations]
+ * (non-const version). [takes \cgalBigO{log n} operations]
 * \param key The query key.
 * \param comp_key A comparison functor for comparing keys and objects.
 * \return The lower bound of the key, along with a flag indicating whether

@@ -957,7 +957,7 @@ public:

 /*!
 * Get the first element whose key is greater than a given key
- * (non-const version). [takes O(log n) operations]
+ * (non-const version). [takes \cgalBigO{log n} operations]
 * \param key The query key.
 * \param comp_key A comparison functor for comparing keys and objects.
 * \return The upper bound of the key, or end() if the key is not found

@@ -983,7 +983,7 @@ public:

 /*!
 * Get the first element whose key is not less than a given key
- * (const version). [takes O(log n) operations]
+ * (const version). [takes \cgalBigO{log n} operations]
 * \param key The query key.
 * \param comp_key A comparison functor for comparing keys and objects.
 * \return The lower bound of the key, or end() if the key is not found

@@ -1009,7 +1009,7 @@ public:

 /*!
 * Get the first element whose key is not less than a given key
- * (const version). [takes O(log n) operations]
+ * (const version). [takes \cgalBigO{log n} operations]
 * \param key The query key.
 * \param comp_key A comparison functor for comparing keys and objects.
 * \return The lower bound of the key, along with a flag indicating whether

@@ -1035,7 +1035,7 @@ public:

 /*!
 * Get the first element whose key is greater than a given key
- * (const version). [takes O(log n) operations]
+ * (const version). [takes \cgalBigO{log n} operations]
 * \param object The query object.
 * \return The upper bound of the key, or end() if the key is not found
 * in the tree.

@@ -1060,7 +1060,7 @@ public:

 /*!
 * Get the range of objects in the tree that are equivalent to a given key
- * (non-const version). [takes O(log n + d) operations]
+ * (non-const version). [takes \cgalBigO{log n + d} operations]
 * \param key The query key.
 * \param comp_key A comparison functor for comparing keys and objects.
 * \return A pair of (lower_bound(key), upper_bound(key)).

@@ -1108,7 +1108,7 @@ public:

 /*!
 * Get the range of objects in the tree that are equivalent to a given key
* (const version). [takes O(log n + d) operations]
|
||||
* (const version). [takes \cgalBigO{log n + d} operations]
|
||||
* \param key The query key.
|
||||
* \param comp_key A comparison functor for comparing keys and objects.
|
||||
* \return A pair of (lower_bound(key), upper_bound(key)).
|
||||
|
|
@ -1163,7 +1163,7 @@ public:
|
|||
|
||||
/*!
|
||||
* Replace the object pointed by a given iterator with another object.
|
||||
* [takes O(1) operations]
|
||||
* [takes \cgalBigO{1} operations]
|
||||
* \param position An iterator pointing the object to be replaced.
|
||||
* \param object The new object.
|
||||
* \pre The given iterator is valid.
|
||||
|
|
@ -1174,7 +1174,7 @@ public:
|
|||
|
||||
/*!
|
||||
* Swap the location two objects in the tree, given by their positions.
|
||||
* [takes O(1) operations]
|
||||
* [takes \cgalBigO{1} operations]
|
||||
* \param pos1 An iterator pointing to the first object.
|
||||
* \param pos1 An iterator pointing to the second object.
|
||||
* \pre The two iterators are valid.
|
||||
|
|
@ -1184,7 +1184,7 @@ public:
|
|||
|
||||
/*!
|
||||
* Catenate the tree with a given tree, whose minimal object is not less
|
||||
* than the maximal object of this tree. [takes O(log n) operations]
|
||||
* than the maximal object of this tree. [takes \cgalBigO{log n} operations]
|
||||
* The function clears the other given tree, but all its iterators remain
|
||||
* valid and can be used with the catenated tree.
|
||||
* \param tree The tree to catenate to out tree.
|
||||
|
|
@ -1196,7 +1196,7 @@ public:
|
|||
/*!
|
||||
* Split the tree such that all remaining objects are less than a given
|
||||
* key, and all objects greater than (or equal to) this key form
|
||||
* a new output tree. [takes O(log n) operations]
|
||||
* a new output tree. [takes \cgalBigO{log n} operations]
|
||||
* \param key The split key.
|
||||
* \param comp_key A comparison functor for comparing keys and objects.
|
||||
* \param tree Output: The tree that will eventually contain all objects
|
||||
|
|
@ -1220,7 +1220,7 @@ public:
|
|||
/*!
|
||||
* Split the tree at a given position, such that it contains all objects
|
||||
* in the range [begin, position) and all objects in the range
|
||||
* [position, end) form a new output tree. [takes O(log n) operations]
|
||||
* [position, end) form a new output tree. [takes \cgalBigO{log n} operations]
|
||||
* \param position An iterator pointing at the split position.
|
||||
* \param tree Output: The output tree.
|
||||
* \pre The output tree is initially empty.
|
||||
|
|
@ -1240,13 +1240,13 @@ public:
|
|||
bool is_valid() const;
|
||||
|
||||
/*!
|
||||
* Get the height of the tree. [takes O(n) operations]
|
||||
* Get the height of the tree. [takes \cgalBigO{n} operations]
|
||||
* \return The length of the longest path from the root to a leaf node.
|
||||
*/
|
||||
size_t height () const;
|
||||
|
||||
/*!
|
||||
* Get the black-height of the tree. [takes O(1) operations]
|
||||
* Get the black-height of the tree. [takes \cgalBigO{1} operations]
|
||||
* \return The number of black nodes from the root to each leaf node.
|
||||
*/
|
||||
inline size_t black_height () const
|
||||
|
|
|
|||
|
|
@@ -9,11 +9,11 @@ points that lie inside a given \f$ d\f$-dimensional interval.

 \cgalHeading{Implementation}

-The construction of a \f$ d\f$-dimensional range tree takes \f$ {O}(n\log n^{d-1})\f$
+The construction of a \f$ d\f$-dimensional range tree takes \cgalBigO{n\log n^{d-1}}
 time. The points in
-the query window are reported in time \f$ {O}(k+{\log}^d n )\f$, where \f$ k\f$
+the query window are reported in time \cgalBigO{k+{\log}^d n }, where \f$ k\f$
 is the number of reported points.
-The tree uses \f$ {O}(n\log n^{d-1})\f$ storage.
+The tree uses \cgalBigO{n\log n^{d-1}} storage.

 */
 template< typename Data, typename Window, typename Traits >
@@ -8,10 +8,10 @@ namespace CGAL {

 \cgalHeading{Implementation}

-A \f$ d\f$-dimensional segment tree is constructed in \f$ {O}(n\log n^d)\f$ time.
-An inverse range query is performed in time \f$ {O}(k+{\log}^d n )\f$, where \f$ k\f$
+A \f$ d\f$-dimensional segment tree is constructed in \cgalBigO{n\log n^d} time.
+An inverse range query is performed in time \cgalBigO{k+{\log}^d n }, where \f$ k\f$
 is the number of reported intervals.
-The tree uses \f$ {O}(n\log n^d)\f$ storage.
+The tree uses \cgalBigO{n\log n^d} storage.

 */
 template< typename Data, typename Window, typename Traits >
@@ -299,9 +299,9 @@ The 2-dimensional tree is a binary search tree on the first dimension. Each subl
 For the d-dimensional range tree, the figure shows one sublayer tree for each
 layer of the tree.

-The tree can be built in \f$ O(n\log^{d-1} n)\f$ time and
-needs \f$ O(n\log^{d-1} n)\f$ space. The ` d`-dimensional points that lie in the
-` d`-dimensional query interval can be reported in \f$ O(\log^dn+k)\f$ time,
+The tree can be built in \cgalBigO{n\log^{d-1} n} time and
+needs \cgalBigO{n\log^{d-1} n} space. The ` d`-dimensional points that lie in the
+` d`-dimensional query interval can be reported in \cgalBigO{\log^dn+k} time,
 where ` n` is the total number of points and ` k` is the number of
 reported points.

@@ -437,11 +437,11 @@ sublayer tree of a vertex `v` is a segment tree according to
 the second dimension of all data items of `v`.


-The tree can be built in \f$ O(n\log^{d} n)\f$ time and
-needs \f$ O(n\log^{d} n)\f$ space.
+The tree can be built in \cgalBigO{n\log^{d} n} time and
+needs \cgalBigO{n\log^{d} n} space.
 The processing time for inverse range
-queries in an ` d`-dimensional segment tree is \f$ O(\log^d n
-+k)\f$ time, where ` n` is the total number of intervals and ` k` is
+queries in an ` d`-dimensional segment tree is \cgalBigO{\log^d n
++k} time, where ` n` is the total number of intervals and ` k` is
 the number of reported intervals.

 One possible application of a two-dimensional segment tree is the
@@ -276,7 +276,7 @@ homogeneous coordinates of bit size at most \f$ 3b+O(1)\f$.
 The supporting lines of the segments (they are needed in some of
 the predicates) have coefficients which are always of bit size
 \f$ 2b+O(1)\f$. As a result, the bit size of the expressions involved in
-our predicates will always be \f$ O(b)\f$, independently of the
+our predicates will always be \cgalBigO{b}, independently of the
 size of the input.
 The `SegmentDelaunayGraphSite_2` concept encapsulates the ideas
 presented above. A site is represented in this concept by up to four

@@ -348,7 +348,7 @@ intersecting sites represented in homogeneous coordinates of bit size
 \f$ b\f$, the maximum bit size of the algebraic expressions involved in the
 predicates is \f$ 40 b+O(1)\f$. Given our site representation given above we
 can guarantee that even in the case of strongly intersecting sites,
-the algebraic degree of the predicates remains \f$ O(b)\f$, independently
+the algebraic degree of the predicates remains \cgalBigO{b}, independently
 of the size of the input. What we want to focus in the remainder of
 this section are the different kinds of filtering techniques that we
 have employed in our implementation.
@@ -82,8 +82,8 @@ ill-condition.

 The implementation is based on an algorithm developed by Shamai and
 Halperin; see \cgalCite{cgal:ss-spfis-16} for the generalization of
-the algorithm to 3D. The time and space complexities are in \f$O(n)\f$
-and \f$O(1)\f$, respectively. In order to ensure robustness and
+the algorithm to 3D. The time and space complexities are in \cgalBigO{n}
+and \cgalBigO{1}, respectively. In order to ensure robustness and
 correctness you must use a kernel that guarantees exact
 constructions as well as exact predicates, e,g,.
 `Exact_predicates_exact_constructions_kernel`.
@@ -202,8 +202,8 @@ public:
   /// Read access to a matrix coefficient.
   ///
   /// \warning Complexity:
-  /// - O(log(n)) if the matrix is already built.
-  /// - O(n) if the matrix is not built.
+  /// - \cgalBigO{log(n)} if the matrix is already built.
+  /// - \cgalBigO{n} if the matrix is not built.
   /// `n` being the number of entries in the matrix.
   ///
   /// \pre 0 <= i < row_dimension().
@@ -48,7 +48,7 @@ public:
   * T2 should be constructable by T1
   *
   * Implementation note: it is a variant of Floyd generator, and has uniform distribution
-  * where k = number of centers = complexity is O(k log k), and mem overhead is O(k)
+  * where k = number of centers = complexity is \cgalBigO{k log k}, and mem overhead is \cgalBigO{k}
   *
   * I also left previous implementation below, it might be useful where number of centers close to number of points
   */

@@ -78,7 +78,7 @@ public:

   // To future reference, I also left prev implementation which is a variant of Fisher–Yates shuffle, however to keep `points` intact I use another vector to
   // store and swap indices.
-  // where n = number of points; complexity = O(n), memory overhead = O(n)
+  // where n = number of points; complexity = \cgalBigO{n}, memory overhead = \cgalBigO{n}
   /*
   template<class T1, class T2>
   void forgy_initialization(std::size_t number_of_centers, const std::vector<T1>& points, std::vector<T2>& centers)
@@ -46,7 +46,7 @@ type the property map must provide an index between 0 and the number of simplice
 the class `CGAL::Surface_mesh` as model of `FaceListGraph`.
 If you use the class `CGAL::Polyhedron_3`, you should use it with the item class `CGAL::Polyhedron_items_with_id_3`,
 for which default property maps are provided.
-This item class associates to each simplex an index that provides a \f$O(1)\f$ time access to the indices.
+This item class associates to each simplex an index that provides a \cgalBigO{1} time access to the indices.
 Note that the initialization of the property maps requires a call to `set_halfedgeds_items_id()`.

 The access to the embedding of each vertex is done using a point vertex property map associating to each vertex a 3D point.

@@ -111,7 +111,7 @@ the kernel's number type to `double`, using the `std::sqrt`, and converting it b
 with directly supports square roots to get the most precision of the shortest path computations.

 Using a kernel such as `CGAL::Exact_predicates_exact_constructions_kernel_with_sqrt` with this package will indeed provide the exact shortest paths,
-but it will be extremely slow. Indeed, in order to compute the distance along the surface, it is necessary to unfold sequences of faces, edge-to-edge, out into a common plane. The functor `SurfaceMeshShortestPathTraits::Construct_triangle_3_to_triangle_2_projection` provides an initial layout of the first face in a sequence, by rotating a given face into the `xy`-plane. `SurfaceMeshShortestPathTraits::Construct_triangle_3_along_segment_2_flattening` unfolds a triangle into the plane, using a specified segment as a base. Since this results in a chain of constructed triangles in the plane, the exact representation types used with this kernel (either `CORE::Expr` or `leda_real`) will process extremely slow, even on very simple inputs. This is because the exact representations will effectively add an \f$O(n)\f$ factor to every computation.
+but it will be extremely slow. Indeed, in order to compute the distance along the surface, it is necessary to unfold sequences of faces, edge-to-edge, out into a common plane. The functor `SurfaceMeshShortestPathTraits::Construct_triangle_3_to_triangle_2_projection` provides an initial layout of the first face in a sequence, by rotating a given face into the `xy`-plane. `SurfaceMeshShortestPathTraits::Construct_triangle_3_along_segment_2_flattening` unfolds a triangle into the plane, using a specified segment as a base. Since this results in a chain of constructed triangles in the plane, the exact representation types used with this kernel (either `CORE::Expr` or `leda_real`) will process extremely slow, even on very simple inputs. This is because the exact representations will effectively add an \cgalBigO{n} factor to every computation.

 \section Surface_mesh_shortest_pathExamples Examples
@@ -45,7 +45,7 @@ namespace CGAL {

 \brief Computes shortest surface paths from one or more source points on a surface mesh.

-\details Uses an optimized variation of Chen and Han's \f$ O(n^2) \f$ algorithm by Xin and Wang.
+\details Uses an optimized variation of Chen and Han's \cgalBigO{n^2} algorithm by Xin and Wang.
 Refer to those respective papers for the details of the implementation.

 \tparam Traits a model of `SurfaceMeshShortestPathTraits`.
@@ -53,7 +53,7 @@ Given a cycle drawn on a surface one can ask if the cycle can be continuously de
 The algorithm implemented in this package builds a data structure to efficiently answer queries of the following forms:
 - Given a combinatorial surface \f$\cal{M}\f$ and a closed combinatorial curve specified as a sequence of edges of \f$\cal{M}\f$, decide if the curve is homotopic to a simple one on \f$\cal{M}\f$.

-The algorithm used is based on a paper by Despré and Lazarus \cgalCite{cgal:dl-cginc-19}, providing a \f$O(n + l\log{l})\f$-time algorithm where \f$n\f$ is the complexity of \f$\cal{M}\f$ and \f$l\f$ is the length of the path.
+The algorithm used is based on a paper by Despré and Lazarus \cgalCite{cgal:dl-cginc-19}, providing a \cgalBigO{n + l\log{l}}-time algorithm where \f$n\f$ is the complexity of \f$\cal{M}\f$ and \f$l\f$ is the length of the path.

 \section SMTopology_HowToUse API Description

@@ -322,7 +322,7 @@ As the algorithm inductively builds orderings, it has to determine a relative or
 The red edge is being processed and is compared against the pink edge which is the first edge of the path. The blue and green edges are the first diverging pair when tracing backward. The dashed line means that edges have not been processed yet. Since the green edge lies to the right of the blue edge around the vertex, the red edge must be to the right of the pink edge in the ordering.
 \cgalFigureEnd

-The transverse orderings are stored in red-black trees, one for each edge of the quadrangulation. So each insertion or search takes \f$O(\log{l})\f$ time, where \f$l\f$ is the length of the closed curve.
+The transverse orderings are stored in red-black trees, one for each edge of the quadrangulation. So each insertion or search takes \cgalBigO{\log{l}} time, where \f$l\f$ is the length of the closed curve.

 \subsubsection SMTopology_Simplicity_Test_Verification Verify Ordering
 After computing a tentative ordering within the edges of the path, we have to verify that such an ordering could result in an intersection free arrangement. Since there is no intersection within an edge, we only need to verify this for each vertex in the quadrangulation. Each vertex is naturally associated with a circular ordering of the incident path edges by concatenating clockwise the orderings computed for every incident edge in the quadrangulation. We consider the two consecutive edges composing a turn (one going in the vertex, one going out of the vertex) at the vertex being verified as a <em>pair</em>. The ordering at the vertex is intersection free if and only if there is no two pairs crossing each other according to the clockwise ordering around the vertex. In other words, for any two pairs \f$(a, a')\f$ and \f$(b, b')\f$, none of the subsequences \f$a, b, a', b'\f$ or \f$a, b', a', b\f$ should appear in the clockwise ordering. This is very similar to verifying balanced parentheses in a string. We traverse clockwise at each vertex and use a stack-based algorithm to verify in linear time that the ordering produces a cycle without self-intersection.
@@ -13,7 +13,7 @@ namespace CGAL {
 Let \f$ {\mathcal C} = \{C_1, C_2, \ldots, C_n\}\f$ be a set of
 curves. We wish to compute all intersection points between two curves
 in the set in an output-sensitive manner, without having to go over
-all \f$ O(n^2)\f$ curve pairs. To this end, we sweep an imaginary line
+all \cgalBigO{n^2} curve pairs. To this end, we sweep an imaginary line
 \f$ l\f$ from \f$ x = -\infty\f$ to \f$ x = \infty\f$ over the
 plane. While sweeping the plane, we keep track of the order of curves
 intersecting it. This order changes at a finite number of <I>event

@@ -36,8 +36,8 @@ employs certified computations. This traits class must be a model of
 the `ArrangementTraits_2` concept - see the Chapter \ref
 chapterArrangement_on_surface_2 "2D Arrangements" for more details.

-The complexity of the surface-sweep algorithm is \f$ O((n +
-k)\log{n})\f$ where \f$ n\f$ is the number of the input curves and \f$
+The complexity of the surface-sweep algorithm is \cgalBigO{(n +
+k)\log{n}} where \f$ n\f$ is the number of the input curves and \f$
 k\f$ is the number of intersection points induced by these curves.

 \section Surface_sweep_2Example Example
@@ -513,11 +513,11 @@ Then, for each point to insert, it locates it by walking in the triangulation,
 using the previously inserted vertex as a "hint". Finally, the point is
 inserted.
 In the worst case scenario, without spatial sort, the expected complexity is
-\f$ O(n^{\lceil\frac{d}{2}\rceil+1}) \f$.
+\cgalBigO{n^{\lceil\frac{d}{2}\rceil+1}}.
 When the algorithm is run on uniformly distributed points, the localization complexity is
-\f$ O(n^{\frac{1}{d}}) \f$ and the size of the triangulation is \f$ O(n) \f$, which gives
-a complexity of \f$ O(n^{1+\frac{1}{d}}) \f$ for the insertion.
-With spatial sort and random points, one can expect a complexity of \f$ O(n\log n) \f$.
+\cgalBigO{n^{\frac{1}{d}}} and the size of the triangulation is \cgalBigO{n}, which gives
+a complexity of \cgalBigO{n^{1+\frac{1}{d}}} for the insertion.
+With spatial sort and random points, one can expect a complexity of \cgalBigO{n\log n}.
 Please refer to \cgalCite{boissonnat2009Delaunay} for more details.

 We provide below (\cgalFigureRef{Triangulationfigbenchmarks100},
@@ -54,20 +54,20 @@ All the types defined in `Triangulation_2<Traits,Tds>` are inherited.
 \cgalHeading{Implementation}

 Insertion is implemented by inserting in the triangulation, then
-performing a sequence of Delaunay flips. The number of flips is \f$ O(d)\f$
+performing a sequence of Delaunay flips. The number of flips is \cgalBigO{d}
 if the new vertex is of degree \f$ d\f$ in the new triangulation. For
-points distributed uniformly at random, insertion takes time \f$ O(1)\f$ on
+points distributed uniformly at random, insertion takes time \cgalBigO{1} on
 average.

 Removal calls the removal in the triangulation and then re-triangulates
 the hole in such a way that the Delaunay criterion is satisfied. Removal of a
-vertex of degree \f$ d\f$ takes time \f$ O(d^2)\f$.
-The degree \f$ d\f$ is \f$ O(1)\f$ for a random
+vertex of degree \f$ d\f$ takes time \cgalBigO{d^2}.
+The degree \f$ d\f$ is \cgalBigO{1} for a random
 vertex in the triangulation.

 After a point location step, the nearest neighbor
-is found in time \f$ O(n)\f$ in the
-worst case, but in time \f$ O(1)\f$
+is found in time \cgalBigO{n} in the
+worst case, but in time \cgalBigO{1}
 for vertices distributed uniformly at random and any query point.

 \sa `CGAL::Triangulation_2<Traits,Tds>`
@@ -137,20 +137,20 @@ for faces of maximal dimension instead of faces.
 Locate is implemented by a line walk from a vertex of the face given
 as optional parameter (or from a finite vertex of
 `infinite_face()` if no optional parameter is given). It takes
-time \f$ O(n)\f$ in the worst case, but only \f$ O(\sqrt{n})\f$
+time \cgalBigO{n} in the worst case, but only \cgalBigO{\sqrt{n}}
 on average if the vertices are distributed uniformly at random.

 Insertion of a point is done by locating a face that contains the
 point, and then splitting this face.
 If the point falls outside the convex hull, the triangulation
 is restored by flips. Apart from the location, insertion takes a time
-time \f$ O(1)\f$. This bound is only an amortized bound
+time \cgalBigO{1}. This bound is only an amortized bound
 for points located outside the convex hull.

 Removal of a vertex is done by removing all adjacent triangles, and
-re-triangulating the hole. Removal takes time \f$ O(d^2)\f$ in the worst
+re-triangulating the hole. Removal takes time \cgalBigO{d^2} in the worst
 case, if \f$ d\f$ is the degree of the removed vertex,
-which is \f$ O(1)\f$ for a random vertex.
+which is \cgalBigO{1} for a random vertex.

 The face, edge, and vertex iterators on finite features
 are derived from their counterparts visiting all (finite and infinite)
@@ -412,7 +412,7 @@ The walk begins at a vertex of the face which
 is given
 as an optional argument or at an arbitrary vertex of the triangulation
 if no optional argument is given. It takes
-time \f$ O(n)\f$ in the worst case for Delaunay Triangulations, but only \f$ O(\sqrt{n})\f$
+time \cgalBigO{n} in the worst case for Delaunay Triangulations, but only \cgalBigO{\sqrt{n}}
 on average if the vertices are distributed uniformly at random.
 The class `Triangulation_hierarchy_2<Traits,Tds>`,
 described in section \ref Section_2D_Triangulations_Hierarchy,

@@ -423,14 +423,14 @@ Insertion of a point is done by locating a face that contains the
 point, and splitting this face into three new faces.
 If the point falls outside the convex hull, the triangulation
 is restored by flips. Apart from the location, insertion takes a
-time \f$ O(1)\f$. This bound is only an amortized bound
+time \cgalBigO{1}. This bound is only an amortized bound
 for points located outside the convex hull.

 Removal of a vertex is done by removing all adjacent triangles, and
 re-triangulating the hole. Removal takes a time at most proportional to
 \f$ d^2\f$, where
 \f$ d\f$ is the degree of the removed vertex,
-which is \f$ O(1)\f$ for a random vertex.
+which is \cgalBigO{1} for a random vertex.

 Displacement of a vertex is done by: first, verifying if the triangulation embedding
 remains planar after the displacement; if yes the vertex is directly placed at the new location; otherwise, a point is inserted at the new location

@@ -592,18 +592,18 @@ The insertion of a new point in the Delaunay triangulation
 is performed using first the insertion member function
 of the basic triangulation and second
 performing a sequence of flips to restore the Delaunay property.
-The number of flips that have to be performed is \f$ O(d)\f$
+The number of flips that have to be performed is \cgalBigO{d}
 if the new vertex has degree \f$ d\f$ in the updated
 Delaunay triangulation. For
 points distributed uniformly at random,
-each insertion takes time \f$ O(1)\f$ on
+each insertion takes time \cgalBigO{1} on
 average, once the point has been located in the triangulation.

 Removal calls the removal in the triangulation and then re-triangulates
 the hole created in such a way that the Delaunay criterion is
 satisfied. Removal of a
-vertex of degree \f$ d\f$ takes time \f$ O(d^2)\f$.
-The degree \f$ d\f$ is \f$ O(1)\f$ for a random
+vertex of degree \f$ d\f$ takes time \cgalBigO{d^2}.
+The degree \f$ d\f$ is \cgalBigO{1} for a random
 vertex in the triangulation.
 When the degree of the removed vertex is small (\f$ \leq7\f$) a special
 procedure is used that allows to decrease global removal time by a factor of 2

@@ -611,14 +611,14 @@ for random points \cgalCite{d-vrtdd-09}.

 The displacement of a vertex \f$ v\f$ at a point \f$ p\f$ to a new location \f$ p'\f$, first checks whether the triangulation embedding remains
 planar or not after moving \f$ v\f$ to \f$ p'\f$. If yes, it moves \f$ v\f$ to \f$ p'\f$ and simply performs a sequence of flips
-to restore the Delaunay property, which is \f$ O(d)\f$ where \f$ d\f$ is the degree of the vertex after the displacement.
+to restore the Delaunay property, which is \cgalBigO{d} where \f$ d\f$ is the degree of the vertex after the displacement.
 Otherwise, the displacement is done by inserting a vertex at the new location,
 and removing the obsolete vertex.
-The complexity is \f$ O(n)\f$ in the worst case, but only \f$ O(1 + \delta \sqrt{n})\f$ for evenly distributed vertices in the unit square, where \f$ \delta\f$ is the Euclidean distance between the new and old locations.
+The complexity is \cgalBigO{n} in the worst case, but only \cgalBigO{1 + \delta \sqrt{n}} for evenly distributed vertices in the unit square, where \f$ \delta\f$ is the Euclidean distance between the new and old locations.

 After having performed a point location, the
-nearest neighbor of a point is found in time \f$ O(n)\f$ in the
-worst case, but in time \f$ O(1)\f$
+nearest neighbor of a point is found in time \cgalBigO{n} in the
+worst case, but in time \cgalBigO{1}
 for vertices distributed uniformly at random and any query point.

 \subsection Subsection_2D_Triangulations_Delaunay_Terrain Example: a Delaunay Terrain
Some files were not shown because too many files have changed in this diff.
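All of the replacements in this diff rely on a Doxygen command taking one argument. As a sketch only of how such an alias can be declared in a Doxyfile (hypothetical definitions for illustration; the actual `cgalBigO`/`cgalBigOLarge` definitions live in CGAL's documentation configuration and may differ):

```
# Hypothetical Doxyfile ALIASES entries; '\1' is the macro argument.
ALIASES += "cgalBigO{1}=\f$O(\1)\f$"
# Variant using \left/\right for arguments needing taller round brackets.
ALIASES += "cgalBigOLarge{1}=\f$O\left(\1\right)\f$"
```

Centralizing the notation this way means a later change (say, to \f$\Theta\f$-style brackets or different spacing) touches one configuration line instead of every docstring.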