mirror of https://github.com/CGAL/cgal
Revert "Replace tex-style quotes with quotes that you would expect, as there"
This reverts commit e65a8028ea.
Conflicts:
Approximate_min_ellipsoid_d/doc_tex/Bounding_volumes_ref/Approximate_min_ellipsoid_d.tex
Approximate_min_ellipsoid_d/documentation/mel.tex
Documentation/doxyassist.xml
Installation/doc_tex/Installation/usage.tex
Min_sphere_of_spheres_d/doc_tex/Bounding_volumes_ref/MinSphereOfSpheresTraits.tex
Optimisation_doc/doc_tex/Bounding_volumes/user_part.tex
Optimisation_doc/doc_tex/Inscribed_areas/user_part.tex
Width_3/doc_tex/Polytope_distance_d_ref/Width_3.tex
This commit is contained in:
Parent: 7cd0e93fa9
Commit: 6cc7d66415
@@ -119,7 +119,7 @@ The experiments described above are neither exhaustive nor conclusive as we have
|
|||
|
||||
\item Primitives: Although the number of input primitives plays an obvious role in the final performance, their distribution in space is at least equally important in order to obtain a well-balanced AABB tree. Ideally the primitives must be evenly distributed in space and the long primitives spanning the bounding box of the tree root node must be avoided as much as possible. It is often beneficial to split these long primitives into smaller ones before constructing the tree, e.g., through recursive longest edge bisection for triangle surface meshes.
|
||||
|
||||
\item Function: The type of function queried plays another important role. Obviously the "exhaustive" functions, which list all intersections, are slower than the ones stopping after the first intersection. Within each of these functions the ones which call only intersection tests (do\_intersect(), number\_of\_intersected\_primitives(), any\_intersected\_primitive(), all\_intersected\_primitives()) are faster than the ones which explicitly construct the intersections (any\_intersection() and all\_intersections()).
|
||||
\item Function: The type of function queried plays another important role. Obviously the ``exhaustive'' functions, which list all intersections, are slower than the ones stopping after the first intersection. Within each of these functions the ones which call only intersection tests (do\_intersect(), number\_of\_intersected\_primitives(), any\_intersected\_primitive(), all\_intersected\_primitives()) are faster than the ones which explicitly construct the intersections (any\_intersection() and all\_intersections()).
|
||||
|
||||
\item Query: The type of query (e.g., line, ray, segment or plane used above) plays another role, strongly correlated with the type of function (exhaustive or not, and whether or not it constructs the intersections). When all intersection constructions are needed, the final execution times highly depend on the complexity of the general intersection object. For example a plane query generally intersects a surface triangle mesh into many segments while a segment query generally intersects a surface triangle mesh into few points. Finally, the location of the query in space also plays an obvious role in the performances, especially for the distance queries. Assuming the internal KD-tree constructed through the function \ccc{tree.accelerate_distance_queries()}, it is preferable to specify a query point already close to the surface triangle mesh so that the query traverses only few AABBs of the tree. For a large number of primitive data (greater than 2M faces in our experiments) however we noticed that it is not necessary (and sometimes even slower) to use all reference points when constructing the KD-tree. In these cases we recommend to specify trough the function \ccc{tree.accelerate_distance_queries(begin,end)} fewer reference points (typically not more than 100K) evenly distributed over the input primitives.
@@ -3,10 +3,10 @@
|
|||
\textbf{Submission - Monique - with Sylvain's help...}
|
||||
|
||||
\begin{ccAdvanced}
|
||||
As in Curved-kernel, I use the "Advanced" environment in this
|
||||
As in Curved-kernel, I use the ``Advanced'' environment in this
|
||||
document to distinguish between my current submission to the \cgal\
|
||||
editorial board and plans for the future, related to ACS. The
|
||||
"Advanced" parts will disappear if/when this is released.
|
||||
``Advanced'' parts will disappear if/when this is released.
|
||||
\end{ccAdvanced}
|
||||
|
||||
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
@@ -11,7 +11,7 @@
|
|||
|
||||
|
||||
Assume we are given a set $S$ of points in 2D or 3D and we'd like to
|
||||
have something like "the shape formed by these points." This is
|
||||
have something like ``the shape formed by these points.'' This is
|
||||
quite a vague notion and there are probably many possible
|
||||
interpretations, the $\alpha$-shape being one of them. Alpha shapes
|
||||
can be used for shape reconstruction from a dense unorganized set of
@@ -21,13 +21,13 @@ which is a linear approximation of the original shape \cite{bb-srmua-97t}.
|
|||
As mentioned in Edelsbrunner's and M\"ucke's paper \cite{em-tdas-94},
|
||||
one can intuitively think of an $\alpha$-shape as the
|
||||
following. Imagine a huge mass of ice-cream making up the space $\R^3$
|
||||
and containing the points as "hard" chocolate pieces. Using one of
|
||||
and containing the points as ``hard'' chocolate pieces. Using one of
|
||||
these sphere-formed ice-cream spoons we carve out all parts of the
|
||||
ice-cream block we can reach without bumping into chocolate pieces,
|
||||
thereby even carving out holes in the inside (e.g. parts not reachable
|
||||
by simply moving the spoon from the outside). We will eventually end
|
||||
up with a (not necessarily convex) object bounded by caps, arcs and
|
||||
points. If we now straighten all "round" faces to triangles and line
|
||||
points. If we now straighten all ``round'' faces to triangles and line
|
||||
segments, we have an intuitive description of what is called the
|
||||
$\alpha$-shape of $S$. Here's an example for this process in 2D (where
|
||||
our ice-cream spoon is simply a circle):
@@ -42,7 +42,7 @@ it's way too large. So we will never spoon up ice-cream lying in the
|
|||
inside of the convex hull of $S$, and hence the $\alpha$-shape for
|
||||
$\alpha \rightarrow \infty$ is the convex hull of $S$.\footnote{ice cream, ice cream!!!
|
||||
The wording of this introductory paragraphs is borrowed from Kaspar Fischer's
|
||||
" Introduction to Alpha Shapes" which can be found at
|
||||
`` Introduction to Alpha Shapes'' which can be found at
|
||||
http://people.inf.ethz.ch/fischerk/pubs/as.pdf.
|
||||
The picture has been taken from Walter Luh's homepage at
|
||||
http://www.stanford.edu/\~wluh/cs448b/alphashapes.html.}
@@ -10,7 +10,7 @@
|
|||
\end{ccHtmlOnly}
|
||||
|
||||
Assume we are given a set $S$ of points in 2D or 3D and we'd like to
|
||||
have something like "the shape formed by these points." This is
|
||||
have something like ``the shape formed by these points.'' This is
|
||||
quite a vague notion and there are probably many possible
|
||||
interpretations, the alpha shape being one of them. Alpha shapes
|
||||
can be used for shape reconstruction from a dense unorganized set of
@@ -20,13 +20,13 @@ which is a linear approximation of the original shape \cite{bb-srmua-97t}.
|
|||
As mentioned in Edelsbrunner's and M\"ucke's paper \cite{em-tdas-94},
|
||||
one can intuitively think of an alpha shape as the
|
||||
following. Imagine a huge mass of ice-cream making up the space $\R^3$
|
||||
and containing the points as "hard" chocolate pieces. Using one of
|
||||
and containing the points as ``hard'' chocolate pieces. Using one of
|
||||
those sphere-formed ice-cream spoons we carve out all parts of the
|
||||
ice-cream block we can reach without bumping into chocolate pieces,
|
||||
thereby even carving out holes in the inside (e.g. parts not reachable
|
||||
by simply moving the spoon from the outside). We will eventually end
|
||||
up with a (not necessarily convex) object bounded by caps, arcs and
|
||||
points. If we now straighten all "round" faces to triangles and line
|
||||
points. If we now straighten all ``round'' faces to triangles and line
|
||||
segments, we have an intuitive description of what is called the
|
||||
alpha shape of $S$. Here's an example for this process in 2D (where
|
||||
our ice-cream spoon is simply a circle):
@@ -43,7 +43,7 @@ it's way too large. So we will never spoon up ice-cream lying in the
|
|||
inside of the convex hull of $S$, and hence the alpha shape for
|
||||
$\alpha \rightarrow \infty$ is the convex hull of $S$.\footnote{ice cream, ice cream!!!
|
||||
The wording of this introductory paragraphs is borrowed from Kaspar Fischer's
|
||||
" Introduction to Alpha Shapes" which can be found at
|
||||
`` Introduction to Alpha Shapes'' which can be found at
|
||||
http://people.inf.ethz.ch/fischerk/pubs/as.pdf.
|
||||
The picture has been taken from Walter Luh's homepage at
|
||||
http://www.stanford.edu/\~wluh/cs448b/alphashapes.html.}
@@ -73,7 +73,7 @@ of the Delaunay triangulation.
|
|||
For a given value of $\alpha$, the alpha complex includes
|
||||
all the simplices in the Delaunay triangulation which have
|
||||
an empty circumscribing sphere with squared radius equal or smaller than $\alpha$.
|
||||
Here "empty" means that the open sphere
|
||||
Here ``empty'' means that the open sphere
|
||||
do not include any points of $S$.
|
||||
The alpha shape is then simply the domain covered by the simplices
|
||||
of the alpha complex (see \cite{em-tdas-94}).
@@ -68,7 +68,7 @@ of the Delaunay triangulation.
|
|||
For a given value of $\alpha$, the alpha complex includes
|
||||
all the simplices in the Delaunay triangulation which have
|
||||
an empty circumscribing sphere with squared radius equal or smaller than $\alpha$.
|
||||
Here "empty" means that the open sphere
|
||||
Here ``empty'' means that the open sphere
|
||||
do not include any points of $S$.
|
||||
The alpha shape is then simply the domain covered by the simplices
|
||||
of the alpha complex (see \cite{em-tdas-94}).
@@ -432,7 +432,7 @@ vertex (we distinguish between a vertex that corresponds to the left
|
|||
endpoint of the inserted curve and a vertex corresponding to its right
|
||||
endpoint), we have to create a new vertex that corresponds to the other
|
||||
endpoint of the curve and to connect the two vertices by a pair of
|
||||
twin halfedges that form an "antenna" emanating from the boundary
|
||||
twin halfedges that form an ``antenna'' emanating from the boundary
|
||||
of an existing connected component (note that if the existing vertex
|
||||
used to be isolated, this operation is actually equivalent to forming
|
||||
a new hole inside the face that contains this vertex).
@@ -218,7 +218,7 @@ zone computation terminates when an intersection with an arrangement's
|
|||
edge/vertex is found or when the right endpoint is reached.
|
||||
A given point-location object is used for locating the left endpoint
|
||||
of the given curve in the existing arrangement. By default, the function
|
||||
uses the "walk along line" point-location strategy --- namely an
|
||||
uses the ``walk along line'' point-location strategy --- namely an
|
||||
instance of the class
|
||||
\ccc{Arr_walk_along_line_point_location}.
|
||||
If the given curve is $x$-monotone then the traits
@@ -233,7 +233,7 @@ intersects in the order that they are
|
|||
discovered when traversing the $x$-monotone curve from left to right.
|
||||
The function uses a given point-location object to locate the left
|
||||
endpoint of the given $x$-monotone curve. By default, the function
|
||||
uses the "walk along line" point-location strategy.
|
||||
uses the ``walk along line'' point-location strategy.
|
||||
The function requires that the traits class will model the
|
||||
\ccc{ArrangementXMonotoneTraits_2} concept.
@@ -39,7 +39,7 @@ topological structure. Most notifier functions belong to this
|
|||
category. The relevant local changes include:
|
||||
\begin{itemize}
|
||||
\item A new vertex is constructed and associated with a point.
|
||||
\item An edge\footnote{The term "edge" refers here to a pair of twin
|
||||
\item An edge\footnote{The term ``edge'' refers here to a pair of twin
|
||||
half-edges.} is constructed and associated with an $x$-monotone
|
||||
curve.
|
||||
\item An edge is split into two edges.
@@ -111,7 +111,7 @@ arrangement and moves downward toward the query point until
|
|||
locating the arrangement cell containing it.
|
||||
%
|
||||
\item \ccc{Arr_landmarks_point_location<Arrangement,Generator>}
|
||||
uses a set of "landmark" points whose location in the
|
||||
uses a set of ``landmark'' points whose location in the
|
||||
arrangement is known. Given a query point, it uses a \kdtree\ to
|
||||
find the nearest landmark and then traverses the straight line
|
||||
segment connecting this landmark to the query point.
@@ -142,7 +142,7 @@ pointer to an arrangement object and operate directly on it.
|
|||
Attaching such point-location objects to an existing arrangement
|
||||
has virtually no running-time cost at all, but the query time is
|
||||
linear in the size of the arrangement (the performance of the
|
||||
"walk" strategy is much better in practice, but its worst-case
|
||||
``walk'' strategy is much better in practice, but its worst-case
|
||||
performance is linear). Using these strategies is therefore
|
||||
recommended only when a relatively small number of point-location
|
||||
queries are issued by the application, or when the arrangement is
@@ -438,7 +438,7 @@ segment-traits class. This kernel use interval arithmetic to filter the
|
|||
exact computations. The program reads a set of line segments with integer
|
||||
coordinates from a file and computes their arrangement. By default it
|
||||
opens the \ccc{fan_grids.dat} input-file, located in the examples folder,
|
||||
which contains $104$ line segments that form four "fan-like" grids and
|
||||
which contains $104$ line segments that form four ``fan-like'' grids and
|
||||
induce a dense arrangement, as illustrated in
|
||||
Figure~\ref{arr_fig:predef_kernels}(a):
@@ -953,9 +953,9 @@ The following example demonstrates the construction of an
|
|||
arrangement of six rational arcs---four unbounded arcs and two
|
||||
bounded ones---as depicted in Figure~\ref{arr_fig:ex_unb_rat}. Note
|
||||
the usage of the constructors of an entire rational function and of
|
||||
an infinite "ray" of such a function. Also observe that the hyperbolas
|
||||
an infinite ``ray'' of such a function. Also observe that the hyperbolas
|
||||
$y = \pm\frac{1}{x}$ and $y = \pm\frac{1}{2x}$ never intersect, although
|
||||
they have common vertical and horizontal asymptotes, so very "thin"
|
||||
they have common vertical and horizontal asymptotes, so very ``thin''
|
||||
unbounded faces are created between them:
|
||||
|
||||
\ccIncludeExampleCode{Arrangement_on_surface_2/unbounded_rational_functions.cpp}
@@ -1043,7 +1043,7 @@ of a polynomial $f(x,y)$ in two variables. The curve is uniquely defined
|
|||
by $f$ (although several polynomials might define the same curve).
|
||||
We call $f$ a \emph{defining polynomial} of $C$.
|
||||
|
||||
% When talking about algebraic curves, we use the term "segment" for a
|
||||
% When talking about algebraic curves, we use the term ``segment'' for a
|
||||
% closed continuous subset of an algebraic curve
|
||||
% such that each interior point can be parameterized uniquely, as a function in
|
||||
% $x$ or $y$. In other words, there is no self-intersection in the interior
@@ -1054,7 +1054,7 @@ or by (weakly) $x$-monotone segments for algebraic curves
|
|||
(Such a segment is not necessarily the maximal possible
|
||||
(weakly) x-monotone segment; see below.)
|
||||
When talking about algebraic curves,
|
||||
we use the term "segment" for a continuous, possibly non-linear subset
|
||||
we use the term ``segment'' for a continuous, possibly non-linear subset
|
||||
of an algebraic curve~-- see the definition below.
|
||||
There are no restrictions on the algebraic curve, that means,
|
||||
we support unbounded curves, vertical curves or segments, and isolated points.
@@ -151,7 +151,7 @@ Halfedges are drawn as thin arrows. The vertices $v_1, \ldots, v_8$
|
|||
lie at infinity, and are not associated with valid points. The
|
||||
halfedges that connect them are fictitious, and are not associated
|
||||
with concrete curves. The face denoted $f_0$ (lightly shaded)
|
||||
is the fictitious "unbounded face" which lies outside the bounding
|
||||
is the fictitious ``unbounded face'' which lies outside the bounding
|
||||
rectangle (dashed) that bounds the actual arrangement. The four
|
||||
fictitious vertices $v_{\rm bl}, v_{\rm tl}, v_{\rm br}$ and
|
||||
$v_{\rm tr}$ represent the four corners of the bounding
@@ -211,7 +211,7 @@ bounding bounding rectangle:
|
|||
general the curve end also goes to $y = \pm\infty$ (see for instance
|
||||
the vertices $v_1$, $v_3$, $v_6$ and $v_8$ in
|
||||
Figure~\ref{arr_fig:unb_dcel}). For our convenience, we will always
|
||||
take a "tall" enough bounding rectangle and treat such vertices as
|
||||
take a ``tall'' enough bounding rectangle and treat such vertices as
|
||||
lying on either the left or right rectangle edges (that is, if a curve
|
||||
is defined at $x = -\infty$, its left end will be represented by
|
||||
a vertex on the left edge of the bounding rectangle, and if it is
@@ -12,11 +12,11 @@
|
|||
\label{arr_ref:lm_pl}
|
||||
|
||||
The \ccRefName\ class implements a Jump \& Walk algorithm, where special
|
||||
points, referred to as "landmarks", are chosen in a preprocessing stage,
|
||||
points, referred to as ``landmarks'', are chosen in a preprocessing stage,
|
||||
their place in the arrangement is found, and they are inserted into a
|
||||
data-structure that enables efficient nearest-neighbor search (a
|
||||
{\sc Kd}-tree). Given a query point, the nearest landmark is located and a
|
||||
"walk" strategy is applied from the landmark to the query point.
|
||||
``walk'' strategy is applied from the landmark to the query point.
|
||||
|
||||
There are various strategies to select the landmark set in the
|
||||
arrangement, where the strategy is determined by the
@@ -44,7 +44,7 @@ using \ccc{Make_x_monotone_2}.
|
|||
%does not have any real roots in this interval (thus the arc does not
|
||||
%contain any vertical asymptotes). Our traits class is also capable of
|
||||
%representing functions defined over an unbounded $x$-range, namely
|
||||
%a "ray" defined over $(-\infty, x_{\rm max}]$ or over $[x_{\rm min}, \infty)$,
|
||||
%a ``ray'' defined over $(-\infty, x_{\rm max}]$ or over $[x_{\rm min}, \infty)$,
|
||||
%or a function defined over the entire real $x$-range. Note that a
|
||||
%rational arc may be unbounded even if it is defined over some bounded interval.
|
||||
%In these cases $Q$ has zeros in this interval. That is, the user is able to construct
@@ -17,7 +17,7 @@ A model of the \ccRefName\ concept can be attached to an \ccc{Arrangement_2}
|
|||
instance and answer vertical ray-shooting queries on this arrangement.
|
||||
Namely, given a \ccc{Arrangement_2::Point_2} object, representing a point in
|
||||
the plane, it returns the arrangement feature (edge or vertex) that lies
|
||||
strictly above it (or below it). By "strictly" we mean that if the
|
||||
strictly above it (or below it). By ``strictly'' we mean that if the
|
||||
query point lies on an arrangement edge (or on an arrangement vertex) this
|
||||
edge will {\em not} be the query result, but the feature lying above or
|
||||
below it. (An exception to this rule is the degenerate situation where the
@@ -27,7 +27,7 @@ The walk-along-a-line point-location object (just like the na\"{i}ve one)
|
|||
does not use any auxiliary data structures. Thus, attaching it to an
|
||||
existing arrangement takes constant time, and any ongoing updates to
|
||||
this arrangement do not affect the point-location object.
|
||||
It is therefore recommended to use the "walk" point-location strategy
|
||||
It is therefore recommended to use the ``walk'' point-location strategy
|
||||
for arrangements that are constantly changing, especially if the number
|
||||
of issued queries is not large.
@@ -259,7 +259,7 @@ a mutable handle. For example, the result of a point-location query is
|
|||
a non-mutable handle for the arrangement cell containing the query point.
|
||||
Assume that the query point lies on a edge, so we obtain a
|
||||
\ccc{Halfedge_const_handle}; if we wish to use this handle and remove the
|
||||
edge, we first need to cast away its "constness".
|
||||
edge, we first need to cast away its ``constness''.
|
||||
|
||||
\ccMethod{Vertex_handle non_const_handle (Vertex_const_handle v);}
|
||||
{casts the given constant vertex handle to an equivalent mutable handle.}
@@ -18,7 +18,7 @@ edge/vertex is found or when the right endpoint is reached.
|
|||
|
||||
A given point-location object is used for locating the left endpoint
|
||||
of the given curve in the existing arrangement. By default, the function
|
||||
uses the "walk along line" point-location strategy --- namely an
|
||||
uses the ``walk along line'' point-location strategy --- namely an
|
||||
instance of the class
|
||||
\ccc{Arr_walk_along_line_point_location<Arrangement_2<Traits,Dcel> >}.
@@ -26,7 +26,7 @@ its left endpoint and computing its zone until reaching the right endpoint.
|
|||
|
||||
The given point-location object \ccc{pl} is used to locate the left
|
||||
endpoints of the $x$-monotone curves. By default, the function uses the
|
||||
"walk along line" point-location strategy --- namely an instance of
|
||||
``walk along line'' point-location strategy --- namely an instance of
|
||||
the class \ccc{Arr_walk_along_line_point_location<Arrangement_2<Traits,Dcel> >}.
|
||||
|
||||
\ccPrecond{If provided, \ccc{pl} must be attached to the given arrangement
@@ -74,7 +74,7 @@ computing its zone until reaching the right endpoint.
|
|||
|
||||
The given point-location object \ccc{pl} is used to locate the left
|
||||
endpoints of the $x$-monotone curves. By default, the function uses the
|
||||
"walk along line" point-location strategy --- namely an instance of
|
||||
``walk along line'' point-location strategy --- namely an instance of
|
||||
the class \ccc{Arr_walk_along_line_point_location<Arrangement_2<Traits,Dcel> >}.
|
||||
|
||||
\ccPrecond{If provided, \ccc{pl} is attached to the given arrangement
@@ -16,7 +16,7 @@ left to right).
|
|||
|
||||
A given point-location object is used for answering the two point-location
|
||||
queries on the given curve endpoints. By default, the function uses the
|
||||
"walk along line" point-location strategy --- namely, an instance of the
|
||||
``walk along line'' point-location strategy --- namely, an instance of the
|
||||
class \ccc{Arr_walk_along_line_point_location<Arrangement_2<Traits,Dcel> >}.
|
||||
|
||||
\ccInclude{CGAL/Arrangement_2.h}
@@ -10,7 +10,7 @@ point in the given arrangement. If the point conincides with an existing
|
|||
vertex, there is nothing left to do; if it lies on an edge, the edge is
|
||||
split at the point. Otherwise, the point is contained inside a face, and is
|
||||
inserted as an isolated vertex inside this face.
|
||||
By default, the function uses the "walk along line" point-location
|
||||
By default, the function uses the ``walk along line'' point-location
|
||||
strategy --- namely, an instance of the class
|
||||
\ccc{Arr_walk_along_line_point_location<Arrangement_2<Traits,Dcel> >}.
|
||||
In either case, the function returns a handle for the vertex associated
@@ -30,7 +30,7 @@ construct the overlaid \dcel{} that represents the resulting arrangement.
|
|||
Computes the overlay of two arrangements \ccc{arr1} and \ccc{arr2}, and sets
|
||||
the output arrangement \ccc{res} to represent the overlaid arrangement.
|
||||
\ccPrecond{\ccc{res} does not refer to either \ccc{arr1} or \ccc{arr2}
|
||||
(that is, "self overlay" is not supported).}
|
||||
(that is, ``self overlay'' is not supported).}
|
||||
|
||||
\ccGlobalFunction{template <class GeomTraitsA, class GeomTraitsB,
|
||||
class GeomTraitsRes, class TopTraitsA,
@@ -43,7 +43,7 @@ and sets the output arrangement \ccc{res} to represent the overlaid
|
|||
arrangement. It employs the default overlay-traits, which practically does
|
||||
nothing.
|
||||
\ccPrecond{\ccc{res} does not refer to either \ccc{arr1} or \ccc{arr2}
|
||||
(that is, "self overlay" is not supported).}
|
||||
(that is, ``self overlay'' is not supported).}
|
||||
|
||||
%%%%
@@ -62,7 +62,7 @@ Computes the overlay of two arrangements with history \ccc{arr1} and
|
|||
represent the overlaid arrangement. The function also constructs a
|
||||
consolidated set of curves that induce \ccc{res}.
|
||||
\ccPrecond{\ccc{res} does not refer to either \ccc{arr1} or \ccc{arr2}
|
||||
(that is, "self overlay" is not supported).}
|
||||
(that is, ``self overlay'' is not supported).}
|
||||
|
||||
\ccGlobalFunction{template<typename Traits, typename Dcel1, typename Dcel2,
|
||||
typename ResDcel>
@@ -75,7 +75,7 @@ represent the overlaid arrangement. The function also constructs a
|
|||
consolidated set of curves that induce \ccc{res}. It employs the default
|
||||
overlay-traits, which practically does nothing.
|
||||
\ccPrecond{\ccc{res} does not refer to either \ccc{arr1} or \ccc{arr2}
|
||||
(that is, "self overlay" is not supported).}
|
||||
(that is, ``self overlay'' is not supported).}
|
||||
|
||||
\ccRequirements
|
||||
\begin{itemize}
@@ -14,7 +14,7 @@ and physically decomposing the arrangement into pseudo-trapezoids. To do
|
|||
this, it is convenient to process the vertices in an ascending
|
||||
$xy$-lexicographic order. The visible objects are therefore returned through
|
||||
an output iterator, which pairs each finite arrangement vertex with the two
|
||||
features it "sees", such that the vertices are given in ascending
|
||||
features it ``sees'', such that the vertices are given in ascending
|
||||
$xy$-lexicographic order.
|
||||
|
||||
\ccInclude{CGAL/Arr_vertical_decomposition_2.h}
@@ -12,7 +12,7 @@ discovered when traversing the $x$-monotone curve from left to right.
|
|||
|
||||
A given point-location object is used for answering point-location queries
|
||||
during the insertion process. By default, the function uses the
|
||||
"walk along line" point-location strategy --- namely an instance of the
|
||||
``walk along line'' point-location strategy --- namely an instance of the
|
||||
class \ccc{Arr_walk_along_line_point_location<Arrangement_2<Traits,Dcel> >}.
|
||||
|
||||
%%%%
@@ -88,7 +88,7 @@ property map.
|
|||
|
||||
The data themselves may be stored in the vertex or edge, or they may
|
||||
be stored in an external data structure, or they may be computed on
|
||||
the fly. This is an "implementation detail" of the particular property map.
|
||||
the fly. This is an ``implementation detail'' of the particular property map.
|
||||
|
||||
\smallskip
|
||||
Property maps in the Boost manuals: \path|http://www.boost.org/libs/property_map/doc/property_map.html|
@@ -27,7 +27,7 @@ the edge weight is not well defined for infinite edges. For algorithms
|
|||
that make use of the edge weight the user must therefore
|
||||
define a \ccAnchor{http://www.boost.org/libs/graph/doc/filtered_graph.html}
|
||||
{\ccc{boost::filtered_graph}} or pass a property map to the
|
||||
algorithm that returns "infinity" for infinite edges.
|
||||
algorithm that returns ``infinity'' for infinite edges.
|
||||
|
||||
|
||||
Note also that when you derive from the class \ccc{CGAL::Triangulation_2}
@@ -82,7 +82,7 @@ numbers of degree~2, written by Olivier Devillers
|
|||
\cite{cgal:dfmt-amafe-00,cgal:dfmt-amafe-02}, and that are still used
|
||||
in the current implementation of \ccc{CGAL::Root_of_2}.
|
||||
|
||||
Some work was then done in the direction of a "kernel" for
|
||||
Some work was then done in the direction of a ``kernel'' for
|
||||
\cgal.\footnote{Monique Teillaud, First Prototype of a
|
||||
\cgal\ Geometric Kernel with Circular Arcs, Technical Report
|
||||
ECG-TR-182203-01, 2002\\Sylvain Pion and Monique Teillaud,
@@ -24,7 +24,7 @@
|
|||
\ccRefConceptPage{CircularKernel::ConstructCircularTargetVertex_2}
|
||||
|
||||
%\footnote{technical remark: the previous functors have a different name
|
||||
%"Circular" because the operators() don't have the same return type
|
||||
%``Circular'' because the operators() don't have the same return type
|
||||
%as the existing CGAL functors... it would be nice to find a way to avoid
|
||||
%this, but I don't know any technique for this.}
@@ -402,7 +402,7 @@ of the combinatorial map.
|
|||
% the set of \cells{i} form a partition of the set of darts D, i.e.
|
||||
%
|
||||
% Je le dis ici a titre d'exemple, c'est a dire je recommende
|
||||
% que tu fasse un passe pour obtenir plus de "phrases sans $..$"
|
||||
% que tu fasse un passe pour obtenir plus de ``phrases sans $..$''
|
||||
|
||||
A last important property of cells is that for all dimensions \emph{i} the
|
||||
set of \cells{i} forms a partition of the set of darts \emph{D}, i.e. for
@@ -35,7 +35,7 @@ by the first and last points in this sequence.
|
|||
\ccPrecond %\ccIndexSubitem[C]{ch_graham_andrew_scan}{preconditions}
|
||||
The range [\ccc{first},\ccc{beyond}) contains at least
|
||||
two different points.
|
||||
The points in [\ccc{first},\ccc{beyond}) are "sorted" with respect
|
||||
The points in [\ccc{first},\ccc{beyond}) are ``sorted'' with respect
|
||||
to $pq$, {\it i.e.}, the sequence of points in
|
||||
[\ccc{first},\ccc{beyond}) define a counterclockwise polygon,
|
||||
for which the Graham-Sklansky-procedure \cite{s-mcrm-72} works.}
@@ -81,7 +81,7 @@ attributes:
|
|||
\begin{itemize}
|
||||
\item \textbf{Expensive}\ccIndexSubitemDef{checks}{expensive}
|
||||
checks take considerable time to compute.
|
||||
"Considerable" is an imprecise phrase. Checks that add less than 10
|
||||
``Considerable'' is an imprecise phrase. Checks that add less than 10
|
||||
percent to the execution time of their routine are not expensive.
|
||||
Checks that can double the execution time are. Somewhere in between
|
||||
lies the border line.
@@ -59,13 +59,13 @@ there are \emph{convincing} reasons.
|
|||
\ccIndexSubitem{naming scheme}{abbreviations}
|
||||
({\em e.g.}, use
|
||||
\ccc{Triangulation} instead of \ccc{Tri}). The only exceptions
|
||||
might be standard geometric abbreviations (such as "CH" for "convex
|
||||
hull") and standard data structure abbreviations (such as "DS" for
|
||||
"data structure"). Unfortunately, the long names that result from
|
||||
might be standard geometric abbreviations (such as ``CH'' for ``convex
|
||||
hull'') and standard data structure abbreviations (such as ``DS'' for
|
||||
``data structure''). Unfortunately, the long names that result from
|
||||
the absence of abbreviations are known to cause problems with some
|
||||
compilers.\ccIndexMainItem{long-name problem}
|
||||
% See Section~\ref{sec:long_name_problem}
|
||||
% for further information about the so-called "long-name problem."
|
||||
% for further information about the so-called ``long-name problem.''
|
||||
\ccIndexSubitemBegin{naming scheme}{capitalization}
|
||||
\item Names of constants are uppercase ({\em e.g.}, \ccc{ORIGIN}).
|
||||
\ccModifierCrossRefOff
@@ -112,7 +112,7 @@ there are \emph{convincing} reasons.
|
|||
\ccIndexSubitemBegin{data structures}{naming}
|
||||
\begin{itemize}
|
||||
\item Names for geometric data structures and algorithms should follow
|
||||
the "spirit" of the rules given so far, \eg~a data structure for
|
||||
the ``spirit'' of the rules given so far, \eg~a data structure for
|
||||
triangulations in the plane is named \ccc{Triangulation_2}, and a
|
||||
convex hull algorithm in 3-space is named \ccc{convex_hull_3}.
|
||||
\item Member functions realizing predicates should start with \ccc{is_} or
@@ -186,7 +186,7 @@ Here are the naming rules:
|
|||
objects like \ccc{Has_on_bounded_side_2}, \ccc{Is_degenerate_2},
|
||||
and \ccc{Is_horizontal_2}. According to the current kernel we
|
||||
also have \ccc{Left_turn_2}. For reasons of consistency with
|
||||
\stl, all "less-than"-objects start with \ccc{Less_},
|
||||
\stl, all ``less-than''-objects start with \ccc{Less_},
|
||||
\eg,~\ccc{Less_xy_2}. Further examples are
|
||||
\ccc{Less_distance_to_point_2} and
|
||||
\ccc{Less_distance_to_line_2}. However, we have \ccc{Equal_2},
@@ -250,7 +250,7 @@ For example, the function that returns an instance of the
|
|||
\ccIndexSubitemBegin{source files}{naming scheme}
|
||||
|
||||
\begin{itemize}
|
||||
\item File names should be chosen in the "spirit" of the naming rules given
|
||||
\item File names should be chosen in the ``spirit'' of the naming rules given
|
||||
above.
|
||||
\item If a single geometric object, data structure, or algorithm is provided
|
||||
in a single file, its name (and its capitalization) should be used for
@@ -266,7 +266,7 @@ For example, the function that returns an instance of the
|
|||
rejected by the submission script.
|
||||
\item The names of files should not contain any characters not allowed by
|
||||
all the platforms the library supports. In particular, it should not
|
||||
contain the characters ':', '*', or '\ '.
|
||||
contain the characters `:', `*', or `\ '.
|
||||
\item Internal header files -- which are not documented to the user -- should
|
||||
have {\tt /internal/} as a directory higher up in their hierarchy.
|
||||
For example {\tt CGAL/internal/foo.h} or
@@ -146,7 +146,7 @@ class My_geo_object : public Handle
|
|||
\end{verbatim}
|
||||
|
||||
The class \ccc{My_geo_object} is responsible for allocating and constructing
|
||||
the \ccc{My_rep} object "on the heap".
|
||||
the \ccc{My_rep} object ``on the heap''.
|
||||
Typically, a constructor call is forwarded to a
|
||||
corresponding constructor of \ccc{My_rep}. The address of the new \ccc{My_rep} is assigned to \ccc{PTR} inherited from \ccc{Handle}, e.g.:
@@ -35,9 +35,9 @@ assume. Especially you may assume that the compiler
|
|||
\item supports member templates
|
||||
\item support for \texttt{std::iterator\_traits}.
|
||||
\end{itemize}
|
||||
Still, there are many bugs (sometimes known as "features") left in the
|
||||
Still, there are many bugs (sometimes known as ``features'') left in the
|
||||
compilers. Have a look at the list of (non-obsolete) workarounds in
|
||||
Section~\ref{sec:workaround_flags} to get an idea of which "features" are
|
||||
Section~\ref{sec:workaround_flags} to get an idea of which ``features'' are
|
||||
still present.
|
||||
|
||||
\ccIndexMainItemBegin{configuration}
@@ -167,11 +167,11 @@ operating system and compiler that is defined as follows.
|
|||
|
||||
\begin{description}
|
||||
\item[$<$arch$>$] is the system architecture as defined by ``{\tt
|
||||
uname -p}" or "\texttt{uname -m}",
|
||||
uname -p}'' or ``\texttt{uname -m}'',
|
||||
\item[$<$os$>$] is the operating system as defined by ``\texttt{uname
|
||||
-s}'',
|
||||
\item[$<$os-version$>$] is the operating system version as defined by
|
||||
"\texttt{uname -r}",
|
||||
``\texttt{uname -r}'',
|
||||
\item[$<$comp$>$] is the basename of the compiler executable (if it
|
||||
contains spaces, these are replaced by "-"), and
|
||||
\item[$<$comp-version$>$] is the compiler's version number (which
@@ -196,12 +196,12 @@ These test programs reside in the directory
|
|||
where \verb|$(CGAL_ROOT)| represents the installation directory for the library.
|
||||
The names of all testfiles, which correspond to the names of the flags,
|
||||
\ccIndexSubitem{workaround flags}{names}
|
||||
start with "\texttt{CGAL\_CFG\_}" followed by
|
||||
start with ``\texttt{CGAL\_CFG\_}'' followed by
|
||||
\begin{itemize}
|
||||
\item \textit{either} a description of a bug ending with
|
||||
"\texttt{\_BUG}"
|
||||
``\texttt{\_BUG}''
|
||||
\item \textit{or} a description of a feature starting with
|
||||
"\texttt{NO\_}".
|
||||
``\texttt{NO\_}''.
|
||||
\end{itemize}
|
||||
For any of these files a corresponding flag is set in the
|
||||
platform-specific configuration file, iff either compilation or execution
@@ -34,9 +34,9 @@ Nevertheless, the generic implementation of the kernel primitives that are
|
|||
parameterized by the arithmetic (more precisely, by a number type)
|
||||
assumes that the arithmetic plugged in does behave as real arithmetic.
|
||||
The generic code does not and should not (otherwise it would slow down
|
||||
"exact" number types) deal with any potential imprecision. There are
|
||||
a number of (third-party provided) "exact" number types available for use
|
||||
with \cgal, where "exact" means
|
||||
``exact'' number types) deal with any potential imprecision. There are
|
||||
a number of (third-party provided) ``exact'' number types available for use
|
||||
with \cgal, where ``exact'' means
|
||||
that all decisions (comparison operations) are correct and that the
|
||||
representation of the numbers allows for refinement to an arbitrary precision,
|
||||
if needed.
@@ -48,7 +48,7 @@ If roots of polynomials are needed, then the solution is to use
|
|||
|
||||
|
||||
% Most notably, \ccc{leda_real}s provide easy-to-use adaptive
|
||||
% "exact" arithmetic for the basic operations and $\sqrt[k]{\ }$ operations.
|
||||
% ``exact'' arithmetic for the basic operations and $\sqrt[k]{\ }$ operations.
|
||||
% \lcTex{
|
||||
% \begin{center}
|
||||
% \includegraphics[width=8cm]{Developers_manual/fig/use_real}
@@ -79,7 +79,7 @@ in the constructed objects was already part of the input. An example is
|
|||
computing the lexicographically smaller point for two given points.
|
||||
|
||||
\cgal\ provides generic implementations of geometric primitives. These assume
|
||||
"exact computation". This may or may not work, depending on the actual
|
||||
``exact computation''. This may or may not work, depending on the actual
|
||||
numerical input data. \cgal\ also provides\footnote{at present, for the
|
||||
dimension 2/3 Cartesian kernel(s) only.}
|
||||
% The homogeneous counterpart still needs revision.}
@@ -867,15 +867,15 @@ regarding the organization are provided:
|
|||
that the entire description was contained in the introduction.
|
||||
|
||||
\item The section describing software design should be labeled (you guessed
|
||||
it) "Software Design."
|
||||
it) ``Software Design.''
|
||||
|
||||
\item Example programs should have entries in the table of contents and
|
||||
the user should be able to figure out quite easily what this example
|
||||
illustrated from the table of contents. This means, examples should
|
||||
be in sections of their own and the sections should have descriptive
|
||||
names (\textit{i.e.}, "Example Constructing a Vanilla Cone" instead
|
||||
of just "Example", unless this is a subsection of a section
|
||||
entitled "Vanilla Cone").
|
||||
names (\textit{i.e.}, ``Example Constructing a Vanilla Cone'' instead
|
||||
of just ``Example'', unless this is a subsection of a section
|
||||
entitled ``Vanilla Cone'').
|
||||
|
||||
\item The examples should appear near the things of which they are
|
||||
examples. So for chapters describing more than one class (such as
@@ -1140,7 +1140,7 @@ The name of the concept is provided as the argument to this environment.
|
|||
Under the \verb|\ccDefinition| heading, the concept should be described
|
||||
followed by the set of required functions (one or more
|
||||
\ccc{operator()} methods). Under the heading \verb|\ccRefines|
|
||||
you should list concepts that this one "inherits" from and
|
||||
you should list concepts that this one ``inherits'' from and
|
||||
under \verb|\ccHasModels| list the classes that are models of this
|
||||
concept.
|
||||
\ccIndexSubitemEnd{reference manual}{function object concepts}
@@ -1367,7 +1367,7 @@ any problems.
|
|||
\subsection*{Problem --- Unresolved figure references in HTML}
|
||||
\ccIndexSubsubitemBegin{manuals}{HTML}{figure references}
|
||||
|
||||
%Figure references in the HTML manual appear as "[ref:fig:xxx]"
|
||||
%Figure references in the HTML manual appear as ``[ref:fig:xxx]''
|
||||
%instead of as a link to the figure.
|
||||
|
||||
\begin{description}
@@ -250,7 +250,7 @@ have to observe when you design a traits class yourself.
|
|||
\ccc{Less_xy_2}, there is no reason to require \ccc{Greater_xy_2},
|
||||
because the latter can be constructed from the former. In general,
|
||||
designing a good traits class requires a deep understanding of the
|
||||
algorithm it is made for. Finding the "right" set of geometric
|
||||
algorithm it is made for. Finding the ``right'' set of geometric
|
||||
primitives required by the algorithm can be a nontrivial task.
|
||||
However, spending effort on that task decreases the effort needed
|
||||
later to implement traits classes and increases the ease of use of
@@ -134,7 +134,7 @@ Note however, that these operations usually involve the projection of
|
|||
is in the $x$-range of $c$, and lies to its left when the curve is
|
||||
traversed from its $xy$-lexicographically smaller endpoint to its
|
||||
larger endpoint). We have the precondition that both surfaces are
|
||||
defined "above" $c$, and their relative $z$-order is the same for
|
||||
defined ``above'' $c$, and their relative $z$-order is the same for
|
||||
some small enough neighborhood of points above $c$.
|
||||
\end{itemize}}
@@ -150,7 +150,7 @@ Note however, that these operations usually involve the projection of
|
|||
is in the $x$-range of $c$, and lies to its right when the curve is
|
||||
traversed from its $xy$-lexicographically smaller endpoint to its
|
||||
larger endpoint). We have the precondition that both surfaces are
|
||||
defined "below" $c$, and their relative $z$-order is the same for
|
||||
defined ``below'' $c$, and their relative $z$-order is the same for
|
||||
some small enough neighborhood of points below $c$.
|
||||
\end{itemize}}
@@ -275,7 +275,7 @@ generated points have the same last coordinate $-5$.
|
|||
\ccIncludeExampleCode{Generator/grid_d.cpp}
|
||||
|
||||
The output of previous example corresponds to the points of this
|
||||
figure depicted in red or pink (pink points are "inside" the cube).
|
||||
figure depicted in red or pink (pink points are ``inside'' the cube).
|
||||
The output is:
|
||||
\begin{verbatim}
|
||||
Generating 20 grid points in 4D
@@ -49,7 +49,7 @@ The default traits class \ccc{Default_traits} is the kernel in which
|
|||
|
||||
\ccImplementation
|
||||
The implementation is based on the method of eliminating self-intersections in
|
||||
a polygon by using so-called "2-opt" moves. Such a move eliminates an
|
||||
a polygon by using so-called ``2-opt'' moves. Such a move eliminates an
|
||||
intersection between two edges by reversing the order of the vertices between
|
||||
the edges. No more than $O(n^3)$ such moves are required to simplify a polygon
|
||||
defined on $n$ points \cite{ls-utstp-82}.
@@ -215,7 +215,7 @@ In order to build the \cgal\ libraries, you need a \CC\ compiler.
|
|||
\section{Configuring \cgal\ with CMake\label{sec:configwithcmake}}
|
||||
|
||||
In order to configure, build, and install the \cgal\ libraries, examples and
|
||||
demos, you need \cmake, a cross-platform "makefile generator".
|
||||
demos, you need \cmake, a cross-platform ``makefile generator''.
|
||||
If \cmake\ is not installed already you can obtain it from \cmakepage.
|
||||
\cmake\ version~2.6.2 or higher is required. On Windows, \cmake{}
|
||||
version~2.8.6 or higher is required, for a proper support of DLL's
@@ -822,7 +822,7 @@ make install # install
|
|||
}
|
||||
|
||||
If you use a generator that produces IDE files (for Visual Studio for instance) there will be an optional
|
||||
\texttt{INSTALL} project, which you will be able to \emph{"build"} to execute the installation step.
|
||||
\texttt{INSTALL} project, which you will be able to \emph{``build''} to execute the installation step.
|
||||
|
||||
\begin{ccAdvanced}
@@ -79,16 +79,16 @@ configuration.
|
|||
behaviour is usually needed for (graphical) demos. \\
|
||||
If the parameter is not given, the script creates \textbf{one executable for each given
|
||||
source file}.
|
||||
\item [\texttt{-c com1:com2:...}] Lists components ("com1",
|
||||
"com2") of \cgal\ to which the executable(s) should be linked. Valid components are \cgal's
|
||||
libraries (i.e.~"Core", "ImageIO", "Qt3" and "Qt4"; note
|
||||
that it only make sense to either pick "Qt3" or "Qt4") and all
|
||||
preconfigured 3rd party software, such as "MPFI", "RS3",
|
||||
or "LAPACK"). An example is \texttt{-c Core:GMP:RS3:MPFI}
|
||||
\item [\texttt{-c com1:com2:...}] Lists components (``com1'',
|
||||
``com2'') of \cgal\ to which the executable(s) should be linked. Valid components are \cgal's
|
||||
libraries (i.e.~``Core'', ``ImageIO'', ``Qt3'' and ``Qt4''; note
|
||||
that it only make sense to either pick ``Qt3'' or ``Qt4'') and all
|
||||
preconfigured 3rd party software, such as ``MPFI'', ``RS3'',
|
||||
or ``LAPACK''). An example is \texttt{-c Core:GMP:RS3:MPFI}
|
||||
|
||||
\item [\texttt{-b boost1:boost2:...}] Lists components ("boost1",
|
||||
"boost2") of \boost\ to which the executable(s) should be
|
||||
linked. Valid options are, for instance, "filesystem" or "program\_options".
|
||||
\item [\texttt{-b boost1:boost2:...}] Lists components (``boost1'',
|
||||
``boost2'') of \boost\ to which the executable(s) should be
|
||||
linked. Valid options are, for instance, ``filesystem'' or ``program\_options''.
|
||||
|
||||
\end{description}
@@ -6,7 +6,7 @@ a set of data points $\mathcal{P}$, the natural neighbor coordinates
|
|||
associated to $\mathcal{P}$ are defined from the Voronoi diagram of
|
||||
$\mathcal{P}$. When simulating the insertion of a query point
|
||||
$\mathbf{x}$ into the Voronoi diagram of $\mathcal{P}$, the potential
|
||||
Voronoi cell of $\mathbf{x}$ "steals" some parts from the existing
|
||||
Voronoi cell of $\mathbf{x}$ ``steals'' some parts from the existing
|
||||
cells.
|
||||
|
||||
\begin{figure}[ht!]
@@ -55,7 +55,7 @@ for 3D triangles and efficient intersection tests for bounding boxes.
|
|||
\subsection{Acknowledgment}
|
||||
|
||||
This work was supported
|
||||
by the Graduiertenkolleg 'Algorithmische Diskrete Mathematik',
|
||||
by the Graduiertenkolleg `Algorithmische Diskrete Mathematik',
|
||||
under grant DFG We 1265/2-1,
|
||||
and by ESPRIT IV Long Term Research Projects No.~21957 (CGAL)
|
||||
and No.~28155 (GALIA).
@@ -53,7 +53,7 @@ representation. For example, points in 2D have a constructor with
|
|||
three arguments as well (the three homogeneous coordinates of the
|
||||
point). The common interfaces parameterized with a kernel class allow
|
||||
one to develop code independent of the chosen representation. We said
|
||||
"families" of models, because both families are parameterized too.
|
||||
``families'' of models, because both families are parameterized too.
|
||||
A user can choose the number type used to represent the coordinates.
|
||||
|
||||
For reasons that will become evident later, a kernel class provides
@@ -9,4 +9,4 @@ or orientation of polygons.
|
|||
For this purpose \cgal\ provides several projection traits classes,
|
||||
which are a model of traits class concepts of 2D triangulations,
|
||||
2D polygon and 2D convex hull traits classes. The projection traits classes
|
||||
are listed in the "{\em Is Model for the Concepts}" sections of the concepts.
|
||||
are listed in the ``{\em Is Model for the Concepts}'' sections of the concepts.
@@ -7,7 +7,7 @@ $y$ and $z$ axis of the coordinate system.
|
|||
Although they are represented in a canonical form by only two
|
||||
vertices, namely the lexicographically smallest and largest vertex
|
||||
with respect to Cartesian $xyz$ coordinates, we provide
|
||||
functions for "accessing" the other vertices as well.
|
||||
functions for ``accessing'' the other vertices as well.
|
||||
|
||||
Iso-oriented cuboids and bounding boxes are quite similar. The
|
||||
difference however is that bounding boxes have always double coordinates,
@@ -6,7 +6,7 @@ $y$ axis of the coordinate system.
|
|||
|
||||
Although they are represented in a canonical form by only two
|
||||
vertices, namely the lower left and the upper right vertex, we provide
|
||||
functions for "accessing" the other vertices as well. The vertices
|
||||
functions for ``accessing'' the other vertices as well. The vertices
|
||||
are returned in counterclockwise order.
|
||||
|
||||
Iso-oriented rectangles and bounding boxes are quite similar. The
@@ -51,7 +51,7 @@ example, points have a constructor with a range of coordinates plus a
|
|||
common denominator (the $d+1$ homogeneous coordinates of the point).
|
||||
The common interfaces parameterized with a representation class allow
|
||||
one to develop code independent of the chosen representation. We said
|
||||
"families" of models, because both families are parameterized too.
|
||||
``families'' of models, because both families are parameterized too.
|
||||
A user can choose the number type used to represent the coordinates
|
||||
and the linear algebra module used to calculate the result of
|
||||
predicates and constructions.
@@ -22,7 +22,7 @@ As with most kinetic data structures, \ccc{Kinetic::Sort<Traits,
|
|||
case a sorted doubly linked list), each element of which has a
|
||||
corresponding certificate in the event queue maintained by the
|
||||
simulator. In the case of sorting, there is one certificate maintained
|
||||
for each "edge" between two consecutive elements in the list.
|
||||
for each ``edge'' between two consecutive elements in the list.
|
||||
|
||||
On creation, the data structure is passed a copy of the
|
||||
\ccc{Kinetic::SimulationTraits} for this simulation, which it saves for
@@ -19,7 +19,7 @@
|
|||
\ccDefinition
|
||||
|
||||
This functor allows you to create certificate objects of some type.
|
||||
The models of this "concept" take some set of arguments which depend
|
||||
The models of this ``concept'' take some set of arguments which depend
|
||||
on the certificate being computed (for example three points for a two
|
||||
dimensional orientation) followed by either one or two instances of
|
||||
the \ccc{Kinetic::Simulator::Time} concept. The functions either
@@ -36,7 +36,7 @@ This class provides a base to use for implementing events. The base provides def
|
|||
|
||||
\ccMethod{void audit(Key this_key);}{Audit that this is a valid event.}
|
||||
|
||||
\ccMethod{std::ostream& write(std::ostream&) const;}{Write "Event base" to the stream.}
|
||||
\ccMethod{std::ostream& write(std::ostream&) const;}{Write ``Event base'' to the stream.}
|
||||
|
||||
%\ccMethod{void degenerate_events(Event_key this_event, Event_key other_event);}{This event and the event referenced by \ccc{k} belong to the same KDS and occur simultaneously. This function call gives the KDS a chance to handle }
@@ -550,7 +550,7 @@ adding another information to vertices. For that, we need to define
|
|||
our own items class. The difference with the
|
||||
\ccc{CGAL::Linear_cell_complex_min_items} class is about the definition of
|
||||
the vertex attribute where we use a \ccc{CGAL::Cell_attribute_with_point}
|
||||
with a non void info. In this example, the "vextex color" is just
|
||||
with a non void info. In this example, the ``vextex color'' is just
|
||||
given by an \ccc{int} (the second template parameter of the
|
||||
\ccc{CGAL::Cell_attribute_with_point}). Lastly, we define the
|
||||
\ccc{Average_functor} class in order to set the color of a vertex
@@ -2,7 +2,7 @@
|
|||
\begin{ccPkgDescription}{Introduction\label{Pkg:GeneralIntroduction}}
|
||||
\ccPkgHowToCiteCgal{cgal:eb-gi-12}
|
||||
\ccPkgSummary{
|
||||
This chapter explains how the manual is organized, presents a "Hello World"
|
||||
This chapter explains how the manual is organized, presents a ``Hello World''
|
||||
program, and gives recommendations for further readings.}
|
||||
%
|
||||
\ccPkgIntroducedInCGAL{1.0}
@@ -72,8 +72,8 @@ has the same size.
|
|||
\ccIncludeExampleCode{Convex_hull_2/array_convex_hull_2.cpp}
|
||||
|
||||
|
||||
All \cgal\ header files are in the subdirectory "include/CGAL". All \cgal\
|
||||
classes and functions are in the namespace "CGAL". The geometric
|
||||
All \cgal\ header files are in the subdirectory ``include/CGAL''. All \cgal\
|
||||
classes and functions are in the namespace ``CGAL''. The geometric
|
||||
primitives, like the point type, are defined in a kernel. \cgal\ comes
|
||||
with several kernels, and as the convex hull algorithm only makes
|
||||
comparisons of coordinates and orientation tests of input points,
@@ -6,7 +6,7 @@
|
|||
%% problems of computing the smallest enclosing sphere of a point set and
|
||||
%% the problem of computing the distance between two convex hulls seem to
|
||||
%% be quite different, they are both instances of a more general
|
||||
%% optimization problem, named "quadratic programming". To be more
|
||||
%% optimization problem, named ``quadratic programming''. To be more
|
||||
%% specific, \cgal's solutions for the two quadratic programming
|
||||
%% instances just mentioned are based on a general quadratic programming
|
||||
%% solver, which will be documented as a separate unit in future
@@ -124,7 +124,7 @@ the derived classes.
|
|||
|
||||
|
||||
%All these commented function should appear
|
||||
%under their impl form in the concept for "Mesher level"
|
||||
%under their impl form in the concept for ``Mesher level''
|
||||
|
||||
|
||||
%\ccMethod{ bool no_longer_element_to_refine();}
@@ -275,7 +275,7 @@ In the case of the Lloyd smoother,
|
|||
the interpolation is linear in each Voronoi cell of the set of mesh vertices.
|
||||
In the case of the odt-smoother, the interpolation is linear in each cell
|
||||
of the Delaunay triangulation of the mesh vertices,
|
||||
hence the name odt which is an abbreviation for "optimal Delaunay triangulation".
|
||||
hence the name odt which is an abbreviation for ``optimal Delaunay triangulation''.
|
||||
|
||||
|
||||
\begin{figure}[ht]
@@ -201,7 +201,7 @@ advanced operations allowing a direct manipulation of the triangulation.
|
|||
{Same as above with \ccc{f=(c,i)}.}
|
||||
|
||||
\ccMethod{void set_dimension(Vertex_handle v, int dimension);}
|
||||
{Sets the "dimension" of vertex \ccc{v}. The dimension is an integer attached to the vertex.
|
||||
{Sets the ``dimension'' of vertex \ccc{v}. The dimension is an integer attached to the vertex.
|
||||
When the concept \ccRefName\ is used for mesh generation this integer is used to store
|
||||
the dimension of the lowest dimensional face of the input complex including the vertex.}
|
||||
\ccGlue
@@ -21,7 +21,7 @@ are sorted by the angle they form with the $x$-axis. As the two
|
|||
input polygons are convex, their edges are already sorted by the
|
||||
angle they form with the $x$-axis. The Minkowski sum can therefore be
|
||||
computed in $O(m + n)$ time, by starting from two bottommost vertices
|
||||
in $P$ and in $Q$ and performing "merge sort" on the edges.
|
||||
in $P$ and in $Q$ and performing ``merge sort'' on the edges.
|
||||
|
||||
\begin{figure}[t]
|
||||
\begin{ccTexOnly}
|
||||
|
|
|
|||
|
|
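The hunk above describes the linear-time Minkowski sum of two convex polygons (a merge of the two edge sequences sorted by angle). As a hedged sketch only, the polygons and the kernel choice below are invented, and the library call does not expose the merge step itself; calling the 2D Minkowski sum on two small convex polygons could look like:

#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/Polygon_2.h>
#include <CGAL/Polygon_with_holes_2.h>
#include <CGAL/minkowski_sum_2.h>
#include <iostream>

typedef CGAL::Exact_predicates_exact_constructions_kernel K;
typedef K::Point_2 Point_2;
typedef CGAL::Polygon_2<K> Polygon_2;
typedef CGAL::Polygon_with_holes_2<K> Polygon_with_holes_2;

int main() {
  Polygon_2 P, Q;                             // two convex polygons, counterclockwise
  P.push_back(Point_2(0, 0)); P.push_back(Point_2(2, 0)); P.push_back(Point_2(1, 2));
  Q.push_back(Point_2(0, 0)); Q.push_back(Point_2(1, 0)); Q.push_back(Point_2(0, 1));
  Polygon_with_holes_2 sum = CGAL::minkowski_sum_2(P, Q);
  // for convex input the sum has no holes; report the size of its outer boundary
  std::cout << sum.outer_boundary().size() << " vertices on the sum" << std::endl;
  return 0;
}
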
@@ -117,7 +117,7 @@ Infimaximal here means that its geometric extend is always large
 enough (but finite for our intuition). Assume you approach the box
 with an affine point, then this point is always inside the box. The
 same holds for straight lines; they always intersect the box. There
-are more accurate notions of "large enough", but the previous
+are more accurate notions of ``large enough'', but the previous
 propositions are enough at this point. Due to the fact that the
 infimaximal box is included in the plane map, the vertices and edges
 are partitioned with respect to this box.

@@ -114,7 +114,7 @@ returns the regularized polyhedron (closure of interior).}
 returns \ccc{N} $\cap$ \ccc{N1}. }
 \ccMethod{Nef_polyhedron_2<T> join(const Nef_polyhedron_2<T>& N1) ;}{
-returns \ccc{N} $\cup$ \ccc{N1}. Note that "union" is a keyword of C++
+returns \ccc{N} $\cup$ \ccc{N1}. Note that ``union'' is a keyword of C++
 and cannot be used for this operation.}
 \ccMethod{Nef_polyhedron_2<T> difference(const Nef_polyhedron_2<T>& N1) ;}{

@@ -72,7 +72,7 @@ storage size, and many algorithms are simple. On the other side, this
 object class is not closed under boolean set operations, as many
 examples can illustrate, such as the Figure shown above that can be
 generated using boolean set operations on cubes. The vertices bounding
-the tunnel, or the edge connecting the "roof" with the cube are
+the tunnel, or the edge connecting the ``roof'' with the cube are
 non-manifold situations.
 In our implementation of Nef polyhedra in 3D, we offer a B-rep data

@@ -237,7 +237,7 @@ affected by this, he must take care to reset it to 'round to the nearest'
 before they are executed.
 % Note also that NaNs are not handled, so be careful with that
-% (especially if you 'divide by zero').
+% (especially if you `divide by zero').
 Notes:\\
 \begin{itemize}

@@ -74,12 +74,12 @@ The stream operations are available as well.
 They assume that corresponding stream operators for type \ccc{NT} exist.
 \ccFunction{std::ostream& operator<<(std::ostream& out, const Quotient<NT>& q);}
-{writes \ccc{q} to ostream \ccc{out} in format "{\tt n/d}", where
+{writes \ccc{q} to ostream \ccc{out} in format ``{\tt n/d}'', where
 {\tt n}$==$\ccc{q.numerator()} and {\tt d}$==$\ccc{q.denominator()}.}
 \ccFunction{std::istream& operator>>(std::istream& in, Quotient<NT>& q);}
 {reads \ccc{q} from istream \ccc{in}. Expected format is
-"{\tt n/d}", where {\tt n} and {\tt d} are of type \ccc{NT}.
+``{\tt n/d}'', where {\tt n} and {\tt d} are of type \ccc{NT}.
 A single {\tt n} which is not followed by a {\tt /}\ is also
 accepted and interpreted as {\tt n/1}.}

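The n/d stream format documented in this hunk is easy to try out. A small sketch, assuming the CGAL/Quotient.h header name and an int number type chosen only for brevity:

#include <CGAL/Quotient.h>
#include <iostream>
#include <sstream>

int main() {
  CGAL::Quotient<int> q(22, 7);
  std::cout << q << std::endl;            // written as "22/7"

  std::istringstream in("3/4 5");
  CGAL::Quotient<int> a, b;
  in >> a >> b;                           // a lone "5" is accepted and read as 5/1
  std::cout << a << " " << b << std::endl;
  return 0;
}
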
@@ -2,7 +2,7 @@
 % =============================================================================
 % The CGAL Reference Manual
 % Chapter: Geometric Optimisation
-% Content: reference pages of 'Optimisation_d_traits'
+% Content: reference pages of `Optimisation_d_traits'
 % -----------------------------------------------------------------------------
 % file : doc_tex/basic/Optimisation/Optimisation_ref/main_Optimisation_d.tex
 % package: Optimisation_basic

@@ -15,7 +15,7 @@
 \ccDefinition
 Function object that determines if a sequence of points represents a
-valid partition polygon or not, where "valid" can assume any of several
+valid partition polygon or not, where ``valid'' can assume any of several
 meanings ({\it e.g.}, convex or $y$-monotone).
 \ccIndexSubitem{polygon partitioning}{valid}

@@ -32,7 +32,7 @@ and $O(n)$ space for a polygon with $n$ vertices and guarantees nothing
 about the number of polygons produced with respect to the optimal number.
 Three functions are provided for producing
 convex partitions. Two of these functions produce approximately optimal
-partitions and one results in an optimal partition, where "optimal" is
+partitions and one results in an optimal partition, where ``optimal'' is
 defined in terms of the number of partition polygons. The two functions
 that implement approximation algorithms are guaranteed to produce no more
 than four times the optimal number of convex pieces. The optimal partitioning

@@ -195,9 +195,9 @@ of the hierarchical structure described in
 chapter~\ref{chapter-Triangulation3} to the periodic case.
 \section{Software Design\label{P3Triangulation3-sec-design}}
-We have chosen the prefix "Periodic\_3" to emphasize that the
+We have chosen the prefix ``Periodic\_3'' to emphasize that the
 triangulation is periodic in all three directions of space. There are
-also "cylindrical" periodicities where the triangulation is periodic
+also ``cylindrical'' periodicities where the triangulation is periodic
 only in one or two directions of space.
 The two main classes \ccc{Periodic_3_Delaunay_triangulation_3} and

@@ -7,7 +7,7 @@ The Boost Property Map Library also contains a few adaptors that convert commonl
 Free functions \ccc{get} and \ccc{put} allow getting and putting information through a property map.
 The data themselves may be stored in the element, or they may
 be stored in an external data structure, or they may be computed on
-the fly. This is an "implementation detail" of the particular property map.
+the fly. This is an ``implementation detail'' of the particular property map.
 \smallskip
 Property maps in the Boost manuals: \path|http://www.boost.org/libs/property_map/doc/property_map.html|

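The free functions get and put mentioned in this hunk can be demonstrated with a Boost property map over external storage; the sketch below (a std::map behind boost::associative_property_map, with an invented key) shows the two calls:

#include <boost/property_map/property_map.hpp>
#include <iostream>
#include <map>
#include <string>

int main() {
  std::map<std::string, int> storage;                       // the external data structure
  boost::associative_property_map< std::map<std::string, int> > pm(storage);

  put(pm, std::string("alice"), 30);                        // write through the property map
  std::cout << get(pm, std::string("alice")) << std::endl;  // read back: 30
  return 0;
}
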
@@ -184,7 +184,7 @@ example, given a halfedge handle \ccc{h} we can write \ccc{h->next()}
 to get a halfedge handle to the next halfedge, \ccc{h->opposite()} for
 the opposite halfedge, \ccc{h->vertex()} for the incident vertex at
 the tip of \ccc{h}, and so on. The output of the program will be
-"\verb|1 0 0\n0 1 0\n0 0 1\n0 0 0\n|".
+``\verb|1 0 0\n0 1 0\n0 0 1\n0 0 0\n|''.
 %\newpage
 \ccIncludeExampleCode{Polyhedron/polyhedron_prog_tetra.cpp}

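The halfedge calls named in this hunk (h->next(), h->opposite(), h->vertex()) can be exercised in a few lines. This is only a sketch in the spirit of the referenced tetrahedron example, not that file itself, and the kernel choice is arbitrary:

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Polyhedron_3.h>
#include <iostream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::Point_3 Point_3;
typedef CGAL::Polyhedron_3<K> Polyhedron;
typedef Polyhedron::Halfedge_handle Halfedge_handle;

int main() {
  Polyhedron P;
  Halfedge_handle h = P.make_tetrahedron(Point_3(1, 0, 0), Point_3(0, 1, 0),
                                         Point_3(0, 0, 1), Point_3(0, 0, 0));
  Halfedge_handle g = h;
  do {                                                   // walk once around the facet incident to h
    std::cout << g->vertex()->point() << std::endl;
    g = g->next();
  } while (g != h);
  std::cout << g->opposite()->vertex()->point() << std::endl;  // vertex at the other end of h
  return 0;
}
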
@@ -71,8 +71,8 @@ objective function value, etc.
 You can in particular get \emph{certificates} for the solution. In short,
 these are proofs that the output is correct. Thus, if you don't believe
-in the solution (whether it says "optimally solved", "infeasible",
-or "unbounded"), you can verify it yourself by using the certificates.
+in the solution (whether it says ``optimally solved'', ``infeasible'',
+or ``unbounded''), you can verify it yourself by using the certificates.
 Section \ref{sec:QP-certificates} says more about this.
 \subsection{Efficiency}

@@ -179,10 +179,10 @@ $D$ in the relevant case where $r$, the rank of $D$, is small.
 Nevertheless, the solver contains some runtime checks
 that may detect that the matrix $D$ is not positive-semidefinite. But
-you may as well get an "optimal solution" in this case, even with
+you may as well get an ``optimal solution'' in this case, even with
 valid certificates. The validity of these certificates, however,
 depends on $D$ being positive-semidefinite; if this is not the case, the
-certificates only prove that the solver has found a "critical point" of
+certificates only prove that the solver has found a ``critical point'' of
 your (nonconvex) program, but there are no guarantees whatsoever that
 this is a global optimum, or even a local optimum.

@@ -379,8 +379,8 @@ objective function:
 Figure \ref{fig:QP-first_lp} shows how this looks like. We will not
 visualize a linear objective function with contour lines but with
 arrows instead. The arrow represents the (direction) of the vector $-c$,
-and we are looking for a feasible solution that is "extreme" in the direction
-of the arrow. In our small example, this is the unique point "on" the
+and we are looking for a feasible solution that is ``extreme'' in the direction
+of the arrow. In our small example, this is the unique point ``on'' the
 two constraints $x_1+x_2\leq 7$ and $-x_1+x_2\leq 4$, the point
 $(10/3,11/3)$ marked with a black dot. The optimal objective function
 value is $-32(11/3)+64=-160/3$.

@@ -474,7 +474,7 @@ Finally, a dedicated model and function is available for nonnnegative linear
 programs as well. Let's take our linear program from above and remove
 the constraint $y\leq 4$ to obtain a nonnegative linear program. At
 the same time we remove the constant objective function term to get
-a "minimal" input and a "shortest" program; the optimal value is
+a ``minimal'' input and a ``shortest'' program; the optimal value is
 $-32(11/3)=-352/3$.
 \[

@@ -577,7 +577,7 @@ readable code.
 \section{Important Variables and Constraints}
 If you have a solution $\qpx^*$ of a linear or quadratic program,
-the "important" variables are typically the ones that are not on
+the ``important'' variables are typically the ones that are not on
 their bounds. In case of a nonnegative program, these are the nonzero
 variables. Going back to the example of the previous Section
 \ref{sec:QP-iterators}, we can easily interpret their

@@ -614,7 +614,7 @@ shows how these can be accessed, using the iterators
 \ccc{basic_constraint_indices_end()}.
 Again, we have a disagreement
-between "basic" and "important": it is guaranteed that all
+between ``basic'' and ``important'': it is guaranteed that all
 basic constraints are satisfied with equality at $\qpx^*$, but there
 might be non-basic constraints that are satisfied with equality
 as well.

@@ -676,7 +676,7 @@ of the certificates.
 Sometimes it is necessary to alter the default behavior of the solver.
 This can be done by passing a suitably prepared object of the class
 \ccc{Quadratic_program_options} to the solution functions. Most options
-concern "soft" issues like verbosity, but there are two notable case
+concern ``soft'' issues like verbosity, but there are two notable case
 where it is of critical importance to be able to change the defaults.
 \subsection{Exponent Overflow in Double Using Floating-Point Filters\label{sec:QP-customization-filtering}}

@@ -719,7 +719,7 @@ sequence of six iterations over and over again. By switching to
 no cycling occurs.
 In general, the verbose mode can be of use when you are not sure whether
-the solver "has died", or whether it simply takes very long to solve
+the solver ``has died'', or whether it simply takes very long to solve
 your problem. We refer to the class \ccc{Quadratic_program_options}
 for further details.

@@ -15,8 +15,8 @@ of \ccVar\ to \ccc{val}. An existing entry is overwritten.
 \ccMethod{void set_r (int i, CGAL::Comparison_result rel);}
 {sets the entry $\qprel_i$ of \ccVar\ to \ccc{rel}. \ccc{CGAL::SMALLER}
-means that the $i$-th constraint is of type "$\leq$", \ccc{CGAL::EQUAL}
-means "$=$", and \ccc{CGAL::LARGER} encodes "$\geq$". An existing entry
+means that the $i$-th constraint is of type ``$\leq$'', \ccc{CGAL::EQUAL}
+means ``$=$'', and \ccc{CGAL::LARGER} encodes ``$\geq$''. An existing entry
 is overwritten. \ccVar\ is enlarged if necessary to accomodate this entry.}
 \ccMethod{void set_l (int j, bool is_finite, const NT& val = NT(0));}

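The set_r and set_l entries above belong to the program-builder interface of the QP solver. A small, hedged sketch of that interface follows; the concrete constraint coefficients are invented for illustration and are not taken from this diff:

#include <CGAL/QP_models.h>
#include <CGAL/QP_functions.h>
#include <CGAL/MP_Float.h>
#include <iostream>

typedef CGAL::Quadratic_program<int> Program;
typedef CGAL::Quadratic_program_solution<CGAL::MP_Float> Solution;

int main() {
  // defaults: every constraint is "<=", every variable has lower bound 0 and no upper bound
  Program lp(CGAL::SMALLER, true, 0, false, 0);
  lp.set_a(0, 0, 1);  lp.set_a(1, 0, 1);  lp.set_b(0, 7);   // row 0:  x0 +  x1 <= 7
  lp.set_a(0, 1, -1); lp.set_a(1, 1, 2);  lp.set_b(1, 4);   // row 1: -x0 + 2x1 <= 4
  lp.set_r(1, CGAL::SMALLER);                               // relation of row 1 set explicitly
  lp.set_c(1, -32);                                         // objective: minimize -32 x1
  Solution s = CGAL::solve_linear_program(lp, CGAL::MP_Float());
  std::cout << s.objective_value() << std::endl;
  return 0;
}
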
@@ -138,7 +138,7 @@ $S_{N}:=\sigma(S \setminus B_{S})$, if $\sigma$ denotes the bijection
 $S \rightarrow I$.
 The set of active constraints
 $C=E \cup S_{N}$ is
-introduced, such that a 'reduced' basis matrix $\check{A}_{B}$
+introduced, such that a `reduced' basis matrix $\check{A}_{B}$
 with respect to $B$ is defined as
 \begin{equation}
 \label{def:red_basis_phaseI}

@@ -33,7 +33,7 @@ Ratio Test Step~2. However, this does not always work, for
 $B^{\prime}:=B \setminus \{i\} \cup \{j\}$, $M_{B^{\prime}}$ may be regular
 whereas $M_{B \cup \{j\}}$ and $M_{B \setminus \{i\}}$ are both singular, such
 that both ways to compute the new basis inverse $M_{B^{\prime}}^{-1}$ from
-$M_{B}^{-1}$ by 'growing' and 'shrinking' updates,
+$M_{B}^{-1}$ by `growing' and `shrinking' updates,
 that add or remove one column and row per update,
 are blocked. The simplest way
 to solve this problem is to have a replacement step in phaseII as well.

@@ -477,7 +477,7 @@ A_{C, B_{O}}^{T} & 2D_{B_{O}, B_{O}}
 \end{equation}
 only the Updates~(\ref{update:o_rep_o}) and~(\ref{update:s_rep_s}) are
 replacement updates, whereas the Updates~(\ref{update:s_rep_o})
-and~(\ref{update:o_rep_s}) are 'shrinking' and 'growing' updates.
+and~(\ref{update:o_rep_s}) are `shrinking' and `growing' updates.
 Since the solver uses the reduced basis inverse $\check{M}_{B}^{-1}$ we can
 directly apply the update described in the last section only for
 Updates~(\ref{update:o_rep_o}) and~(\ref{update:s_rep_s}), although we could

@@ -252,7 +252,7 @@ CGAL::Qt_widget_standard_toolbar *stoolbar;
 \end{ccExampleCode}
 To use it, in the constructor of \ccc{My\_window}, it is added:
 \begin{ccExampleCode}
-stoolbar = new CGAL::Qt_widget_standard_toolbar(widget, this, "Standard toolbar'');
+stoolbar = new CGAL::Qt_widget_standard_toolbar(widget, this, ``Standard toolbar'');
 \end{ccExampleCode}
 In this tutorial you can play a little bit with the standard toolbar
 but you will see probably something that is not quite pleasant. If you

@@ -402,7 +402,7 @@ QToolButton *get_point_button; //the toolbar button
 \end{ccExampleCode}
 add the button in the toolbar:
 \begin{ccExampleCode}
-get_point_button = new QToolButton(tools_toolbar, "Get Point");
+get_point_button = new QToolButton(tools_toolbar, ``Get Point'');
 get_point_button->setPixmap(QPixmap( (const char**)point_xpm ));
 \end{ccExampleCode}
 To make the button a toggle button:

@@ -48,7 +48,7 @@ seed, or by using the state functions as described below.
 \ccConstructor{ Random( );}{
 introduces a variable \ccVar\ of type \ccClassTemplateName. The
-seed is chosen "randomly", depending on the system time.}
+seed is chosen ``randomly'', depending on the system time.}
 \ccConstructor{ Random( unsigned int seed);}{
 introduces a variable \ccVar\ of type \ccClassTemplateName\

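The two constructors documented in this hunk are easy to see side by side; a small sketch, where the range passed to get_int is arbitrary:

#include <CGAL/Random.h>
#include <iostream>

int main() {
  CGAL::Random r1;        // seed taken from the system time, different on each run
  CGAL::Random r2(42);    // explicit seed, reproducible sequence
  std::cout << r1.get_int(0, 100) << " "
            << r2.get_int(0, 100) << std::endl;
  return 0;
}
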
@@ -1779,7 +1779,7 @@
 \ccc{std::pair}. \ccRefName\ is a heterogeneous quadruple: it holds
 one object of type \ccc{T1}, one of type \ccc{T2}, one of type
 \ccc{T3}, and one of type \ccc{T4}. A \ccRefName\ is much like a
-container, in that it "owns" its elements. It is not actually a
+container, in that it ``owns'' its elements. It is not actually a
 model of container, though, because it does not support the standard
 methods (such as iterators) for accessing the elements of a
 container.

@@ -949,7 +949,7 @@ created as a range tree (segment tree) with creation variable
 \ccStyle{Sublayer\_type s}, which is a prototype of a two-dimensional
 range tree (segment tree). Because a range tree or a segment tree
 is expecting a prototype for its creation, a recursion anchor which
-builds dimension "zero" is needed.
+builds dimension ``zero'' is needed.
 \ccStyle{Tree\_anchor} described in
 section~\ref{CGALTreeanchor} fulfills all these requirements.
 All tree classes (range tree, segment tree, tree anchor) are

@@ -87,7 +87,7 @@ The algorithm works well even when the inferred surface is composed of several c
 \subsubsection{Contouring Parameters}
 Our implementation of the Poisson surface reconstruction algorithm computes an implicit function represented as a piecewise linear function over the tetrahedra of a 3D Delaunay triangulation constructed from the input points then refined through Delaunay refinement. For this reason, any iso-surface is also piecewise linear and hence may contain sharp creases. As the contouring algorithm \ccc{CGAL::make_surface_mesh()} expects a smooth implicit function these sharp creases may create spurious clusters of vertices in the final reconstructed surface mesh when setting a small mesh sizing or surface approximation error parameter (see Figure~\ref{Surface_reconstruction_points_3-fig-contouring_bad}).\\
-One way to avoid these spurious clusters consists of adjusting the mesh sizing and surface approximation parameters large enough compared to the average sampling density (obtained through \ccc{CGAL::compute_average_spacing()}) so that the contouring algorithm "perceives" a smooth iso-surface. We recommend to use the following contouring parameters:
+One way to avoid these spurious clusters consists of adjusting the mesh sizing and surface approximation parameters large enough compared to the average sampling density (obtained through \ccc{CGAL::compute_average_spacing()}) so that the contouring algorithm ``perceives'' a smooth iso-surface. We recommend to use the following contouring parameters:
 \begin{itemize}
 \item Max triangle radius: at least 100 times the average spacing.
 \item Approximation distance: at least 0.25 times the average spacing.

@@ -255,7 +255,7 @@ the consistency of the constrained marks in edges.}
 \ccFunction{ostream & operator<<(ostream& os, const Constrained_triangulation_2<Traits,Tds> &Ct);}
 {Writes the triangulation as for \ccc{CGAL::Triangulation_2<Traits,Tds>} and, for each face f, and integers i=0,1,2,
-write "C" or "N" depending whether edge
+write ``C'' or ``N'' depending whether edge
 \ccc{(f,i)} is constrained or not.}
 \ccFunction{istream& operator>>(istream& is,Constrained_triangulation_2<Traits,Tds> Ct& t);}

@@ -151,7 +151,7 @@ in the geometric embedding, and there is only one finite vertex
 remaining. The two vertices are adjacent.
 \item \emph{dimension -1.} This dimension is a convention to represent a
 0-dimensional simplex, that is a sole vertex, which will be
-geometrically embedded as an "empty" triangulation, having only one
+geometrically embedded as an ``empty'' triangulation, having only one
 infinite vertex.
 \item \emph{dimension -2.} This is also a convention. The
 triangulation data structure has no vertex. There is no associated

@@ -104,7 +104,7 @@ Below we present its interface.
 \ccMethod{Delaunay_edge dual();}{Returns the
 corresponding dual edge in the Delaunay graph.}
-In the four methods below we consider Voronoi halfedges to be "parallel"
+In the four methods below we consider Voronoi halfedges to be ``parallel''
 to the $x$-axis, oriented from left to right.
 \ccMethod{Delaunay_vertex_handle up();}{Returns a handle to the vertex in