In this section we describe the algorithms used in the reconstruction. Many of them build on the concepts presented earlier. The algorithms are presented in the order of the actual process, from the 2D image at the beginning to the 3D model at the end.
The first step is to apply a smoothing algorithm to the input image. Then we run an edge detector on the result. Our implementation uses Gaussian smoothing and the Canny edge detector [1,10]. The edge detector is modified so that it breaks edges at points of high curvature. The result of this operation is a number of sets of pixels, each consisting of pixels that form an almost straight line in the image.
The final step is to calculate the best-fitting line for each set of pixels. To achieve this we use a Maximum Likelihood Estimation (MLE) technique based on the Levenberg-Marquardt algorithm [20]. The algorithm steps are shown in Table 2, and the minimization function used is shown in Table 3.
These types of algorithms are usually very time-consuming. Fortunately, in our case we can start with a very good initial estimate: the first and the last pixel of the edge. These two points are likely to lie very near the best line, due to the way the edge is traced in the previous steps, so the MLE converges very quickly.
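A minimal sketch of this fitting step, assuming SciPy's Levenberg-Marquardt solver in place of the paper's own implementation (whose exact cost function is given in Table 3): the line is parameterized as x cos(theta) + y sin(theta) = rho, and the initial estimate is taken from the first and last pixel of the edge, as described above.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_line(pixels):
    """Fit (rho, theta) of the line x*cos(t) + y*sin(t) = rho to an
    ordered array of edge pixels, seeded from the first and last pixel."""
    p0, p1 = pixels[0], pixels[-1]
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    theta0 = np.arctan2(dx, -dy)  # normal direction of the chord p0->p1
    rho0 = p0[0] * np.cos(theta0) + p0[1] * np.sin(theta0)

    def residuals(params):
        # Signed point-to-line distances of all pixels.
        rho, theta = params
        return pixels[:, 0] * np.cos(theta) + pixels[:, 1] * np.sin(theta) - rho

    sol = least_squares(residuals, x0=[rho0, theta0], method="lm")
    return sol.x  # (rho, theta)
```

Because the seed already passes through two pixels of the edge, the solver typically needs only a few iterations, which matches the fast convergence noted above.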
An alternative method uses a Hough transform [21] after an unmodified Canny edge detector. This approach was abandoned for several reasons.
Tests with the Hough version of the implementation and with the final MLE version have shown that the latter can be faster by up to two or three orders of magnitude, depending on the input image.
The next step in the reconstruction process is to identify the vanishing points from the intersections of the detected lines. To accomplish this step successfully, user guidance is necessary.
Various techniques [13,17] have been proposed to achieve automated detection of the vanishing points. These methods provide ways of detecting more lines in the scene, based on patterns and geometric structures, but they do not try to distinguish the lines that will lead to a correct result from those that will not. The immediate consequence is that the `false lines' cause a high rate of false detections, which leaves the final selection of the correct vanishing points from a set of candidates to the user.
For this reason, a simple RANSAC-like algorithm was chosen in our implementation over the more complex techniques proposed. The steps of the process are shown in Table 4. The algorithm for choosing the most supported intersection point among all the intersections is shown in Table 5.
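The voting idea behind Tables 4 and 5 can be sketched as follows: every pair of detected lines yields a candidate intersection (the cross product of their homogeneous line vectors), and the candidate to which the most lines pass close is kept. This is a hedged illustration of a RANSAC-like vote, not the paper's exact algorithm; the tolerance `tol` is an assumed parameter.

```python
import numpy as np

def vanishing_point_ransac(lines, tol=2.0):
    """Pick the pairwise intersection supported by the most lines.
    `lines` are homogeneous line vectors (a, b, c) normalized so that
    a^2 + b^2 = 1; then |l . p| is the point-to-line distance for a
    point p with last coordinate 1."""
    best, best_support = None, -1
    n = len(lines)
    for i in range(n):
        for j in range(i + 1, n):
            p = np.cross(lines[i], lines[j])  # intersection (homogeneous)
            if abs(p[2]) < 1e-12:
                continue  # (near-)parallel pair: intersection at infinity
            p = p / p[2]
            # Count lines passing within `tol` of the candidate point.
            support = sum(abs(l @ p) < tol for l in lines)
            if support > best_support:
                best, best_support = p[:2], support
    return best, best_support
```

Outlier lines simply fail to support the true candidate, so a single stray detection does not shift the chosen intersection.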
In Section 3.2 we saw that we can compute the homography matrix with Equation 3. To define it completely, we have to know some lengths in the scene. Also, if we want to measure heights (distances between horizontal planes and the ground), we must calculate the scale factor for the vertical direction.
We need the following information:
Given a reference length on the ground, we calculate its two components on the coordinate system of . is the center of the world on the image with coordinates .
Let the two segments be:
Then Equation 15 is transformed to:
With Equations 16 and 17 we obtain the last two unknown scale factors and . Now we can compute any distance between objects on the ground plane, as well as heights. The equations for these calculations were described in Section 3.4.
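Once the homography is fully determined, a ground-plane measurement reduces to mapping the two image points through it and taking the Euclidean distance in world coordinates. The sketch below assumes `H` maps image points to metric ground-plane coordinates, as produced by the calibration step described above; the function name is illustrative.

```python
import numpy as np

def ground_distance(H, p_img1, p_img2):
    """World distance between two image points lying on the ground plane,
    given a homography H from image to metric ground-plane coordinates."""
    def to_world(p):
        # Apply H in homogeneous coordinates, then dehomogenize.
        w = H @ np.array([p[0], p[1], 1.0])
        return w[:2] / w[2]
    return float(np.linalg.norm(to_world(p_img1) - to_world(p_img2)))
```

Heights are handled analogously with the vertical scale factor, per the equations of Section 3.4.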
At this point we have enough information to determine the position of the camera. The coordinates are derived directly from the VPs, as shown in Equation 18.
If and are the points in the world for and respectively:
is a point on the ground.
is the image of on the ground.
We can measure the distance (using Equation 6).
If and are the points in the world for and then both of them are on the ground plane.
The above are sufficient to recreate a scene with acceptable detail, provided we have a method of identifying objects in the image. Various techniques are available for object segmentation and identification [5,3,9]. The basic structures that need to be defined before we can recover a 3D model are the following:

Automatic segmentation and object identification are beyond the scope of this work. In the next section, a user-guided procedure for identifying lines and planes in an image is described.