World Wide Guide | Knowledge Bank | Kukushkin's Notebook
Design Fundamentals
The previous section dealt with placing 3-D objects into the scenery and paid no attention to the code that actually draws the objects. This section describes some aspects of writing such code.
Basically, drawing a 3-D object consists of two tasks: drawing its primitives and drawing them in the right order. The second task is often much more difficult than the first.
For simple objects, like convex polyhedra, the drawing order of primitives is not important: the object is always displayed correctly. Some more complex objects can be displayed correctly simply by putting the drawing instructions in the right order. For example, a hangar can be successfully implemented by displaying its inner walls first.
More complex objects cannot be rendered correctly from all view angles with a fixed order of drawing instructions. In such cases, the VectorJump() instruction should be used to change the execution order according to the viewpoint location. This instruction accepts an imaginary plane as a parameter and performs a jump if the viewpoint is "behind" this plane. This makes it possible to implement the 'cutting' algorithm described in FS5FACTS.TXT. A typical code should look like this:
VectorJump( :That_side ... )
Call( :That_piece )
Call( :This_piece ) Jump( :Done )
:That_side
Call( :This_piece ) Call( :That_piece ) Jump( :Done )
:This_piece
... ; Drawing instructions for one piece of the object
Return
:That_piece
... ; Drawing instructions for the other piece
Return
:Done
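The test that VectorJump() performs boils down to evaluating the sign of the plane equation at the viewpoint and picking the far-to-near drawing order accordingly. The Python sketch below illustrates this logic only; the function and piece names are my own and are not FS5 code:

```python
def side_of_plane(normal, d, point):
    """True if `point` is 'behind' the plane n.p + d = 0, i.e. the
    plane equation evaluates negative there. Illustrative analogue of
    the test VectorJump() performs on the viewpoint."""
    nx, ny, nz = normal
    px, py, pz = point
    return nx * px + ny * py + nz * pz + d < 0

def draw_order(viewpoint, normal, d):
    """Return the painter's-algorithm order of the two pieces, as in
    the VectorJump() example above: the piece on the viewer's side of
    the plane is always drawn last."""
    if side_of_plane(normal, d, viewpoint):
        return ["this_piece", "that_piece"]   # the :That_side branch
    return ["that_piece", "this_piece"]       # the fall-through branch
```

The key invariant is that whichever side the viewpoint is on, the piece on the opposite side is drawn first, so the nearer piece correctly paints over it.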
Note the usage of subroutines for drawing pieces of the object. Without them, the drawing code for each piece would have to be coded twice. This is especially important when these subroutines contain further VectorJump()s to resolve visibility problems within pieces.
Choosing the imaginary plane for VectorJump() can be difficult, because the two resulting pieces are not allowed to cross it. Sometimes it is even necessary to cut some graphic primitives into two parts. The plane should be chosen so that a minimal number of primitives (ideally none) have to be cut into parts. The same applies to resolving visibility problems within pieces. The next priority after minimizing the number of cut primitives is to minimize the number of VectorJump()s. Here, the advantages of one-sided polygons and of code with a fixed order of execution (like the hangar example mentioned above) should be used to the full extent. VectorJump()s can sometimes create the illusion that fully covered parts of the object are not drawn at all. This is not the case with this algorithm: all parts are always drawn, so the only possible optimization is to reduce the overhead produced by the VectorJump()s.
Having to test visibility with VectorJump()s increases the size of the BGL code and slightly slows it down, compared to similar code without VectorJump()s. In some cases, an incorrect drawing order creates only minimal disturbances that can hardly be seen. In these cases, it may be justified not to use VectorJump()s, thus minimizing the overhead.
Visibility conflicts between primitives that always have the same color and intensity can always be neglected, but one should keep in mind that the intensity of polygons can vary with their orientation.
Visibility conflicts between thin lines, dotted lines and single dots can also often be neglected.
Obviously, the bottom sides of solid objects sitting on the ground should never be drawn, because they are never visible.
Sometimes the polygons composing a 3-D object are decorated with lines and dots. For example, airport towers often have a blinking white-green light. While polygons can be visible from only one side, such lines and dots are visible from all sides. This can cause problems like decorations from the opposite side of the object being 'seen' through the object. It is good practice to put a VectorJump() before drawing such decorations in order to ensure they are visible only on the visible side of the polygon.
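The condition such a VectorJump() should enforce is the same one the renderer applies to the one-sided polygon itself: the viewpoint must lie on the front (normal) side of the polygon's plane. A small Python illustration of that test (names are mine, not FS5 code):

```python
def polygon_front_visible(normal, point_on_poly, viewpoint):
    """True if the viewpoint lies on the front side of the polygon's
    plane, i.e. the side the one-sided polygon faces. All arguments
    are plain (x, y, z) tuples; purely illustrative."""
    vx = viewpoint[0] - point_on_poly[0]
    vy = viewpoint[1] - point_on_poly[1]
    vz = viewpoint[2] - point_on_poly[2]
    nx, ny, nz = normal
    return nx * vx + ny * vy + nz * vz > 0

def maybe_draw_decoration(normal, point_on_poly, viewpoint, draw):
    """Draw the decoration (e.g. a blinking light) only when its
    carrier polygon faces the viewer, hiding it on the back side."""
    if polygon_front_visible(normal, point_on_poly, viewpoint):
        draw()
```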
SDL allows implementing subroutines. They are used for special drawing modes, like shadows or 3-D objects, and for reducing the code size by using the same drawing instructions more than once. Subroutines can be nested. A subroutine must reside in the same Area() block as the calling instruction, similar to jump destinations. There is no way to share a subroutine between Area()-blocks without duplicating it in each block.
Each subroutine MUST end with a Return() instruction. Missing or misplaced Returns are one of the most common errors in scenery design. They often lead to database errors and even lock-ups of FS5.
Special drawing modes are discussed elsewhere in this document. Here, the re-use of drawing instructions is discussed.
Sometimes, the same sequence of drawing instructions is used to draw different objects or different parts of an object. In this case, it is often useful to put these instructions into a subroutine and call it several times from different locations, thus reducing the code size. The main problem here is that delta coordinates are hardcoded into drawing instructions, so calling a routine twice using a usual Call() does not make much sense: each primitive would simply be drawn twice at the same location. The solution is to modify the delta coordinate system before calling the subroutine. Such a modification can be made either by calling the routine from different RefPoints or by using special call instructions that execute it in a transformed coordinate system.
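The RefPoint idea can be pictured as one shared routine whose hardcoded deltas are resolved against an origin chosen by the caller. A Python sketch of the principle (data and names are illustrative, not FS5 code):

```python
# Delta coordinates of one object's primitives, "hardcoded" into the
# shared routine just as they would be in the BGL code:
DELTAS = [(0, 0, 0), (10, 0, 0), (10, 10, 0), (0, 10, 0)]

def draw_object(ref_point):
    """Shared drawing routine: every hardcoded delta is resolved
    against the RefPoint supplied by the caller."""
    rx, ry, rz = ref_point
    return [(rx + dx, ry + dy, rz + dz) for dx, dy, dz in DELTAS]

# Calling the same routine from two different RefPoints draws two
# copies of the object at two different locations:
copy1 = draw_object((100, 200, 0))
copy2 = draw_object((500, 200, 0))
```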
Using different RefPoints is most suitable for displaying similar objects. The PerspectiveCalled code for both (or more) objects should in this case use a regular Call() to call the drawing routine after defining the RefPoint:
PerspectiveCall( :Object1 ) ; No ShadowCall()s here for simplicity
PerspectiveCall( :Object2 )
...
:Object1
Perspective
RefPoint( ... )
Call( :DrawObject )
Return
...
:Object2
Perspective
RefPoint( ... )
:DrawObject ; Save the Call() in this object by putting the routine here
... ; Routine to draw Object1 and Object2
Return
For simple objects, some customization can be done between
the RefPoint and the Call() instruction. For example, a surface color can
be selected or a texture file loaded. The routine should in this case assume
that these parameters have already been set to proper values and simply use
them. It is important to do all such customization inside the PerspectiveCalled
routine.
This approach requires that both (or more) objects share the same Area() block. This is not a problem: one of the main goals of dividing the scenery into Area() blocks is to reduce the amount of scenery that has to be kept in memory, and by coding the drawing instructions only once this amount is reduced so much that it normally justifies the overhead of always having either both objects in memory or none. However, this approach should mainly be used for objects that are relatively close to each other, in order not to over-increase the visibility range of the Area() block.
Another approach is to call the routine with a transformed coordinate system. It is useful for drawing similar parts of the same object. There are two special call instructions, RotatedCall() [FSASM:RotCall] and TransformCall() [FSASM:TransRotCall], that allow doing this.
RotatedCall() takes 3 angles as parameters. These angles specify rotation angles around the 3 axes of the delta coordinate system. The subroutine is then called with a rotated coordinate system. It must end with a usual Return() instruction, which will also restore the original coordinate system.
TransformCall() takes as parameters 3 displacement values, 3 rotation angles and 3 variables. It is similar to RotatedCall(), but after the rotation, the origin of the coordinate system is also moved as specified by the displacement values. The use of the variables is described in the section concerning animated objects; their addresses should normally be set to 0.
Points() defined outside of these routines are not rotated by these instructions and can be used from inside. Points() defined inside these routines are not rotated back and can be used later too. However, the texture mapping engine is not fully rotated, thus textures loaded outside should never be used inside and vice versa.
No RefPoint() or RefPoint-like instructions (like SetScale()) should be used inside routines called with transformed coordinates. Also, these transformations sometimes cause problems with crash detection and shadows, as described later in this section. In general, rotation around the Z axis normally does not lead to problems; other transformations can.
In particular, RotatedCall()s around the Z axis can be used to display (approximated) cylindrical objects by calling a set of instructions for drawing its sector under different rotation angles. This reduces both the code size and the need to do boring computations for rotated point coordinates.
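The cylinder trick amounts to rotating one sector's point coordinates by equal angle steps around the Z axis, so only a single sector has to be worked out by hand. The Python sketch below mimics the repeated RotatedCall()s (names and the 8-sector choice are illustrative):

```python
import math

def rotate_z(point, angle):
    """Rotate an (x, y, z) point around the Z axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

def cylinder_points(sector_points, n_sectors):
    """Approximate a cylinder by repeating one sector's points under
    n_sectors equal Z rotations, as repeated RotatedCall()s would."""
    step = 2 * math.pi / n_sectors
    return [[rotate_z(p, k * step) for p in sector_points]
            for k in range(n_sectors)]

# One hand-computed point, replicated into an 8-sided "cylinder":
octagon = cylinder_points([(10.0, 0.0, 0.0)], 8)
```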
3-D objects should normally produce shadows. They are normally displayed using the ShadowCall() instruction. This instruction calls a subroutine in a special mode where instead of polygons, their shadows are drawn.
Lines and single dots do not produce shadows.
There are two approaches to displaying a shadow. It can either be displayed from outside of a PerspectiveCalled routine:
PerspectiveCall( :P ) ; Method 1
ShadowCall( :S ) Jump( :Done )
:P
Perspective
:S ; Labels are not instructions, so they can be
   ; inserted between Perspective and RefPoint()
RefPoint( ... )
... ; Drawing instructions
Return
:Done
This method does not cast shadows on other 3-D objects, because
the shadow here is basically a 2-D object that is drawn earlier. Another
approach is displaying the shadow from inside of the routine:
PerspectiveCall( :P ) ; Method 2
Jump( :Done )
:P
Perspective
RefPoint( ... )
ShadowCall( :S )
:S
... ; Drawing instructions
Return ; No second return needed, but this one will be executed twice
:Done
This method casts shadows on other 3-D objects in most cases.
However, shadows are always drawn as projections on the horizontal ground
surface. So they appear correctly only when the viewpoint is near the line
connecting the RefPoint of the object with the virtual sun.
Neither method casts realistic shadows on other 3-D objects. I have no opinion regarding which of them is better. The default scenery uses method 1, while many freeware sceneries prefer method 2.
All primitives used in a ShadowCalled routine should be drawn using points defined with Points() or VecPoints(). Instructions that accept explicit coordinates, like MoveTo( x y z ) or DrawTo(), will not be projected correctly and can even cause database errors! The reason for this could be that the projection onto the ground surface is actually performed by the Points() instruction, and all primitives are then drawn using already projected coordinates.
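Geometrically, this projection moves each vertex along the sun direction until it reaches the ground plane. The sketch below shows the math only; FS5's internal implementation is not documented, so this is an assumption about what Points() effectively computes:

```python
def project_to_ground(point, sun_dir):
    """Project a point onto the ground plane z = 0 along the sun
    direction. `sun_dir` points from the sun toward the scene and must
    have a nonzero vertical component. Illustrative only: in FS5 the
    projection is performed internally by the Points() instruction."""
    px, py, pz = point
    sx, sy, sz = sun_dir
    t = -pz / sz                  # distance along sun_dir down to z = 0
    return (px + t * sx, py + t * sy, 0.0)

# A point 5 m up, sun shining at 45 degrees: the shadow lands 5 m away.
shadow = project_to_ground((0.0, 0.0, 5.0), (1.0, 0.0, -1.0))
```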
Default buildings internally execute a Points() instruction, so they produce correct shadows. This also means that some or all pre-defined points are overwritten after a Building() instruction is executed.
Nonzero Z displacements and rotations around the X and Y axes in RotatedCall()s and TransformCall()s transform the ground surface so that shadows leave the ground and appear in the air from certain viewing angles. For this reason, such transformations should never be used for shadow-casting objects or parts of them. This is especially important for animated objects.
Besides drawing themselves, 3-D objects are also responsible for crash detection. Crash detection is normally implemented by checking whether the aircraft or the viewpoint is inside a specific area. In case of a positive result, the appropriate crash code is written into the crash detection variable (0284, FSASM:vCrashFlag).
Because of the unreliability of viewpoint-based crash detection, there are two families of crash codes. Some codes trigger a crash event only from the cockpit view, where the viewpoint is located inside the aircraft, while others do it always. Crash codes (all numbers decimal) 4 (crash), 14 (building), 16, 18 and 20 (collisions) always work and thus should NOT be used for viewpoint-based crash detection. Most other crash codes work only from the cockpit view. There is another code for a building crash (6) that can be used for viewpoint-based building crash detection. Sometimes it is useful to combine both kinds of crash detection. In such cases, aircraft-based crash codes can ONLY be used if the aircraft-based crash detection is not affected by the viewpoint-based detection.
The aircraft-based crash detection is done using the Monitor3D() [FSASM: Jump3Ranges] instruction. A typical code should look like this:
Monitor3D( :No_crash ... )
SetVar( 0284 14 ) ; The planetarium was insured
:No_crash
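The containment test that Monitor3D() performs can be pictured as an axis-aligned box check around the RefPoint, in meters regardless of the scale factor. A Python illustration (function name and ranges are mine):

```python
def monitor3d(pos, x_range, y_range, z_range):
    """True when `pos` = (x, y, z) lies inside the axis-aligned box
    given by three (min, max) ranges, mimicking the containment test
    Monitor3D() performs around the RefPoint. The unit is always
    1 meter, independent of the RefPoint scale factor."""
    return all(lo <= c <= hi
               for c, (lo, hi) in zip(pos, (x_range, y_range, z_range)))

# Aircraft at (5, -2, 10) relative to the RefPoint, building occupies
# a 20 x 20 x 30 m box: a crash is detected.
crash_flag = 0
if monitor3d((5.0, -2.0, 10.0), (-10, 10), (-10, 10), (0, 30)):
    crash_flag = 14   # building crash code, triggered from any view
```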
The viewpoint-based crash detection is done either by testing variables or using the VectorJump() instruction. The variables of interest are 037E, 0382, 0386 (delta coordinates of the viewpoint, FSASM:vDeltaE, vDeltaA, vDeltaN). A typical code should look like this:
; FS5.0: use 3 IfVarRange()s instead
IfVarRange3( :No_crash 37E ... ... 382 ... ... 386 ... ... ) ; FS5.1 only
SetVar( 0284 6 ) ; The viewpoint-based building crash code
:No_crash
Using the VectorJump() instruction, it is possible to check
if the viewpoint is inside a convex polyhedron. This is very useful for mountains
and other big objects that are not rectangular. This check is done by executing
a separate VectorJump for each side of the polyhedron that would check if
the viewpoint is "behind" it (which means visible from its inner side). The
viewpoint is inside the polyhedron if and only if it is visible from all
its inner sides. A typical code should look like this:
VectorJump( :No_crash ... ) ; 1st side
...
VectorJump( :No_crash ... ) ; Last side
SetVar( 0284 2 ) ; The only way to test mountain crashes
:No_crash
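The half-space reasoning behind this code can be sketched directly: a point is inside a convex polyhedron if and only if it is on the inner side of every face plane, with one VectorJump() per face. A Python illustration (face representation and names are mine):

```python
def inside_convex_polyhedron(point, faces):
    """`faces` is a list of (normal, d) pairs describing face planes
    n.p + d = 0 with each normal pointing outward. The point is inside
    iff n.p + d < 0 for every face; each test corresponds to one
    VectorJump() that escapes to :No_crash when it fails."""
    px, py, pz = point
    for (nx, ny, nz), d in faces:
        if nx * px + ny * py + nz * pz + d >= 0:
            return False          # outside this face: no crash
    return True

# Unit cube 0..1 on each axis; the bottom face is omitted, as the text
# suggests for a face lying on the ground:
cube = [((1, 0, 0), -1), ((-1, 0, 0), 0),
        ((0, 1, 0), -1), ((0, -1, 0), 0),
        ((0, 0, 1), -1)]
```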
If the lower side of the polyhedron is horizontal and at the synth floor altitude (like in a mountain), the VectorJump() for it should be omitted because the aircraft can never get below the ground and each unnecessary instruction reduces the performance without any gains.
Viewpoint-based crash detection instructions respect the scale factor of the RefPoint and the transformations made by RotatedCall()s/TransformCall()s. In contrast, Monitor3D() is NOT affected by the scale factor; the unit here is always 1 meter. Monitor3D() also ignores all transformations made by RotatedCall()/TransformCall() except rotations around the Z axis. The fact that Z-axis rotations are honored looks more like a last-minute bugfix for terminal buildings, which have many parts rotated this way.
Default buildings displayed by the Building() instruction execute Monitor3D() internally in order to detect crashes. For this reason, they should always be displayed using the RefPoint scale factor 1, otherwise the size of the crash detection area would not match the size of the visible building. Also, they should never be moved using TransformCall() or rotated around horizontal axes (the latter is difficult to imagine anyway). While the visible part of the building would indeed be transformed, its crash detection would not, sometimes leading to unexplainable crashes. Specifying nonzero dx..dz coordinates in the Building() instruction itself is safe.
The crash detection for complex objects should be divided into parts, so that one of the described methods can be applied to each part.
Objects with little or no volume, like radio masts, pose a special problem for crash detection. Here, it is very unlikely that the RefPoint of the aircraft would actually be inside the object, so the size of the aircraft cannot be neglected. The crash detection for such objects is normally done by defining a cuboid crash detection area around them using Monitor3D(). Determining the radius of this area is a major problem, because it should depend on the wingspan and the size of the current aircraft. Because the scenery does not know the aircraft dimensions, it has to make some assumptions about them. My opinion is that the size of the default Cessna can be used here.
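Sizing the Monitor3D() box around a thin object by an assumed aircraft size can be sketched as follows. The Cessna half-dimensions below are rough illustrative assumptions, not figures from FS5:

```python
# Assumed half-dimensions of the aircraft (illustrative values only,
# loosely based on a default Cessna: ~11 m wingspan, ~8 m length):
HALF_SPAN = 5.5
HALF_LENGTH = 4.0

def mast_crash_box(mast_x, mast_y, mast_radius, mast_height):
    """Build an axis-aligned crash detection box around a thin radio
    mast, expanded by the assumed aircraft half-dimensions so that a
    wing tip touching the mast still registers as a hit. Returns the
    three (min, max) ranges a Monitor3D() would use."""
    margin = mast_radius + max(HALF_SPAN, HALF_LENGTH)
    return ((mast_x - margin, mast_x + margin),
            (mast_y - margin, mast_y + margin),
            (0.0, mast_height))
```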
Objects in FS5 are normally not landable, because FS5 only knows how to draw them, not which surfaces they have. A landing attempt on such an object would simply cause the aircraft to descend inside the object, most likely triggering a crash event. This is normally not a problem, because most objects in FS5 are too small to land on anyway. However, some objects, like aircraft carriers, should contain landable surfaces above the ground level defined in the synth scenery. FS5 allows the definition of so-called elevated surfaces that can be used to solve such problems. They are specified by defining horizontal polygons above the ground. An aircraft cannot descend or fall through such a surface: it would either land or crash on it.
It is important to understand that elevated surfaces do not affect the ground altitude, as the altitude parameter in synth tiles does. Their only effect is preventing aircraft from descending through them. Thus, an aircraft flying below an elevated surface would neither be put onto it nor trigger a crash event. If the area below an elevated surface should be crashable, usual crash detection algorithms from section 9 should be used. It is even possible to define a stack of elevated surfaces by defining multiple surfaces with different elevation at the same location. Each of them would be landable and prevent aircraft from falling through.
Unlike other properties of an object, elevated surfaces are defined not in the visual scenery, but in section 16 of the BGL file. This is very reasonable, because otherwise the aircraft would fall through such a surface if all view windows were closed at some moment.
Using section 16 instructions can be difficult, especially when developing macros that accept object coordinates as parameters. A typical code for an elevated surface should look like this:
Area16( ... ... ... ... )
RefPoint( 2 : 1 ... ... E=0 )
SenseBorder( : ... ) ; Is the aircraft inside the elevated surface?
SetElevation( ... ) ; Define an elevated surface, if so
End16
The Area16() [FSASM:SurfaceGroup+EndSurfaceGroup] instruction uses a syntax different from the Area() instruction used for the visual scenery. Instead of a location and a visibility range, a rectangular area in which the SDL code should be executed has to be specified. This could lead to problems when developing movable SCASM [1.6] macros, because this rectangle cannot be calculated from within the macro. However, FS5 executes this code in a slightly larger area than the specified rectangle: the code becomes active when the distance between the aircraft and the rectangle drops below approx. 7 NM. It is therefore possible to specify equal latitude and longitude values for the opposite sides of the rectangle, because most elevated surfaces fit within the 7 NM area anyway. This allows inserting Area16() blocks into SCASM macros that can be called with a single lat/lon pair as a parameter. The code in the Area16() block should check whether the horizontal aircraft coordinates are within the polygon that defines the elevated surface and, if so, call the SetElevation() instruction to set the elevation for the surface.
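The check of whether the horizontal aircraft coordinates fall inside the surface polygon is the classic point-in-polygon test; for a convex polygon it again reduces to one half-plane test per edge. A Python illustration (the carrier-deck numbers and names are mine):

```python
def inside_convex_polygon(x, y, vertices):
    """True if (x, y) lies inside the convex polygon given by
    `vertices` listed counter-clockwise. Each edge contributes one
    half-plane test, the 2-D analogue of the containment check the
    Area16() code has to perform."""
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Negative cross product: (x, y) is right of this edge, outside
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True

deck = [(0, 0), (80, 0), (80, 20), (0, 20)]   # carrier deck outline
elevation = 0.0
if inside_convex_polygon(40, 10, deck):
    elevation = 20.0   # meters: would become the SetElevation() value
```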
The complexity of this code is severely limited by the fact that the total size of the SDL code in an Area16() block cannot exceed 240 bytes, and 64 bytes are always eaten up by the RefPoint(). Also, not all instructions work here. SCASM [v1.6] ignores all instructions in Area16() that are not explicitly mentioned in the manual. If the definition of a complex elevated surface cannot fit into 240 bytes, it should be split into simpler parts and separate Area16() blocks should be used for them.
Elevated surfaces are always smooth. The SurfaceType() instruction cannot be used to make such a surface rough or water.
Runways on elevated surfaces should be displayed from the same PerspectiveCalled routine as the rest of the object. RunwayCall() should NOT be used. Because the RunwayData() instruction is called directly here, it changes the RefPoint. So if RunwayData() is not the last instruction in the object drawing routine, the RefPoint must be set up again after its execution.
Also, while only the top side of the runway polygon is visible, runway lights are visible from all sides and can thus show through the object when the viewpoint is below the runway. For this reason, VectorJump() should be used to avoid drawing the runway when it is above the viewpoint.
The easiest way of creating objects is to construct them from default buildings. Simply putting several PerspectiveCalled Building()s close to each other often creates visibility conflicts between individual buildings. Because of this, such objects should normally be drawn using a single PerspectiveCalled routine, and VectorJump()s should be used to resolve visibility problems.
In an object composed of Building()s, some walls can be completely inside the object and thus never visible. Drawing these walls would decrease performance and could also lead to more visibility conflicts. The Building() instruction contains a "flags" parameter that specifies which walls are to be drawn. By leaving certain bits as 0, FS5 can be instructed not to display such walls.
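The "flags" parameter can be pictured as a bit mask with one bit per wall. The bit layout below is hypothetical, chosen only to show the principle; the real assignment of bits to walls is defined by the Building() instruction itself:

```python
# Hypothetical bit layout: one bit per wall. Which bit controls which
# wall in the real Building() instruction is NOT specified here.
WALL_NORTH, WALL_EAST, WALL_SOUTH, WALL_WEST = 1, 2, 4, 8

def building_flags(visible_walls):
    """Combine the bits of the walls that should actually be drawn;
    walls buried inside a composite object keep their bit at 0, so
    FS5 skips drawing them."""
    flags = 0
    for wall in visible_walls:
        flags |= wall
    return flags

# A building whose west wall is hidden inside the composite object:
flags = building_flags([WALL_NORTH, WALL_EAST, WALL_SOUTH])   # = 7
```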
In order to increase performance, the graphics engine displays buildings that are a certain distance away as thin poles. This distance depends on the building size. Unfortunately, FS5 calculates it in a very strange way, so that many buildings are reduced to poles long before they become 1 pixel wide. This effect is very unrealistic, especially in objects composed of multiple Building()s, where different parts would shrink to poles at different distances. It can be disabled by setting the variable 033B (distance to RefPoint in meters) to 0. Because FS5 uses this variable to test the distance, it is tricked into thinking that the building is very close to the observer.
Every RefPoint() instruction re-initializes this variable, so no side effects should be expected. However, setting it to 0 causes buildings to be displayed as textured objects from any distance. Because textured objects are drawn much more slowly than thin lines, having many such buildings visible at one time could significantly reduce the frame rate. For this reason, the V1= (visibility) parameter of the RefPoint should be set to a realistic value in order to avoid unnecessary drawing. Also, the range parameter of the Area() instruction should be set to a realistic value in order to avoid unnecessary PerspectiveCall()s.
When using the Building() instruction, one should not forget its limitations mentioned earlier in this section.