Creating Your Device
For the rest of the development of this game, you should still be working with the project you started in the last chapter. One of the first things you want to do now is set up the project to actually work with the sample framework. In the Main method you created in the last chapter, add the code in Listing 4.1 immediately following the creation of the GameEngine class.
Listing 4.1. Hooking Events and Callbacks
// Set the callback functions. These functions allow the sample framework
// to notify the application about device changes, user input, and Windows
// messages. The callbacks are optional so you need only set callbacks for
// events you're interested in. However, if you don't handle the device
// reset/lost callbacks, then the sample framework won't be able to reset
// your device since the application must first release all device resources
// before resetting. Likewise, if you don't handle the device created/destroyed
// callbacks, then the sample framework won't be able to re-create your device
// resources.
sampleFramework.Disposing += new EventHandler(blockersEngine.OnDestroyDevice);
sampleFramework.DeviceLost += new EventHandler(blockersEngine.OnLostDevice);
sampleFramework.DeviceCreated += new DeviceEventHandler(blockersEngine.OnCreateDevice);
sampleFramework.DeviceReset += new DeviceEventHandler(blockersEngine.OnResetDevice);

sampleFramework.SetKeyboardCallback(new KeyboardCallback(blockersEngine.OnKeyEvent));

// Catch mouse move events
sampleFramework.IsNotifiedOnMouseMove = true;
sampleFramework.SetMouseCallback(new MouseCallback(blockersEngine.OnMouseEvent));

sampleFramework.SetCallbackInterface(blockersEngine);
A lot is happening in this small section of code. Four events are hooked to let you know when the rendering device has been created, lost, reset, and destroyed; you'll add the implementations for these handlers in a moment (Listing 4.2). After that, you hook the two callbacks the sample framework provides for user input, namely the keyboard and mouse (Listing 4.3). Finally, you call the SetCallbackInterface method, passing in the game engine instance; however, you might notice that the instance doesn't yet implement the correct interface. You'll need to fix that as well.
Listing 4.2. Framework Event Handlers
/// <summary>
/// This event will be fired immediately after the Direct3D device has been
/// created, which will happen during application initialization and
/// windowed/full screen toggles. This is the best location to create
/// Pool.Managed resources since these resources need to be reloaded whenever
/// the device is destroyed. Resources created here should be released
/// in the Disposing event.
/// </summary>
private void OnCreateDevice(object sender, DeviceEventArgs e)
{
    SurfaceDescription desc = e.BackBufferDescription;
}

/// <summary>
/// This event will be fired immediately after the Direct3D device has been
/// reset, which will happen after a lost device scenario. This is the best
/// location to create Pool.Default resources since these resources need to
/// be reloaded whenever the device is lost. Resources created here should
/// be released in the OnLostDevice event.
/// </summary>
private void OnResetDevice(object sender, DeviceEventArgs e)
{
    SurfaceDescription desc = e.BackBufferDescription;
}

/// <summary>
/// This event will be fired after the Direct3D device has entered a lost
/// state and before Device.Reset() is called. Resources created in the
/// OnResetDevice callback should be released here, which generally
/// includes all Pool.Default resources. See the "Lost Devices" section of the
/// documentation for information about lost devices.
/// </summary>
private void OnLostDevice(object sender, EventArgs e)
{
}

/// <summary>
/// This event will be fired immediately after the Direct3D device
/// has been destroyed, which generally happens as a result of application
/// termination or windowed/full screen toggles. Resources created in the
/// OnCreateDevice callback should be released here, which generally includes
/// all Pool.Managed resources.
/// </summary>
private void OnDestroyDevice(object sender, EventArgs e)
{
}
Listing 4.3. User Input Handlers
/// Hook the mouse events
private void OnMouseEvent(bool leftDown, bool rightDown, bool middleDown,
    bool side1Down, bool side2Down, int wheel, int x, int y)
{
}

/// Handle keyboard strokes
private void OnKeyEvent(System.Windows.Forms.Keys key, bool keyDown, bool altDown)
{
}
Now, the SetCallbackInterface method you called earlier expects a variable of type IFrameworkCallback, and you passed in the game engine class, which does not implement this type. You can fix this easily by changing the game engine class declaration:
public class GameEngine : IFrameworkCallback, IDeviceCreation
Of course, now you need to add the implementation of the two methods this interface defines (Listing 4.4).
Listing 4.4. Framework Callback Interface
/// <summary>
/// This callback function will be called once at the beginning of every frame.
/// This is the best location for your application to handle updates to the
/// scene but is not intended to contain actual rendering calls, which should
/// instead be placed in the OnFrameRender callback.
/// </summary>
public void OnFrameMove(Device device, double appTime, float elapsedTime)
{
}

/// <summary>
/// This callback function will be called at the end of every frame to perform
/// all the rendering calls for the scene, and it will also be called if the
/// window needs to be repainted. After this function has returned, the sample
/// framework will call Device.Present to display the contents of the next
/// buffer in the swap chain.
/// </summary>
public void OnFrameRender(Device device, double appTime, float elapsedTime)
{
}
With that boilerplate code out of the way, you're ready to start doing something interesting now. First, you should tell the sample framework that you're ready to render your application and to start the game engine. Go back to the Main method, and add the code in Listing 4.5 immediately after your SetCallbackInterface call.
Listing 4.5. Starting the Game
try
{
#if (!DEBUG)
    // In retail mode, force the app to fullscreen mode
    sampleFramework.IsOverridingFullScreen = true;
#endif
    // Show the cursor and clip it when in full screen
    sampleFramework.SetCursorSettings(true, true);

    // Initialize the sample framework and create the desired window and
    // Direct3D device for the application. Calling each of these functions
    // is optional, but they allow you to set several options that control
    // the behavior of the sampleFramework.
    sampleFramework.Initialize(false, false, true);
    sampleFramework.CreateWindow("Blockers - The Game");
    sampleFramework.CreateDevice(0, true, Framework.DefaultSizeWidth,
        Framework.DefaultSizeHeight, blockersEngine);

    // Pass control to the sample framework for handling the message pump and
    // dispatching render calls. The sample framework will call your FrameMove
    // and FrameRender callback when there is idle time between handling
    // window messages.
    sampleFramework.MainLoop();
}
#if (DEBUG)
catch (Exception e)
{
    // In debug mode show this error (maybe - depending on settings)
    sampleFramework.DisplayErrorMessage(e);
#else
catch
{
    // In release mode fail silently
#endif
    // Ignore any exceptions here, they would have been handled by other areas
    return (sampleFramework.ExitCode == 0) ? 1 : sampleFramework.ExitCode; // Return an error code here
}
What is going on here? First, notice that the entire code section is wrapped in a try/catch block, and the catch block varies depending on whether you're compiling in debug or release mode. In debug mode, any errors are displayed and then the application exits; in release mode, all errors are ignored and the application exits. Inside the block, the sample framework is first told to render in full-screen mode if you are not compiling in debug mode. This ensures that while you're debugging, the game runs in a window for easy debugging, but the finished build runs in full-screen mode, as most games do.
The next call might seem a little strange, but the basic goal of the call is to determine the behavior of the cursor while in full-screen mode. The first parameter determines whether the cursor is displayed in full-screen mode, and the second determines whether the cursor should be clipped. Clipping the cursor simply ensures that the cursor cannot leave the area of the game being rendered. In a single monitor scenario, it isn't a big deal either way, but in a multimon scenario, you wouldn't want the user to move the cursor to the other monitor where you weren't rendering.
The Initialize call sets up some internal variables for the sample framework. The three parameters to the call determine whether the command line should be parsed (no), whether the default hotkeys should be handled (no again), and whether message boxes should be shown (yes). You don't want the game to parse the command line or handle the default hotkeys because they are normally reserved for samples that ship with the DirectX SDK. The CreateWindow call is relatively self-explanatory; it creates the window where the rendering occurs with the title listed as the parameter.
Finally, the CreateDevice call is made. Notice that this is where you pass in the instance of the game engine class for the IDeviceCreation interface. Before the device is created, every device combination on your system is enumerated, and the IsDeviceAcceptable method that you wrote in the last chapter is called to determine whether each one is acceptable to you. After the list is built, the ModifyDevice method is called to let you modify any settings right before the device is created. The constructor used for the device creation looks like this:
public Device(
    System.Int32 adapter,
    Microsoft.DirectX.Direct3D.DeviceType deviceType,
    System.IntPtr renderWindowHandle,
    Microsoft.DirectX.Direct3D.CreateFlags behaviorFlags,
    params Microsoft.DirectX.Direct3D.PresentParameters[] presentationParameters
)
The adapter parameter is the ordinal of the adapter you enumerated earlier. In the majority of cases, you will use the default adapter, whose ordinal is 0. The deviceType parameter can be one of the following values:

- DeviceType.Hardware. A device that performs rasterization in hardware. This is the device type you want for a game because it is by far the fastest, and it is available on any system with a Direct3D-capable graphics card.
- DeviceType.Reference. A device using the reference rasterizer. The reference rasterizer performs all calculations for rendering in software mode. Although this device type supports every feature of Direct3D, it does so very, very slowly. You should also note that the reference rasterizer ships only with the DirectX SDK. End users will most likely not have this rasterizer installed.
- DeviceType.Software. A device that uses a pluggable software rasterizer. Unless a third-party software rasterizer has been registered on the system, this type isn't available.
Obviously, because you are dealing with graphics, you need someplace to actually show the rendered image. In this overload of the device constructor, the sample framework uses the window it has created, but you could pass in any control from the System.Windows.Forms library that comes with the .NET framework. Although it won't be used in this game, you could use any valid control, such as a picture box.
The next parameter is the behavior of the device. You might remember that in the last chapter you determined the best set of flags, including whether the device supported transforming and lighting in hardware and whether the device could be pure. You form the value of this parameter by combining the flags in Table 4.1.
Table 4.1. Device Behavior Flags

- CreateFlags.AdapterGroupDevice. Used for multimon-capable adapters. Specifies that a single device will control each of the adapters on the system.
- CreateFlags.DisableDriverManagement. Tells Direct3D to handle the resource management rather than the driver. In most cases, you will not want to specify this flag.
- CreateFlags.MixedVertexProcessing. Tells Direct3D that a combination of hardware and software vertex processing will be used. This flag cannot be combined with either the software or hardware vertex processing flags.
- CreateFlags.HardwareVertexProcessing. Tells Direct3D that all vertex processing will occur in hardware. This flag cannot be combined with the software or mixed vertex processing flags.
- CreateFlags.SoftwareVertexProcessing. Tells Direct3D that all vertex processing will occur in software. This flag cannot be combined with the hardware or mixed vertex processing flags.
- CreateFlags.PureDevice. Specifies that this device will be a pure device.
- CreateFlags.MultiThreaded. Specifies that this device may be accessed from more than one thread simultaneously. Because the garbage collector runs on a separate thread, this option is turned on by default in Managed DirectX. Note that there is a slight performance penalty for using this flag.
For this game, you should stick with either software or hardware vertex processing only, which is what the enumeration code picked during the last chapter. The final parameter of the device constructor is a parameter array of the PresentParameters class. You only need more than one of these objects if you are using the CreateFlags.AdapterGroupDevice flag mentioned earlier, and then you need one for each adapter in the group.
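You won't call this constructor yourself in Blockers, since the sample framework creates the device for you, but it helps to see what a manual creation might look like. Here is a minimal sketch, assuming a renderForm variable holds a System.Windows.Forms.Form you've already created (any control handle, such as a picture box, would also work); the flag and format choices shown are just examples, not the values the enumeration code would pick:

// Describe how the device should present its back buffer.
PresentParameters presentParams = new PresentParameters();
presentParams.Windowed = true;                           // render in a window while developing
presentParams.SwapEffect = SwapEffect.Discard;           // let the driver pick the fastest swap behavior
presentParams.EnableAutoDepthStencil = true;             // have Direct3D manage a depth buffer for you
presentParams.AutoDepthStencilFormat = DepthFormat.D16;  // a widely supported depth format

// Create a device on the default adapter (ordinal 0) with hardware vertex
// processing, rendering into the window whose handle is passed in.
Device device = new Device(0, DeviceType.Hardware, renderForm.Handle,
    CreateFlags.HardwareVertexProcessing, presentParams);

The sample framework performs essentially these steps for you, using the settings chosen by the enumeration code from the last chapter.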
Construction Cue
Before the device is actually created by the framework, you might notice that it turns off the event-handling mechanism for the Managed Direct3D libraries.
Before you go on, it's important to understand why turning off the event-handling model is a good idea. The default implementation of the Managed DirectX classes hooks certain events on the Direct3D device for every resource that is created. At a minimum, each resource (such as a texture or a vertex buffer) hooks the Disposing event, and it likely also hooks other events, such as DeviceLost and DeviceReset. This is done to manage object lifetimes automatically. Why wouldn't you want this great benefit in your application?
The main reason is that this benefit comes at a cost, and that cost could potentially be quite large. To understand this point, you must first have a sense of what is going on behind the scenes. You can look at this simple case, written here in pseudo-code:
SomeResource res = new SomeResource(device);
device.Render(res);
As you can see, this code looks harmless enough. You simply create a resource and render it. The object is obviously never used again, so the garbage collector should be smart enough to clean it up. That assumption is common, but it is incorrect. When the new resource is created, it hooks a minimum of one event on the device to allow it to clean up correctly, and this hooking causes two problems.
First, hooking the event allocates an EventHandler instance. Granted, the allocation is small, but as you will see in a moment, even small allocations add up quickly. Second, after the event is hooked, the resource holds a hard link to the device. In the eyes of the garbage collector, the object is still in use and remains in use for the lifetime of the device, or until the events are unhooked. Now imagine the earlier pseudo-code ran once every frame, that your game was pushing around a thousand frames per second, and that it ran for two minutes. You've just created 120,000 objects that will not be collected while the device is around, plus another 120,000 event handlers. All these objects cause memory consumption to rise quickly and force extra garbage collections, which hurts performance. If the resources live in video memory, you can be assured you will run out quickly.
This scenario doesn't even consider what happens when the device is finally disposed. In the preceding example, when the device is disposed, it fires the Disposing event, which has been hooked by 120,000 listeners. You can imagine that calling this cascading list of event handlers will take some time, and you'd be correct. It could take literally minutes and cause people to think the application has locked up.
You only want to use the event handling that is built in to Managed Direct3D in the simplest of cases. At any point where you care about memory consumption or performance (for example, in games), you want to avoid this process, as you've done in this example (or at the very least ensure that you are disposing of objects properly). The sample framework gives you the opportunity to do so.
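If you ever work against Managed Direct3D without the sample framework, you can opt out of the built-in hooking yourself and take on the cleanup explicitly. A minimal sketch, assuming the static Device.IsUsingEventHandlers switch is set before the device or any resources are created; the "scratch.png" file is just a hypothetical short-lived resource:

// Turn off Managed Direct3D's internal event hooking before creating the device.
// (The sample framework does the equivalent of this for you.)
Device.IsUsingEventHandlers = false;

// With the hooks disabled, you own the cleanup; dispose of short-lived resources
// explicitly instead of leaving them linked to the device until it is destroyed.
using (Texture scratch = TextureLoader.FromFile(device, MediaPath + "scratch.png"))
{
    device.SetTexture(0, scratch);
    // ... draw something that uses the texture ...
}   // Dispose runs here, so nothing keeps the texture alive for the device's lifetime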
You'll notice that the last method called in Main is the sample framework's MainLoop method. This is where you tell the sample framework that you're ready to run your application. MainLoop processes Windows messages and calls your rendering methods continuously until the application exits, and it will not return until then. From now on, all interaction with the sample framework comes through the events it fires and the callbacks it invokes.
A few times throughout the course of the code, you might need to know the name of the game. You can simply add this constant to your game engine class:
public const string GameName = "Blockers";
Time to Render
Well, you have your device created now, so you can start getting something rendered onscreen. You already have the rendering method where you can write all the code needed to render your scene, so add the code in Listing 4.6.

Listing 4.6. The Render Method
bool beginSceneCalled = false;

// Clear the render target and the zbuffer
device.Clear(ClearFlags.ZBuffer | ClearFlags.Target, 0, 1.0f, 0);

try
{
    device.BeginScene();
    beginSceneCalled = true;
}
finally
{
    if (beginSceneCalled)
        device.EndScene();
}

Before you can do any actual rendering, you most likely want to clear the device to prepare it for a new frame. In this case, clear the render target (ClearFlags.Target) to a particular color and the depth buffer (ClearFlags.ZBuffer) to a particular depth. The second parameter is the color you want to clear the device with. It can be either a member of the System.Drawing.Color object (or any of its many built-in colors) or an integer value in the format 0xAARRGGBB. In this format, each component of the color (alpha, red, green, and blue) is represented by two hexadecimal digits, ranging from 00 to FF, where 00 is completely off and FF is completely on. The preceding call uses 0, which is the same as 0x00000000, or everything completely off: black. If you want to clear to blue using an integer value, you use 0x000000ff; for green, 0x0000ff00; or any other combination you feel appropriate (a short example appears at the end of this section).

The third parameter is the new value of every member of the depth buffer. A common value here is to reset everything to a depth of 1.0f. The last parameter is for clearing a stencil, which hasn't been created for this game, so I do not discuss it at this time. After the device is cleared, you want to inform Direct3D that you are ready to begin drawing, which is the purpose of the BeginScene call. Direct3D sets up its internal state and prepares itself for rendering your graphics. For now, you don't have anything to render, so you can simply call EndScene to let Direct3D know you're done rendering. You must call EndScene for every call to BeginScene in your application, and you cannot call BeginScene again until after you've called EndScene. This is why the try/finally block ensures that the EndScene method is called for every BeginScene call.

Just because you've notified Direct3D that you're done rendering doesn't actually mean Direct3D updates the display. It's entirely possible for you to have multiple render targets, and you don't want the display updated until you've rendered them all. To handle this scenario, Direct3D splits up the ending of the scene and the updating of the display. The Present call handles updating the display for you; it is called automatically by the sample framework. The overload used in the framework updates the entire display at once, which is the desired behavior for your game. The other overload allows you to update small subsections of the display; however, for your beginning simple games, you never need this flexibility.

If you run the application now, you should notice that the window is cleared to a black color, and that's really about it. You could have the "background" of the game be this single color if you wanted, but that's not exciting. Instead, you should make the background of the game something called a "sky box." In the simplest terms, a sky box is a large cube with the inside of its six walls textured to look like the sky or whatever background your scene should have. It allows you to look around your scene from any direction and see a valid view of your background.
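As a quick illustration of the color parameter discussed above, here is what clearing to blue would look like, once with the integer form and once with a built-in color (a sketch only; Blockers keeps the black clear from Listing 4.6):

// Clear to blue using the 0xAARRGGBB integer form...
device.Clear(ClearFlags.ZBuffer | ClearFlags.Target, 0x000000ff, 1.0f, 0);

// ...or clear to blue using one of the built-in System.Drawing colors.
device.Clear(ClearFlags.ZBuffer | ClearFlags.Target, System.Drawing.Color.Blue, 1.0f, 0);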
Loading and Rendering a Mesh

Getting the sky box mesh loaded and drawn breaks down into four steps:

1. Determine the media path.
2. Declare the mesh variables.
3. Load the mesh and textures from files.
4. Render the mesh.
Step 1 is determining the media path. In the included CD, you
will notice a Blockers folder in the media folder. This folder includes
all the game media, so you need to copy it onto your hard drive somewhere (or
use the installation application from the CD). After you copy the media to your
hard drive, you need to add an application configuration file to your project,
much as you did in previous chapters. Choose Project, Add New Item, and add an
application configuration file to your project. Replace the Extensible Markup
Language (XML) that was automatically generated with the following:
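A standard appSettings entry is all the configuration file needs. Here is a minimal sketch; the path value is only a placeholder, so point it at wherever you actually copied the Blockers media:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <!-- Placeholder path: change this to the folder where you copied the media -->
    <add key="MediaPath" value="C:\Blockers\Media\" />
  </appSettings>
</configuration>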
Obviously, you want the MediaPath value to match
wherever you have copied the media on your hard drive. If you used the
installation program, the value shown here will work for you. With the
configuration file in your project, you need a variable to actually access this
key, so add this variable to your game engine class:
public static readonly string MediaPath = ConfigurationSettings.AppSettings.Get("MediaPath");
For Step 2, you declare the mesh variables. One of the media
files in the folder is an X file called level.x. The X files are used
to store geometry data, which can then be easily rendered with DirectX. This
particular X file is simply your sky box, with a series of textures, marking
each wall of the "level." In Direct3D, the Mesh class will be used to
hold the actual geometry data, and the Texture class will be used to
hold each texture. A single mesh can have 0 to many textures, so you want to
declare the following variables in your game engine class:
// The information for the mesh that will display the level
private Mesh levelMesh = null;
private Texture[] levelTextures = null;
As you can see, you have a single Mesh object that
will be used to store the geometry and an array of textures to store the texture
for the walls. Before continuing, however, you might find yourself wondering,
what exactly is a mesh? When rendering 3D graphics, everything that is rendered
onscreen consists of one to many triangles. A triangle is the smallest closed
polygon that will always be coplanar. (Rendering primitives that are not
coplanar isn't very efficient.) You can build virtually any object by using a
large number of triangles. A mesh is simply a
data store for this triangle (or geometry) data.
Step 3 is loading the mesh and textures from files. You are now
ready to load your mesh data into the variables you just created. Take the code
in Listing 4.7, and include it in the
OnCreateDevice method you've already written.
Listing 4.7. Loading the Sky Box Mesh
// Create the mesh for the level
ExtendedMaterial[] mtrls;
levelMesh = Mesh.FromFile(MediaPath + "level.x", MeshFlags.Managed,
    device, out mtrls);

// Store the materials for later use, and create the textures
if ((mtrls != null) && (mtrls.Length > 0))
{
    levelTextures = new Texture[mtrls.Length];
    for (int i = 0; i < mtrls.Length; i++)
    {
        // Create the texture
        levelTextures[i] = TextureLoader.FromFile(device,
            MediaPath + mtrls[i].TextureFilename);
    }
}
For the mesh you are creating, you
want to load this from a file, using the media path variable that was declared
earlier in this chapter. The second parameter of this call is any flag you might
want to pass in. Because this mesh is simply static, you can use the
MeshFlags.Managed flag, which informs Direct3D that it should manage
the mesh. You also need the created device when creating this mesh, so that is
what you pass in as the third parameter. The final parameter is extended
material information about the mesh. Because a mesh can have more than one set
of material information, it is returned as an array.
After the mesh is created, ensure that the extended material
array has members. If it does, you can create the texture array, scroll through
each of the members of the extended material array, and load the appropriate
texture from a file. Notice that the texture loading method has only two
parameters: the rendering device and the location of the file. The texture
filename normally doesn't include path information, so to ensure that it loads
the texture from the correct location, it's a good idea to include the media
path here as well.
Step 4 is rendering the mesh. The actual rendering of this mesh
is pretty simple. Take the code in Listing
4.8, and include it in your render method. Place the code between the calls
to BeginScene and EndScene to ensure that it is rendered
correctly.
Listing 4.8. Rendering the Sky Box Mesh
// First render the level mesh, but before that is done, you will need
// to turn off the zbuffer. This isn't needed for this drawing
device.RenderState.ZBufferEnable = false;
device.RenderState.ZBufferWriteEnable = false;

device.Transform.World = Matrix.Scaling(15, 15, 15);
device.RenderState.Lighting = false;

for (int i = 0; i < levelTextures.Length; i++)
{
    device.SetTexture(0, levelTextures[i]);
    levelMesh.DrawSubset(i);
}

// Turn the zbuffer back on
device.RenderState.ZBufferEnable = true;
device.RenderState.ZBufferWriteEnable = true;

// Now, turn back on our light
device.RenderState.Lighting = true;
There's actually quite a bit of new
stuff in here. First, if you remember the discussion on depth buffers earlier in
this chapter, you learned that the depth buffer stores the depth of each of the
objects in a scene. Because the sky box is a single object that will always be behind any other objects in the scene, you
can simply turn off the depth buffer and render it first. In many cases, simply
turning the depth buffer off (setting the ZBufferEnable render state to
false) is adequate; however, in some cases, drivers still write to the
depth buffer even if it is off. To handle this case, you can simply turn off
writing to the depth buffer as well.
Next, you want to actually size the sky box. The mesh that has
been loaded has all the triangle information stored in what is called object space. The mesh itself has no knowledge of any
other objects that might or might not be in a scene. To enable you to render
your objects anywhere in your scene, Direct3D provides a transform, which allows
you to move from one coordinate system to another. In this case, you want to
scale the sky box 15 units for each of the axes. (I discuss transforms more in
depth in later chapters.)
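The sky box needs only the scaling transform, but world transforms compose by matrix multiplication, so placing a scaled object somewhere else in the scene would look something like the following sketch (the numbers are arbitrary examples, not values used by Blockers):

// Scale the object to twice its size, then move it 5 units up and 10 units
// forward; with Direct3D's row-vector math the matrices apply left to right.
device.Transform.World = Matrix.Scaling(2.0f, 2.0f, 2.0f) *
    Matrix.Translation(0.0f, 5.0f, 10.0f);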
Because the sky box is textured, you don't want any lighting
calculations that the scene might need to affect the box. To ensure that no
lighting calculations are used when rendering the sky box, you simply set the
Lighting render state to false. Next, you are ready to draw
your mesh.
You'll notice here that you want to go through each texture in
your array. First, call the SetTexture method to let Direct3D know
which texture you expect to be using for rendering this portion of the mesh.
Then, you call DrawSubset on the mesh itself, passing in the index to
the texture inside the array you are currently rendering.
Adding a Camera to Your Scene
If you run the application, you notice that nothing looks
different. You still see a black screen and nothing else. You have all the code
in to load and render your sky box mesh, so why isn't it rendering? Direct3D
doesn't know exactly what it can render yet. In a 3D scene, you need to include
a camera so Direct3D knows which areas of the world it should render. Add the
code in Listing 4.9 to the
OnResetDevice method now.
Listing 4.9. Setting Up a Camera
// Set the transformation matrices
localDevice.Transform.Projection = Matrix.PerspectiveFovLH(
    (float)Math.PI / 4,
    (float)this.Width / (float)this.Height,
    1.0f, 1000000.0f);

localDevice.Transform.View = Matrix.LookAtLH(new Vector3(0, 0, -54),
    new Vector3(), new Vector3(0, 1, 0));
Here you set two transforms on the device that control the
camera. The first is the projection matrix, which determines how the camera's
lens behaves. You can use PespectiveFovLH method to create a projection
matrix based on the field of view for a left-handed coordinate system (which is
described later in Chapter 10, "A
Quick 3D-Math Primer").
By default, Direct3D uses a left-handed coordinate system, so
it is assumed from here on that you are as well. Direct3D can render in a
right-handed coordinate system as well, but that topic isn't important for this
book. Here is the prototype for the projection matrix function:
public static Microsoft.DirectX.Matrix PerspectiveFovLH(
    System.Single fieldOfViewY,
    System.Single aspectRatio,
    System.Single znearPlane,
    System.Single zfarPlane
)
The projection transform is used to describe the viewing
frustum of the scene. You can think of the viewing
frustum as a pyramid with the top of it cut off; the inside of the
pyramid is what you can actually see inside your scene. The two parameters in
the preceding function, the near and far planes, describe the limits of this
pyramid, with the far plane making up the "base" of the pyramid structure and
the near plane where you would cut off the top.
With the camera transforms set up, you might be wondering why you add the code
to the OnResetDevice method. The reason is that any time the device is
reset (for any reason, such as being lost), all device-specific state is lost.
This state includes the camera transformations. As a result, the
OnResetDevice method (which is called after the device has been reset)
is the perfect place for the code.
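Because OnResetDevice already receives the back buffer description (see Listing 4.2), you could just as easily derive the aspect ratio from the back buffer rather than from the window. A sketch, assuming you keep the device reference in localDevice as Listing 4.9 does:

private void OnResetDevice(object sender, DeviceEventArgs e)
{
    SurfaceDescription desc = e.BackBufferDescription;

    // Rebuild the projection matrix using the actual back buffer size.
    localDevice.Transform.Projection = Matrix.PerspectiveFovLH(
        (float)Math.PI / 4,
        (float)desc.Width / (float)desc.Height,
        1.0f, 1000000.0f);

    // The view matrix is the same one set in Listing 4.9.
    localDevice.Transform.View = Matrix.LookAtLH(new Vector3(0, 0, -54),
        new Vector3(), new Vector3(0, 1, 0));
}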
You might also be wondering why you are using events, and why
they would work, because earlier in this chapter you disabled the event
handling. The event handling that you disabled was only the internal Direct3D
event handling. The events are still fired and are available for you to consume
if needed.
Remember from earlier, you can't rely on the event handlers to
automatically clean up your objects for you. You need to make sure you dispose
of these new items you've created when the application shuts down. It's
important to clean up these objects because failure to do so results in memory leaks, which can cause system instability and
poor performance. Although the garbage collector can (and does) handle much of
this memory management for you, doing this cleanup here will give you much finer
control over the resources the system is using. Add the following code to your
OnDestroyDevice method:
// Clean up the textures used for the mesh level
if ((levelTextures != null) && (levelTextures.Length > 0))
{
    for (int i = 0; i < levelTextures.Length; i++)
    {
        levelTextures[i].Dispose();
    }
}

// Clean up the mesh level itself
if (levelMesh != null)
{
    levelMesh.Dispose();
}