Shooter Prototype


If you publish Flash games in some capacity and would like to fund further development based on this prototype, don't hesitate to contact me at nathan_AT_icecreambreakfast.com.


INSTRUCTIONS

WASD: Move
Arrow keys: Look around
Mouse: Click and drag the mouse to look around.
Tab: Toggle the map
Space: Shoot
KEYS 1-7: Change weapon
Mousewheel: Change weapon
Escape: Pause / Settings
'f': fullscreen, 'escape': leave fullscreen.

Note: In fullscreen mode, controls change. Use arrow keys to move/strafe and mouse to look around. This is due to Flash fullscreen limitations.
'm': mute/unmute sound (not in fullscreen)

GOAL: This is a graphics demo. You can explore and interact, but there is no goal.


Please wait for this to load - there's no progress bar. Click the Flash app a few times to give it input focus.
There's no way to restart, so press F5 to refresh the page to play again.

Design Notes


Some Stats


   This is a 562k Flash .swf file. I pared down its file size somewhat, but I didn't work too hard at that. Likewise, it uses something like 280 Megs of RAM. That could probably be brought down by a factor of 2 or 3 - I didn't keep much of an eye on that.

First Person in Flash


   Considering making a first person engine with Flash's software renderer? A bit of good advice - don't. It's not a great idea, for painfully obvious reasons. But if you're a masochist, read on.
   So what is going on here, technically? I'll explain, but let's add a little context, because there is an argument here, not just technical details.


A Mini-Essay: First Person Rendering in Games



   Let me assert that, in the history of real time graphics on commodity hardware, there is a clear dividing line for 3D in games.

1. The Rise of Rasterized Triangles, Our Default


   It happened in the mid-to-late 90's. It was marked by GLQuake, the 3dfx version of Tomb Raider, the rise of 3D acceleration video cards, and the Nintendo 64 and, later, Dreamcast. Broadly, this transition froze into hardware and massively sped up a key software abstraction. The hardware handled the rasterization of perspective correct textured triangles, transformed from 3D space to a 2D viewport, with a depth culling mechanism, usually a z-buffer, and special fast 3D video card memory for textures, possibly slow to update. This is simplifying and leaving out other important features, of course. But at core, a certain powerful 3D graphics abstraction was frozen into silicon, and massive performance boosts at reasonable prices were the result. Along with this came standard APIs for communicating with that hardware, specifically OpenGL and Direct3D.
   Over time, these APIs and the physical hardware evolved, adding features and possibilities. Shader programmability, render targets, and multiple rendering pipeline stages were added, among other features, and high performance techniques like deferred shading were worked out too.
   But cheap video card hardware that is incredible at rasterizing perspective correct textured/shaded triangles with a z-buffer remains the bedrock foundation of real time graphics for games.
   It's a hugely successful abstraction. It fits well with the artist workflows we've evolved. It sufficiently represents most arbitrary 3D objects we want to draw, although it does struggle with certain objects like hair and grass. It supports arbitrary 3D camera rotation and arbitrary 3D animating object rotation. It works well with hardware realities like caches and pipelining... And we get ever better performance at reasonable prices. In short, why we've gone down this road for 3D graphics is pretty obvious.
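   To make that frozen abstraction concrete, here is a minimal sketch of its innermost operation: transforming a camera-space point into a 2D viewport position and depth-testing it against a z-buffer. It's written in TypeScript purely for illustration; all names and constants are mine, not anything from an actual API or from this prototype. Real hardware of course does this per fragment across whole perspective-correct textured triangles, in parallel; the point is only to show the abstraction's shape.

    // A sketch of the core abstraction: project a camera-space point into the
    // viewport and plot it only if it is nearer than what the z-buffer already
    // holds. All names and constants are illustrative.
    type Vec3 = { x: number; y: number; z: number };

    const WIDTH = 320, HEIGHT = 240, FOCAL = 256;
    const zBuffer = new Float32Array(WIDTH * HEIGHT).fill(Infinity);
    const frameBuffer = new Uint32Array(WIDTH * HEIGHT);

    // Perspective-project a point already in camera space (camera at the
    // origin, looking down +z). Returns null for points behind the camera.
    function project(p: Vec3): { sx: number; sy: number; depth: number } | null {
      if (p.z <= 0) return null;
      const sx = Math.round(WIDTH / 2 + (FOCAL * p.x) / p.z);
      const sy = Math.round(HEIGHT / 2 - (FOCAL * p.y) / p.z);
      return { sx, sy, depth: p.z };
    }

    // The per-pixel half of rasterization: a depth test, then a write.
    function plot(p: Vec3, color: number): void {
      const s = project(p);
      if (!s || s.sx < 0 || s.sx >= WIDTH || s.sy < 0 || s.sy >= HEIGHT) return;
      const i = s.sy * WIDTH + s.sx;
      if (s.depth < zBuffer[i]) {    // nearer than anything drawn here so far?
        zBuffer[i] = s.depth;
        frameBuffer[i] = color;
      }
    }

    plot({ x: 10, y: 5, z: 40 }, 0xffff8800);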

2. Other Truly 3D Techniques


   So that's where we are and have been since that discontinuous break in the late 90's.
   3D in games prior to that, though, was an entirely different world.

   Some games were made entirely out of line art with no hidden surface removal, like Battlezone, Tempest, and Star Wars. They supported, at least in theory, arbitrary 3D camera transformations and arbitrary 3D object transformations. We could call them the beginning of the rasterized triangle's lineage.
   Also in that lineage were flat-shaded polygon games, like Hard Drivin', Virtua Racing, Virtua Fighter, F-15 Strike Eagle II, Star Fox (mostly), and S.T.U.N. Runner, or games with more complexly rendered untextured triangles, like X-Wing.

3. Other Techniques: Early Racing Games


   But developers evolved other interesting techniques too, which fell away by the late 90's.
   These techniques existed because they took advantage of specific hardware features and because of properties of the specific gameplay environments they drew. They were not general case solutions to 3D graphics.
   One important lineage came from racing games. They relied on special line scrolling raster effects for world / track drawing and used scaled sprites for cars, trees, billboards, and other objects. This was particularly clever given how slow the target machines were. But it was very constrained, with no looking up or down and not even any real camera rotation. This technique features in Pole Position, Outrun, and Enduro Racer. Similar techniques in action games feature in 3D World Runner and Space Harrier. The key observation is how constrained and special case the world / track rendering and camera were. That made the technique work. For more information, this website is a great resource.
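   As a concrete illustration of how constrained that trick is, here is a rough TypeScript sketch of the line-scrolling road idea: every screen row below the horizon is one slice of road, and curvature is faked by shifting each successive row sideways a little more. All names and constants are illustrative, not taken from any of those games.

    // A sketch of the line-scrolling road trick. Each screen row below the
    // horizon is one slice of road at some distance; curvature is faked by
    // shifting successive rows sideways a little more each row.
    const WIDTH = 320, HEIGHT = 240;
    const HORIZON = 100;                 // screen row where the road vanishes

    interface RoadSlice { y: number; centerX: number; width: number; }

    // curveAt(z) says how sharply the road bends at track distance z.
    function computeRoadSlices(trackPos: number, curveAt: (z: number) => number): RoadSlice[] {
      const slices: RoadSlice[] = [];
      let shift = 0;                     // accumulated sideways shift from curvature
      let shiftPerRow = 0;               // grows row by row, which is what draws the bend
      for (let y = HEIGHT - 1; y > HORIZON; y--) {
        const t = (y - HORIZON) / (HEIGHT - HORIZON);  // 1 at the bottom, ~0 at the horizon
        const z = trackPos + 1 / t;                    // higher rows are farther down the track
        shiftPerRow += curveAt(z);
        shift += shiftPerRow;
        slices.push({ y, centerX: WIDTH / 2 + shift, width: WIDTH * 0.9 * t });
      }
      return slices;   // a raster routine would then fill each slice as one horizontal span
    }

    // Example: a gentle, constant right-hand bend.
    const frame = computeRoadSlices(0, () => 0.02);
    console.log(frame[0], frame[frame.length - 1]);
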
   F-Zero, Super Mario Kart, and Pilot Wings feature a closely related approach. The world consists of a single, flat, textured plane drawn in perspective, with scaled sprites overlaid as cars, bushes, and obstacles. This technique allowed camera rotation around one axis, but the world was notably flat, a step back from the line scrolling hills. Like those other games, this technique only supported specific, limited kinds of levels.
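   The flat-plane approach is easy to state precisely: for each screen pixel below the horizon, invert the perspective to find the point on the ground plane it sees, rotate that point by the camera's yaw, and sample the track texture there. A hedged TypeScript sketch, with all names and constants invented for illustration:

    // A sketch of the single flat textured plane. For each screen pixel below
    // the horizon, invert the perspective to find the ground-plane point it
    // sees, rotate by the camera's yaw, and sample the track texture there.
    const WIDTH = 320, HEIGHT = 240, HORIZON = 100;
    const CAM_HEIGHT = 32;               // camera height above the ground plane
    const FOCAL = 160;                   // focal length in pixels

    function renderGroundPlane(
      camX: number, camY: number, yaw: number,
      sampleTrack: (u: number, v: number) => number    // texel lookup in the track bitmap
    ): Uint32Array {
      const out = new Uint32Array(WIDTH * HEIGHT);
      const cos = Math.cos(yaw), sin = Math.sin(yaw);
      for (let sy = HORIZON + 1; sy < HEIGHT; sy++) {
        // Every pixel on this row sees the ground at the same forward distance.
        const dist = (CAM_HEIGHT * FOCAL) / (sy - HORIZON);
        for (let sx = 0; sx < WIDTH; sx++) {
          const side = (dist * (sx - WIDTH / 2)) / FOCAL;  // sideways offset on the ground
          // Rotate (side, dist) by the camera's yaw and translate into world space.
          const u = camX + side * cos - dist * sin;
          const v = camY + side * sin + dist * cos;
          out[sy * WIDTH + sx] = sampleTrack(u, v);
        }
      }
      return out;
    }

    // Example: a procedural checkerboard standing in for a track texture.
    const checker = (u: number, v: number) =>
      ((Math.floor(u / 16) + Math.floor(v / 16)) & 1) ? 0xffffffff : 0xff202020;
    renderGroundPlane(0, 0, Math.PI / 6, checker);
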
   Comanche: Maximum Overkill and Magic Carpet had their own, similar approach. These games featured voxel engines for landscape drawing and overlaid camera-facing sprites and some very simple polygonal objects for in-world actors. This supported even more camera rotation and translation and was a pretty convincing effect. Levels and spaces still had very specific constraints, though.
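   A rough sketch of the heightmap idea, again in TypeScript with invented names and constants: for each screen column, march a ray outward across the heightmap and draw vertical slivers of terrain wherever the projected terrain top rises above everything drawn so far in that column.

    // A sketch of the heightmap ("voxel") landscape renderer. For each screen
    // column, march outward across the heightmap and draw vertical slivers of
    // terrain wherever the projected terrain top rises above everything drawn
    // so far in that column.
    const WIDTH = 320, HEIGHT = 240, FOCAL = 160, MAX_DIST = 600;

    function renderHeightmap(
      camX: number, camY: number, camHeight: number, yaw: number,
      heightAt: (x: number, y: number) => number,   // terrain height lookup
      colorAt: (x: number, y: number) => number     // terrain color lookup
    ): Uint32Array {
      const out = new Uint32Array(WIDTH * HEIGHT).fill(0xffd0b080);  // sky color
      for (let sx = 0; sx < WIDTH; sx++) {
        const angle = yaw + Math.atan2(sx - WIDTH / 2, FOCAL);
        const dx = Math.sin(angle), dy = Math.cos(angle);
        let lowestOpenRow = HEIGHT;            // lowest screen row not yet covered
        for (let dist = 1; dist < MAX_DIST && lowestOpenRow > 0; dist++) {
          const wx = camX + dx * dist, wy = camY + dy * dist;
          // Project the terrain top at this distance into a screen row.
          const screenY = Math.floor(HEIGHT / 2 + ((camHeight - heightAt(wx, wy)) * FOCAL) / dist);
          if (screenY < lowestOpenRow) {
            const color = colorAt(wx, wy);
            for (let sy = Math.max(screenY, 0); sy < lowestOpenRow; sy++) {
              out[sy * WIDTH + sx] = color;    // fill the newly exposed sliver
            }
            lowestOpenRow = screenY;
          }
        }
      }
      return out;
    }

    // Example: gentle procedural hills.
    renderHeightmap(0, 0, 60, 0,
      (x, y) => 25 * (Math.sin(x * 0.02) * Math.cos(y * 0.02) + 1),
      (x, y) => (Math.floor(x + y) & 1) ? 0xff3d8a3d : 0xff2f6b2f);
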

4. Other Techniques: Grid Based RPGs


   Role playing games had their own lineage here. The games in question all had first person views, were either turn-based or, later, step-based realtime, and involved steering a party of characters around a world. Movement snapped to a grid, and player cameras snapped to 90 degree viewing angles. Cameras couldn't look up or down. Artwork was predrawn as tiles, with various viewing angles and perspective accounted for. Early examples included Wizardry on the Apple II and The Bard's Tale on the Commodore 64, and later examples included Eye of the Beholder, Dungeon Master, and Lands of Lore. These later games overlaid animated, scaled sprites in the world for objects. Levels were constrained to grids with no elevation changes.
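   Because movement and rotation are locked to the grid, the renderer barely deserves the name: a view is just a handful of pre-drawn wall images chosen by which cells in a small, fixed view cone are solid, blitted back to front. A speculative TypeScript sketch of that selection step (the map layout, names, and three-cell-wide cone are my own assumptions, not taken from any of these games):

    // A speculative sketch of how a grid-based RPG picks which pre-drawn wall
    // images to blit: scan the few cells in a fixed view cone in front of the
    // party, farthest row first, and record a (depth, slot) pair for every
    // solid cell.
    type Facing = 0 | 1 | 2 | 3;                        // N, E, S, W
    const STEP = [[0, -1], [1, 0], [0, 1], [-1, 0]];    // grid delta per facing

    interface WallBlit { depth: number; slot: number; } // which canned image to draw

    function buildView(
      map: number[][],                 // 1 = wall, 0 = open floor
      px: number, py: number, facing: Facing,
      maxDepth = 3                     // how many rows of cells the artwork covers
    ): WallBlit[] {
      const [fx, fy] = STEP[facing];
      const [rx, ry] = STEP[(facing + 1) % 4];          // "right" relative to facing
      const blits: WallBlit[] = [];
      for (let depth = maxDepth; depth >= 1; depth--) { // far to near, so near walls overdraw
        for (const slot of [-1, 0, 1]) {                // left, center, right column
          const cx = px + fx * depth + rx * slot;
          const cy = py + fy * depth + ry * slot;
          if (map[cy]?.[cx] === 1) {
            blits.push({ depth, slot });                // blit the pre-drawn tile for this spot
          }
        }
      }
      return blits;
    }

    // Example: a tiny corridor, party at (1, 3) facing north.
    const dungeon = [
      [1, 1, 1, 1, 1],
      [1, 0, 0, 0, 1],
      [1, 0, 1, 0, 1],
      [1, 0, 0, 0, 1],
      [1, 1, 1, 1, 1],
    ];
    console.log(buildView(dungeon, 1, 3, 0));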

5. Other Techniques: Ray Casters


   Another crucial lineage emerged in the early 90's. It focused on real-time framerates, texture-mapped walls, cameras with 360 degree rotation, arbitrary player positions, and relatively complex indoor environments. No preceding techniques supported that combination of features. Achieving this involved specialized data structures and specific level constraints. This was the era of ray casting. Levels often had no elevation changes or slopes or walls with non-level edges, though the specific features depended on the engine. In-world objects were handled as scaled sprites. Good examples include Catacomb 3D, Wolfenstein 3D, Rise of the Triad, and Ultima Underworld. More recent, similar titles include Wolfenstein RPG, Doom RPG, and Orcs & Elves.
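   The core of a ray caster is compact enough to sketch. One ray is cast per screen column, stepped cell by cell through a 2D grid until it hits a wall, and the hit distance sets that column's wall height. This TypeScript sketch follows the standard grid-stepping (DDA) formulation; the names and constants are mine, not from any of the engines above.

    // A sketch of a grid ray caster. One ray per screen column is stepped cell
    // by cell through a 2D map (the standard DDA walk) until it hits a wall;
    // the hit distance sets that column's wall height.
    const WIDTH = 320, HEIGHT = 240, FOCAL = 160;

    interface Hit { dist: number; wall: number; }

    function castRay(map: number[][], px: number, py: number, angle: number): Hit {
      const dx = Math.cos(angle), dy = Math.sin(angle);
      let mapX = Math.floor(px), mapY = Math.floor(py);
      // Distance travelled along the ray to cross one whole cell in x or in y.
      const deltaX = Math.abs(1 / dx), deltaY = Math.abs(1 / dy);
      const stepX = dx < 0 ? -1 : 1, stepY = dy < 0 ? -1 : 1;
      // Distance along the ray to the first x-boundary and first y-boundary.
      let sideX = (dx < 0 ? px - mapX : mapX + 1 - px) * deltaX;
      let sideY = (dy < 0 ? py - mapY : mapY + 1 - py) * deltaY;
      let hitOnX = true;                       // did the last step cross an x-boundary?
      for (;;) {
        if (sideX < sideY) { sideX += deltaX; mapX += stepX; hitOnX = true; }
        else               { sideY += deltaY; mapY += stepY; hitOnX = false; }
        const wall = map[mapY]?.[mapX] ?? 1;   // treat out-of-bounds as solid
        if (wall !== 0) {
          return { dist: hitOnX ? sideX - deltaX : sideY - deltaY, wall };
        }
      }
    }

    // One frame: a wall-slice height for every screen column.
    function renderColumns(map: number[][], px: number, py: number, yaw: number): number[] {
      const heights: number[] = [];
      for (let sx = 0; sx < WIDTH; sx++) {
        const rayAngle = yaw + Math.atan2(sx - WIDTH / 2, FOCAL);
        const hit = castRay(map, px, py, rayAngle);
        const corrected = hit.dist * Math.cos(rayAngle - yaw);  // undo the fisheye distortion
        heights.push(Math.min(HEIGHT, FOCAL / corrected));
      }
      return heights;
    }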

6. Other Techniques: Doom Era


   Developers naturally evolved ray casting into more complex forms for more powerful machines. These engines still had special case, constrained levels, but the levels included far more features. Engines added lighting, walls at arbitrary angles, slopes, changing elevation, hacked floor-over-floor, raising and lowering platforms, and more. Still, level geometry continued to be very constrained. Objects were still scaled sprites. Walls were effectively constrained texture-mapped quadrilaterals by this point. Examples include Doom, System Shock, Duke Nukem 3D, Outlaws, Dark Forces, and Daggerfall. This lineage, once it made the jump to true 3D primitives, was the basis of the character action and FPS games we see today.
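   One way to see the generalization is that a wall stops being a fixed-height slice and becomes a vertical span whose top and bottom come from per-sector ceiling and floor heights, projected by distance. A small, hedged TypeScript sketch of just that projection (names and numbers are illustrative, not from any of these engines):

    // A sketch of the generalization: a wall becomes a vertical span whose top
    // and bottom come from per-sector ceiling and floor heights, projected by
    // the wall's distance along this screen column's ray.
    const HEIGHT = 240, FOCAL = 160;

    interface WallSpan { top: number; bottom: number; }

    // eyeZ: camera height; ceilZ / floorZ: the sector's heights; dist:
    // perpendicular distance to the wall for this column.
    function projectWallSpan(eyeZ: number, floorZ: number, ceilZ: number, dist: number): WallSpan {
      const mid = HEIGHT / 2;
      const top = mid - ((ceilZ - eyeZ) * FOCAL) / dist;
      const bottom = mid - ((floorZ - eyeZ) * FOCAL) / dist;
      return {
        top: Math.max(0, Math.floor(top)),
        bottom: Math.min(HEIGHT, Math.ceil(bottom)),
      };
    }

    // Example: a wall 64 units away in a sector with floor 8 and ceiling 72,
    // seen from an eye height of 41.
    console.log(projectWallSpan(41, 8, 72, 64));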

7. Other Techniques: Alpha Sprites as the Base 3D Primitive


   One more interesting lineage deserves recognition. Sega spearheaded it in arcades from the late 80's to the mid 90's. Its core was using hundreds or thousands of screen-oriented transparent sprites, with scaling and possibly rotation, to draw the entire world. Those sprites worked as primitives, with larger structures constructed from them. Either the software or the hardware had to be able to sort those objects back to front. Examples include Power Drift, Afterburner 2, the behind-the-back sections of Thunder Blade, and, especially, Rail Chase, Jurassic Park, and Galaxy Force 2.
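   The whole frame loop for such an engine is short enough to sketch: transform every sprite into camera space, drop whatever sits behind the camera, sort the remainder back to front, and draw each one scaled by the reciprocal of its depth. The TypeScript below is illustrative only; drawScaled stands in for whatever blitter the hardware or platform provides.

    // An illustrative sketch of a sprites-as-primitives frame: transform every
    // sprite into camera space, discard what is behind the camera, sort the
    // rest back to front, and draw each one scaled by 1 / depth.
    type Vec3 = { x: number; y: number; z: number };

    interface WorldSprite { pos: Vec3; image: string; baseSize: number; }

    const WIDTH = 320, HEIGHT = 240, FOCAL = 256;

    function renderSprites(
      sprites: WorldSprite[], camPos: Vec3, yaw: number,
      drawScaled: (image: string, sx: number, sy: number, scale: number) => void
    ): void {
      const cos = Math.cos(yaw), sin = Math.sin(yaw);
      const visible = sprites
        .map(s => {
          // Move into camera space: translate, then rotate by the camera's yaw.
          const dx = s.pos.x - camPos.x, dy = s.pos.y - camPos.y, dz = s.pos.z - camPos.z;
          return { sprite: s, cx: dx * cos - dz * sin, cy: dy, cz: dx * sin + dz * cos };
        })
        .filter(s => s.cz > 1);                 // drop anything behind (or on top of) the camera
      visible.sort((a, b) => b.cz - a.cz);      // painter's algorithm: farthest first
      for (const s of visible) {
        const scale = FOCAL / s.cz;
        drawScaled(
          s.sprite.image,
          WIDTH / 2 + s.cx * scale,             // projected screen x
          HEIGHT / 2 - s.cy * scale,            // projected screen y
          (s.sprite.baseSize * scale) / FOCAL   // on-screen size falls off with depth
        );
      }
    }

    // Example: two trees and a rock, drawn with a logging stand-in for a blitter.
    renderSprites(
      [
        { pos: { x: -20, y: 0, z: 300 }, image: "tree", baseSize: 64 },
        { pos: { x: 35, y: 0, z: 120 }, image: "tree", baseSize: 64 },
        { pos: { x: 0, y: -10, z: 60 }, image: "rock", baseSize: 32 },
      ],
      { x: 0, y: 0, z: 0 }, 0,
      (image, sx, sy, scale) => console.log(image, sx.toFixed(1), sy.toFixed(1), scale.toFixed(2))
    );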

8. An Observation about These Techniques


   Hardware-accelerated graphics cards essentially killed these other techniques in the late 90's. Thus, the last time we saw these ideas in action, 1) they ran at low resolutions like 320x244, 2) they used limited color modes like 8-bit palettized color or finicky palettized 14-bit color, 3) they used only 1-bit alpha, so pixels were fully transparent or fully opaque, 4) any scaling or rotation suffered from terrible aliasing because the hardware or algorithms used lacked any smoothing or bilinear filtering, 5) real time filtering like Gaussian blurs certainly wasn't an option, 6) the machines executing these techniques had no more than a 66 MHz processor, often less, and 7) those machines also had no more than 8 Megs of RAM, and often much, much less. These are generalizations, of course. This raises a question: might some of these techniques be more interesting than we remember them, particularly given 8-bit alpha, 24-bit color, higher resolutions, some filtering, and mind-boggling amounts of RAM?

9. Flash in this Context


   Flash 9 with Actionscript 3 has been an ideal technology for exploring that question. Flash in the browser has access to huge amounts of RAM on host machines, 8-bit alpha, 24-bit color, a good quality 2D renderer that is slow for real time drawing but useful for clever caching, access to useful filters, higher resolutions, and so on. Exploring updated versions of these older techniques suits Flash hand in glove.
   Especially relevant, until recently Flash hasn't allowed access to hardware acceleration. Its software renderer can blit, but it has no support for rasterizing textured triangles.
   Reviving one of these old techniques for new hardware is exactly what my prototype shows here. Specifically, my approach is an updated version of the technique in Galaxy Force 2 and other sprites-as-primitive games. Maybe someday I'll get to explore using modern hardware acceleration and higher resolution source art with this technique; it's been great fun to work with.
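   One concrete way to use that "slow but cacheable" renderer is to pay for smoothed, high-quality scaling only once per sprite and scale bucket, and let the per-frame work be plain copies. The TypeScript sketch below shows that idea with invented names; it is not the prototype's actual code, though a Flash version would presumably hold BitmapData objects behind a similar cache.

    // A sketch of the caching idea: pay for the slow, smoothed, high-quality
    // scaling once per (sprite, quantized scale) pair, keep the result, and let
    // the per-frame path do only cheap copies. renderQuality stands in for the
    // host's slow rendering path.
    interface CachedImage { width: number; height: number; pixels: Uint32Array; }

    const scaleCache = new Map<string, CachedImage>();

    // Snap scales to a small set of buckets so the cache stays bounded.
    function quantizeScale(scale: number, steps = 32): number {
      return Math.max(1, Math.round(scale * steps)) / steps;
    }

    function getScaled(
      spriteId: string, scale: number,
      renderQuality: (spriteId: string, scale: number) => CachedImage   // the slow, smoothed path
    ): CachedImage {
      const q = quantizeScale(scale);
      const key = `${spriteId}@${q}`;
      let img = scaleCache.get(key);
      if (!img) {
        img = renderQuality(spriteId, q);   // pay the expensive cost once...
        scaleCache.set(key, img);
      }
      return img;                           // ...so each frame's draw is just a blit of this
    }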

One Last Observation on Rendering Approaches


   Coming from the land of hyper-constrained resources that was game development in the 80's and 90's, throwing around the amount of RAM I am to get the results shown here can seem... well, borderline immoral.
   But the truth is, if a player is playing a high quality Flash game, and it's occupying their attention entirely, right now there's actually a reasonable chance they have that amount of RAM sitting around going unused anyway. It goes against old instincts, but it is true.
   And that leads me to a broader point. The history of game technology has been peculiar in that almost all constraints have improved all at the same time. The Atari VCS had almost no RAM (128 bytes), a terrible processor, no hard drive, tiny ROMs, and no modem / ethernet connection. It was bad at everything all at once. A PC in the early 90's might've had an 80 Meg hard drive, 4 Megs of RAM, a 33 MHz processor, no meaningful graphics hardware accelerator, and a slow modem. All of those traits have been boosted hugely for a modern machine.
   One consequence of this particular evolution of technology is that we haven't (outside of places like the demoscene) seen aggressive work done on exploring techniques that make the most of sharply asymmetrical constraints in games. We never really had an era where people were making games that assumed the computational power of a modern GPU but only 64k of RAM. We never really had an era where game makers were working with the computational power and graphics constraints of a Commodore 64, but had gigabit ethernet connections streaming massive amounts of data in real time.
   Unusually, widely deployed browser-based Flash games really did represent such an era in many ways, as I've detailed above. It's possible, as mobile devices acquire more and more RAM (as of 2013, plenty ship with 2 Gigs of RAM) but battery life remains a crucial constraint, that rendering techniques and art styles that heavily prioritize caching and blitting over more complex per frame computations might have some very important use cases. A similar case could be made for laptop batteries, too. But I haven't actually done any real tests on this topic.