The Function of Software

A computer is, at bottom, electronic circuitry and mechanical components connected together. Software supplies the instructions that tell these various components how to operate and function together to complete a task or objective, and in what sequence and for what duration each function runs. Software takes the form of a program, which is a series of instructions.

The microprocessor of the computer responds only to on/off electrical charges that represent simple instructions as binary patterns. These 1s and 0s are known as machine code; they encode the addresses of data locations and simple operating instructions (add, subtract, etc.). Machine language is this code, and it is the first software level above the actual circuitry and processing hardware itself. Line after line of machine code is decoded by the microprocessor and its various sub-systems, which execute the instructions, access memory locations for data, and then manipulate that data to produce the desired end result.

However, rather than write code in binary machine language, the author of a program uses an alphanumeric language that is easier for humans to read, producing source code that can be translated into machine code by either an interpreter application or a compiler application. At a basic level, the statements of these languages correspond to pre-determined binary patterns designed for the construction of a specific microprocessor; more recently, higher-level languages target a CPU platform or architecture. Some languages are structured to be interpreted into machine language, others to be compiled into it, and some can be both.

An interpreter works with alphanumeric programming languages a level above machine language. It works through the source one line at a time: each line is translated into executable machine code and immediately executed before the next line is considered, bypassing the requirement that the entire program be converted to machine code prior to execution. The process is less efficient because each line is interpreted and executed individually, and if a line repeats, the interpreter must translate that same line again. An interpreter is useful for debugging, as it shows exactly where a bug (especially a syntax error) occurs: the offending line fails to execute and the program stops running.
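The line-at-a-time cycle can be sketched with a toy interpreter for an invented three-command language (SET / ADD / PRINT); the language and the `run` function are purely illustrative, not any real interpreter's design.

```python
# A toy interpreter: each line of source is translated and executed
# immediately, one at a time, before the next line is even examined.
def run(source):
    env = {}                      # variable storage
    outputs = []
    for line in source.splitlines():
        parts = line.split()
        if not parts:
            continue
        op = parts[0]
        if op == "SET":           # SET x 5
            env[parts[1]] = int(parts[2])
        elif op == "ADD":         # ADD x 3
            env[parts[1]] += int(parts[2])
        elif op == "PRINT":       # PRINT x
            outputs.append(env[parts[1]])
        else:                     # a syntax error stops the program
            raise SyntaxError(f"bad line: {line!r}")  # right at the bad line
    return outputs

print(run("SET x 5\nADD x 3\nPRINT x"))  # [8]
```

Note how a line inside a loop would be re-translated on every repetition, which is exactly why interpretation tends to be slower than running pre-compiled code.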

A compiler can also translate alphanumeric source code to machine language. The difference from an interpreter is that the entire program is translated to machine code (compiled) by the compiler program prior to execution; all lines are then executed in sequence rather than alternating between line-by-line translation and execution as with an interpreter. Compiled source code is also alphanumeric and can be written in abstract terms rather than referring to actual parts of the microprocessor, so at this level the source file is independent of any specific processor: it is a universal instruction file understood by different programmers on different systems. It is the compiler program (not the programming language) that must be specifically tailored to the individual microprocessor architecture. The compiler's output still tends to be slightly less efficient than hand-written machine code, because the programming language itself must be generalized to be flexible enough to solve different types of programming objectives.

Assembly language, one of the original low-level programming languages, is an alphanumeric code that is converted into machine code by an assembler program. There is normally an intervening step in which the lines of code (the character strings), again known as the source code, are converted by the assembler to executable object code (again, machine code). These languages side-step the need for a compiler and are prevalent in code written for the computer's processors and cards (circuit boards).

Most programs contain a kernel, which is the main program. The main program is written to call up additional sub-programs, routines and sub-routines that expand its capabilities. For instance, in a digital audio application, the kernel, sub-programs and routines are all loaded into RAM when the application is opened. If you wish to search your directory for a saved file, the kernel calls up the necessary programs to respond to the keystrokes, search the directory, import a file in a certain format, open dialog boxes, adjust the graphic representation on the screen, communicate with the drivers of the hard drive and sound card, load the file, and then play the file.
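The kernel-and-subroutine structure can be sketched as a small dispatcher: the main program does little itself and simply calls the specialised routine each action requires. The routine names and the dispatch table here are invented for illustration.

```python
# Hypothetical sketch of a "kernel" delegating work to sub-routines.
def search_directory(name):
    # stand-in for the real directory-search routine
    return f"found {name}"

def load_file(name):
    # stand-in for the real file-loading routine
    return f"loaded {name}"

ROUTINES = {
    "search": search_directory,
    "load": load_file,
}

def kernel(action, argument):
    # The main program calls up the right sub-program for the request.
    return ROUTINES[action](argument)

print(kernel("search", "song.wav"))  # found song.wav
```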

Each programming language has its own unique syntax. However, there are two overall approaches to designing a program to obtain the best possible result: structured programming and object-oriented programming (OOP). The structured programming approach holds that a program should be written in an exact, sequential order, which in turn results in a clear, concise and accurate program. Object-oriented programming encourages the reuse of code within the program.

The original low-level programming languages consisted of short statements, compiled quickly to machine code, and did not require many computer resources to run. Today, many languages are ANSI-compliant (American National Standards Institute), meaning they follow a standard set of commands for the language recognized by that organization.

Writing a program is about creating code that will receive and accept inputs, process those inputs, and produce a desired output. The programming language is used to write unambiguous statements (YES / NO) that process the data as it moves through the program. Since the programming language is used to create the code, in many ways the two are synonymous. In approaching the actual coding, however, one must first define what the output should be and what logical steps will process the data input to obtain that output.
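The input-process-output shape is visible even in the smallest program. A minimal sketch, using a temperature conversion chosen purely as an example:

```python
# Input -> process -> output, the basic shape of any program.
def fahrenheit_to_celsius(f):
    # process: an unambiguous arithmetic rule applied to the input
    return (f - 32) * 5 / 9

# input: 212; output: the converted value
print(fahrenheit_to_celsius(212))  # 100.0
```

Defining the desired output first (degrees Celsius) is what dictated the processing step (the conversion formula).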

An editor (either a line editor or a full-screen editor) is an application used to type the lines of text of the programming language (the code), edit those lines, and save a copy of the program to the hard drive. A line editor accomplishes exactly what its name suggests: it allows one to write or edit one specific line of code at a time. A full-screen editor allows one to move through the length and width of the code within the editing area using the cursor keys or mouse clicks, and also includes drop-down menus for basic functions (open file, save, copy, paste, print). An editor can also have a compiler or interpreter application incorporated into it, so one can immediately compile the source code, run the program, and observe the output for any errors.

A programming language contains some basic, fundamental common items (constructs): sequence (a series of instructions uninterrupted by a decision or loop), decision (a test with two possible answers, each leading to its own code sequence to the exclusion of the other), branching (a jump based on a decision or a GOTO) and looping (repetition of a code sequence governed by a decision).
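All four constructs can appear in a few lines. A sketch (the function and its threshold are invented for illustration):

```python
# The four constructs in one short function.
def describe(numbers):
    total = 0                 # sequence: statements run one after another
    for n in numbers:         # looping: repeat the body for each item
        total += n
    if total > 10:            # decision: one of two exclusive outcomes
        return "big"          # branching: control jumps out of the function
    return "small"

print(describe([3, 4, 5]))    # big
```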

The higher-level languages popular with developers today are Visual Basic, C, C++, C#, Visual C++ (a C and C++ compiler), Visual J++ and Java. These languages are used to write stand-alone program source code, which must be stepped down to machine language either in one step by a compiler or line by line by an interpreter.

Essential to writing programs are APIs (Application Program Interfaces). APIs are pre-written routines that can be used as reusable templates when writing a program to operate with a specific operating system or another application. Every operating system provides APIs to programmers: Unix (POSIX), Linux, MS Windows (Win32 and MFC) and Apple Mac (Classic, Carbon, Cocoa). An API saves the programmer time by allowing them to insert an API call into their application, removing the need to rewrite the specific routine. APIs also give applications consistency: this is why so many MS Windows-based applications have similar-looking dialog boxes and drop-down menus. Similarly, DirectSound is a component of the DirectX API that allows audio-related applications to work with many types of sound cards. The audio application is written with DirectX API calls to the appropriate library file (.dll file), which is in turn compatible with the hardware; the programmer does not need to be familiar with the sound card, just with the DirectX API (as the API either translates commands into machine code directly or transmits data to the hardware's specific driver). Because the Creative Labs SoundBlaster is so prevalent as the factory-installed sound card from several computer manufacturers, there is also an EAX extension to the DirectSound API that allows programmers to write applications that take advantage of that card's attributes.
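The time-saving aspect of an API call can be sketched with Python's standard library playing the role of the API: one call to `os.listdir` replaces a hand-written directory-scanning routine, and the programmer never touches the file-system internals. The `wav_files` helper and the file names are invented for illustration.

```python
import os
import tempfile

def wav_files(directory):
    # One API call (os.listdir) does the directory walk; we only
    # filter its result, never rewriting the scanning routine itself.
    return [f for f in os.listdir(directory) if f.endswith(".wav")]

# Demonstration in a throwaway temporary directory.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "take1.wav"), "w").close()
    open(os.path.join(d, "notes.txt"), "w").close()
    print(wav_files(d))  # ['take1.wav']
```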

Recent web-authoring scripting languages that are embedded in HTML documents (and execute within the client-side browser) should not be confused with programming languages (although they can perform some sophisticated functions). These include DHTML (which combines CSS / Cascading Style Sheets and JavaScript), Java applets (written in Java), JavaScript (which is an exception), VBScript (for executing Visual Basic ActiveX applications), XHTML (which combines HTML with XML syntax), XSL (XML style sheets), XSLT and SMIL. Server-side scripting languages include CGI scripts (written in Perl, AppleScript, C++ or Visual Basic), ASP (Active Server Pages), ColdFusion, Java servlets (written in Java), JSP (Java Server Pages), PHP and Perl, all of which can be used to query and/or update a database on the server.

Programming and scripting language development also continues to move steadily in the direction of enterprise computing, the server-client relationship that allows a desktop computer to run applications while accessing data and additional resources from a network. As the Internet is a network and has become ever more important for business, presentation, information and communication, one focus of programming languages has become Web Services: making applications and data operate seamlessly across desktops, local networks, mobile hardware and the internet. In addition, Web Services coding should be less complex than C or C++ and should have reusable components. Web programming languages and web application development tools for Web Services include PHP, SOAP, UDDI and WSDL (the latter three being cornerstones for integrating applications and sharing data across the internet), ASP.NET, ADO.NET, VC++.NET, VB.NET, Perl, Microsoft .NET (the Eiffel programming language is integrated into Microsoft .NET) and IBM WebSphere Studio.

XML (Extensible Markup Language) is a markup language used to describe the content encased in HTML tags (or to eliminate HTML and model data with XML alone) and make that content available over networks for processing by any software application. XML can be used to model audio data. Unfortunately, various versions of XML are developing, which undermines its strength and purpose as a unifying language. XML is being modified by various disciplines (scientific, economic, manufacturing, etc.). For instance, several XML-based business standards are in development: BPML (Business Process Markup Language), developed by BEA Systems, CSC, SAP and Sun Microsystems, allows a company to model and define every function in its entire business process. Any company that utilizes BPML thus has data that is interchangeable with that of another BPML company, even if the two are engaged in entirely different products and services. Similarly, there is BPEL (Business Process Execution Language), developed by BEA, IBM and Microsoft. BPEL is based on the combination of Microsoft's XLANG (Extensible Language) and IBM's WSFL (Web Services Flow Language), which had themselves been recent and developing languages, so one can see how quickly the standard is changing. The whole idea behind XML-based languages is similar to having an open standard: write an application once and have it run on any operating system.


The sensory scope of virtual reality systems

The sensory scope of virtual reality systems is determined by how many of the human senses are engaged.  The number may be weighted by whether the senses included are “high bandwidth” or “low bandwidth” in nature. Vision, hearing and touch have a higher capacity for rapid, complex transmission and thus can be viewed as high bandwidth senses for communication between humans and computers.  Thus it is not surprising that these three senses have dominated virtual reality systems.  In comparison, the senses of taste and smell are relatively low bandwidth senses and few virtual reality systems engage them.  The sensory scale of virtual reality systems is the degree of sensory bandwidth that is engaged by communication between humans and computers.  This includes both the size of the signal relative to total human perception and the realism of that signal.
Vision is the single most important human sense and three-dimensional depth perception is central to vision.  Thus, three-dimensional perception is critical for immersive virtual reality.  Human eyes convert light into electrochemical signals that are transmitted and processed through a series of increasingly complex neural cells.   Some cells detect basic object and image components such as edges, color, and movement.  Higher-level cells combine these image components and make macro-level interpretations about what is being seen.   Cues that humans use for three-dimensional perception are based on this processing system and can be categorized into three general areas: interaction among objects; the geometry of object edges; and the texture and shading of object surfaces.
Many cues for three-dimensional perception come from interaction among objects.  Key attributes of these interactions are overlap, scale, and parallax.    Objects that overlap on top of other objects are perceived as closer.  Objects believed to be similar in actual size but appearing larger are perceived as closer and objects that grow in apparent size are perceived as moving closer.  Objects that move a greater distance relative to other objects when the viewer’s head moves are perceived as closer.
Parallax vision (or stereoscopic vision) comes from the fact that human eyes see real-world objects from two different angles. Eye muscles and neural processing in the human brain work together to combine these two different images into the perception of a single image with three dimensions. Muscles in each eye change the shape of the lens to focus at the distance of the object viewed. Other muscles change the orientation of the eyes so that the lines of vision from the two eyes intersect at that same distance. In real-world vision, these two muscle functions work in harmony. In virtual reality, they may conflict. When images are displayed very far away, the size of the screen required for immersion is prohibitively large and it is difficult to present different images to each eye. When images are displayed very close to the eyes, extremely high image resolution is required and the two muscle functions of the eyes tend to conflict.
One method to have the eyes see different images on a distant screen is to have eyes view the screen with different polarized filters.  This is how “3D glasses” work in movies.   The interaction of the polarized filters with colors or other attributes of the image on the screen shifts the images, causing different perspectives and depth perception.  However, this method has significant limitations.
Another method to present the eyes with different images is to use “shutter glasses.” Shutter glasses alternately block the image from one eye and then the other, in synchronization with images from two different perspectives shown successively on a single screen. When the alternating images are shown in sufficiently rapid succession, the brain combines the two images into a single three-dimensional image. Most head-mounted displays (HMDs) used in virtual reality are some type of helmet that includes: some version of shutter glasses; a relatively close high-resolution screen whose image spans more than 60 degrees of the field of vision and moves with head motion; and a mechanical, optical, magnetic or other mechanism to track head motion.
An object’s edges separate it from the environment.  The geometry of these edges also provides perceptual cues about its three-dimensionality. The outer edges of an object form its outline and are the bridge between interaction among objects (including overlap, scale, and parallax as discussed above) and the internal orientation of the object. An object’s inner edges bridge the outer boundaries of the object and its inner surfaces and textures.  Together, the outer and inner edges of an object provide powerful cues about its three-dimensional size, location, orientation, and movement.
Early three-dimensional graphics used the basic geometry of object edges, generally combinations of straight lines, to create moving three-dimensional, transparent “wire” figures.   Although three-dimensional graphics are now much more sophisticated, the underlying geometry of object edges remains central to three-dimensional rendering.
An object’s surfaces are in the spaces within its edges. In addition to the interaction among objects and the geometry of object edges discussed above, the texture and lighting of an object’s surfaces also provide important cues for three-dimensional perception.  One of the most important aspects of three-dimensional perception of surfaces is how they interact with light.  Humans are accustomed to viewing objects illuminated from above by the sun and thus most readily interpret the three-dimensionality of objects lit from above by a single light source.  Nonetheless, illumination from multiple light sources or from directions other than above can also convey three-dimensionality if done consistently.

“Texture mapping” is an efficient method to create surfaces for three-dimensional virtual objects by overlaying basically two-dimensional texture gradients on object surfaces. Depth perception of these surfaces can then be refined through the use of shading and reflected light. “Ray tracing” takes light reflection to a high level by tracking individual rays of light as they reflect among objects and ultimately bounce from object surfaces to the viewer. Texture mapping, light shading, and ray tracing are computationally intensive, particularly for complex virtual environments with moving objects. Fortunately for the sake of computational economy, humans do not track as much visual detail in moving objects as in stationary objects. Thus, computational effort in virtual reality can be conserved without significant loss in perceptual realism by rendering the surfaces of moving objects in less detail than the surfaces of stationary objects.
The essence of virtual reality is fooling the human body into perceiving things that are not real.  From this perspective, it is not surprising that the human body can respond negatively, particularly when it receives conflicting signals from different senses and is not entirely fooled.  With respect to vision, one problem with current VR imaging systems is conflict between eye focus (adjusting the lens of each eye at the apparent distance of the object viewed) and eye axial convergence (coordinating the orientation of both eyes to intersect lines of sight at the apparent distance of the object).   This problem is more acute for HMD systems in which images are displayed relatively close to the eyes.
Another problem is latency (a lag) between the kinetic motion signals that the brain receives from the semicircular canals of the inner ear and the visual motion signals that the brain receives from the eyes.  When there is a lag in visual image processing, then the body receives signals of motion from kinetic senses in real-time but signals of motion from vision after the lag.
Eye focus conflict and virtual image latency can cause eye strain, disorientation, nausea and even long-run health problems.  These symptoms are called “Simulation Adaptation Syndrome” or SAS.  Females tend to experience greater SAS than males.
People can adapt to virtual reality to some extent.  Also, SAS is generally less severe when people are exposed to immersive virtual reality gradually through a series of sessions.  The sessions start out only a couple minutes long and then gradually increase in duration, with real world intermissions between sessions.
With current technology it is difficult to avoid these problems.  However, these problems may eventually be greatly reduced by evolving technologies such as: external imaging systems with variable distance imaging (such as domes with multiple layers of translucent screens), holographic imaging (with three-dimensional images projected in mid-air), or direct internal body imaging (projecting images directly onto the retinas or direct neural-coded transmission from a computer to the optic nerve or neural centers in the brain).

What’s Virlink?

Virlink is a new site focusing on access to distributed Virtual Reality (VR) applications through the internet. Humans interact with computers in various ways.  Even just viewing a computer screen and typing on a keyboard is one such interaction.  However, human-computer interactions only qualify as virtual reality when they are immersive, interactive and intelligent in nature.  Virtual reality is an immersive computer-simulated environment within which humans interact with computer-generated objects in a manner governed by sufficient computer intelligence that the interaction seems realistic.
A virtual reality environment must engage key human senses with enough accuracy to give participating humans a sense of being in a real setting.  With the limits and norms of current technology, this generally entails image displays that span a large portion of the human field of vision with reasonable clarity, high-fidelity three-dimensional sound, and human-computer interaction based on head and hand position, motion, and configuration that updates more than fifty times per second.  More comprehensive haptic interaction that engages movement of the rest of the body and engages senses other than sight, hearing, and touch are generally above today’s minimum criteria for virtual reality. These advanced functions may, however, become standard for virtual reality in the future.
To qualify as virtual reality, objects within the computer-simulated environment should also conform with reasonable accuracy to the physical and biological laws that apply to their real counterparts.  This is necessary for the computer-simulated elements to appear real to the higher-order functions of the human brain, not just lower-level perception.  It is not sufficient for a cube to just look like a cube, it must also behave like a cube with respect to the conservation of matter, gravity, momentum, and other physical laws. This becomes more ambitious with more complex physical or even biological elements within a computer-generated environment.  Simulating an organism is more difficult than simulating a cube.
There are three categories of virtual reality based on the relative mix of computer-generated vs. real-world elements:
Category #1: Pure Virtual Reality – is an immersive virtual environment composed entirely of computer-simulated elements with the exception of the participating humans.  The apparent form of the participating human may even be transformed in nature.  In the purest form of category #1 virtual reality, the participating human interacts only with the computer, not with other humans or real elements.
Category #2: Mixed Virtual Reality, sometimes abbreviated as “Mixed Reality” or called “Augmented Reality,” is either: a real-world environment with substantive superimposed and interactive virtual elements; or a computer-simulated environment with substantive superimposed and interactive real elements other than the participating humans.   Mixed reality environments may include only a few virtual elements, but these elements should be realistic and cognitively significant for the human participant.
As examples of category #2 virtual reality, fighter pilots can view computer-generated maps superimposed on the skyspace or ground.  Also, surgeons can perform surgery with computerized medical images of interior body structures superimposed on the patient’s body.  Other mixed reality applications may be primarily virtual with few real elements.  For example, a computer screen can display (and allow rudimentary control from) the motion of a human hand via an instrumented glove.   Mixed reality environments must have proper alignment of the real and virtual elements and also rapid responses to avoid dysfunctional temporal lags and spatial gaps.  Large-scale mixed reality environments also demand long-range trackers within large spaces or sophisticated multi-directional treadmills to give participants the illusion of long-range movement.
Category #3: Telepresence Virtual Reality – is the application of virtual reality technology to enable humans to be in one real-world location and yet function as if they were in a second, remote real-world location.  Telepresence differs from pure or mixed reality in that the virtual reality may be transparent to the participating human.  Virtual reality becomes a means, not an end.  It serves as a way to “be” in another location without traveling there.   The participating human only consciously interacts with real world environments. Category #3 virtual reality is useful for teleconferencing, telemedicine, virtual vacations, virtual home tours, and exploration of hazardous environments (underwater, space exploration, etc.).
The strictest definition of virtual reality includes only Category #1 applications.  The broadest definition includes Category #2 and Category #3 applications as well.

Quinn Dunki interview

These responses are given by Quinn Dunki, the AI programmer on Full Spectrum Warrior.

G5: When Pandemic started Full Spectrum Warrior (FSW), how important was the AI to the project?

We always considered AI to be very important to the project. The nature of the game is such that the player’s own units are AI agents carrying out requested commands. That divorcing of direct control from the characters requires that their AI be top notch in order for the game experience to be fun and satisfying. AI bugs would be potentially very frustrating, since the player is relying on the AI agents to do the right thing with minimal input.

G5: How much of the AI in FSW is scripted, and how much of it is dynamic?
The enemy units are entirely scripted. We wanted the game play to be tightly controlled so that the Designers could present a specific experience to the player at each moment in the game. The best way to do that is to have carefully scripted enemies with minimal autonomy. The player’s soldiers, on the other hand, exhibit a lot of dynamic decision making and self sufficiency. They have no scripting at all. Because they are placed in dangerous situations, and the player is relying on them to protect themselves, they need to act independently much of the time. However, they have to balance that with carrying out player orders in a timely fashion.

G5: There are quite a large number of scenarios in FSW that the AI would need to account for. Could you explain how some of the dynamic enemy AI works?
The enemy units are completely scripted. We found this was the best way to present a precise tactical series of encounters. They do exhibit some minor dynamic behaviours through the use of hierarchical state machines and branching scripts within their given encounter situation. We experimented with more dynamic and autonomous enemies, but the game play just didn’t feel right.

G5: The friendly AI in FSW is fantastic — you never need to worry about your squad doing something stupid. Could you explain how some of the friendly squad-based AI works?
I could go on for hours about the US Soldier AI, as it was without a doubt the biggest AI challenge on the project. Their decision making is based on a set of layered state machines. They take stimuli from various sources in the environment, and select their states based on a voting architecture. The members of the squad are always aware of each other, and they perform prediction of each other’s motions and states in order to coordinate activities. They also communicate directly at a low level to keep from stepping on each other, and getting in each other’s lines of fire. We use a combination of top-down authoritative control (for things like Bounding Overwatch) and bottom-up agent cooperation (for things like self preservation). The path finding system is based on a marked-up cell map from which path data is generated. There are static and dynamic cell layers to handle static and moving objects as efficiently as possible. The path system is also tightly linked to the cover mechanism, and the game rules. In order to eliminate foot sliding, all paths are computed completely ahead of time, so that animations can be precisely pre-calculated and strung together. This required some complex multi-agent planning and prediction systems for movement, since no dynamic avoidance mechanisms such as repellers could be used. The animation sequence for following a path is completely atomic once it is started. This was a challenging approach, but it really helped make our character motion look as smooth and natural as possible.
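[Editor's note: the voting architecture Dunki describes can be sketched very roughly as follows. Every name, stimulus and weight here is invented for illustration; the real system is far more elaborate.]

```python
# Rough sketch of state selection by voting: each stimulus casts a
# weighted vote for the state it favours; the highest total wins.
from collections import defaultdict

def select_state(stimuli):
    votes = defaultdict(float)
    for state, weight in stimuli:
        votes[state] += weight
    return max(votes, key=votes.get)

stimuli = [
    ("take_cover", 0.9),   # incoming fire favours taking cover
    ("advance", 0.4),      # the standing player order favours movement
    ("take_cover", 0.3),   # a squad-mate's predicted motion also favours cover
]
print(select_state(stimuli))  # take_cover
```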

G5: Any amusing stories you can tell us that occurred during the development of the FSW AI?
The funniest parts of the AI development were certainly the bugs. The AI bugs were rarely dull. They invariably involved soldiers mowing each other down in hails of friendly fire, running each other over, hurling obscenities at the wrong moments, and so forth. Bugs also frequently involved animations being strung together incorrectly, which tended to create spontaneous synchronized dance numbers. The next FSW game ought to have a Broadway Musical mode.


Publisher: Ubisoft
Developer: Cyan Worlds
Genre: Adventure / Puzzle
ESRB: Everyone

Cyan Worlds, the developer of the classic Myst, experimented last year with a significant deviation from this successful series by releasing Uru: Ages Beyond Myst. This title featured a third-person perspective (along with the traditional first-person view) with full 3D movement, a different kind of story, required running and jumping, and the promise – never fulfilled – of massively multiplayer online play. Although this risky set of innovations generated mixed reactions, the company has persisted and has now released Uru: The Path of the Shell, an expansion pack incorporating an earlier free add-on named To D’Ni. The Path of the Shell apparently consists of ages which might have been eventually integrated into the online version. Can this release cement the value of this new approach?

Uru: The Path of the Shell provides you with even more depth concerning the tale of the ancient civilization of the D’Ni – the people who escaped to Earth eons ago, built an underground civilization, and created the ages and the books that transport you among them. Later, after the D’Ni civilization dissolved, Yeesha (the daughter of Atrus and a direct descendant of the D’Ni) strives to set things straight. Long afterwards, the D’ni Restoration Council tried to return the civilization to its former glory. The plot in The Path of the Shell focuses specifically on spiritual rather than physical restoration of the D’ni, with the power of the gods at stake.

As in Uru: Ages Beyond Myst, a hub exists within the Relto Age that allows you to link to the other ages. Although the action in this expansion pack has you spend time in five ages, including the original Myst Island, you end up spending the vast majority of your time in just two new ages: Ahnonay and Er’cana. Ahnonay is a watery world with other forms, such as an outer-space variant, and Er’cana is an arid industrial world replete with complex machines. Both continue and extend the spirit of the original offering. Once again, although the environments presented are expansive, you usually move around on specified paths from which you cannot deviate very much.

Uru: The Path of the Shell contains fewer discrete puzzles than I am used to in the Myst series, as each of the two ages in which you spend most of your time has just a few rather lengthy challenges. Most of the puzzles rely on logic and careful observation of your surroundings, with plenty of matching drawings and symbols, pushing buttons, and pulling levers. Often you find pieces of paper with numbers on them, referring to a passage you need to read from one of the linking books on Relto. In a couple of instances, however, the pace slows to a crawl as you end up just having to sit around doing nothing for over ten minutes for solutions to emerge.


Publisher: Microsoft
Developer: Adrenium
Genre: Action
ESRB Rating: Teen

Often games are dismissed as mild entertainment, a placebo for the mind and an escapist pastime that accomplishes nothing. To those critics, I would most happily point out that constant negative reinforcement may very well save our entire world. Take, for example, the lesson so often learned and most recently reiterated in the Microsoft-published, Adrenium-developed Azurik: Rise of Perathia. The lesson reads as follows: never, under any circumstance whatsoever, even if it seems like a good idea at the time, should you place the elemental balance of your world into a collection of easily stolen artifacts. Obviously, this is an important maxim, and some would argue that the fact that our culture has reached the point where it can support video games technologically is a direct result of our world’s strict no-elemental-artifact policy.

Azurik: Rise of Perathia for the Xbox, however, lets that cat out of the bag, and you’ll get a chance to see the trouble this sort of thing can cause. According to the back story, six elemental guardians were forged by a mysterious race of god-like beings known as the ancients. These guardians happily monitored the world from their own elemental realms and worked together to maintain balance and integrity. As such things do, though, the center could not hold, and as the ancients’ culture descended into war, the elemental guardians eventually rose up to destroy their masters. In a last-ditch effort to save their creation, the ancients unleashed a powerful spell that bound the guardians to elemental disks and forced them back into protecting the realm of Perathia. With this act, the ancients’ power was finally exhausted, and their traditions have since become the object of study by a group of warrior monks called the Lore Guardians.

Azurik, our hero, is the youngest of these guardians and a protégé of the current guild leader. Standing between these two is Balthazar, a Lore Guardian who makes his masters nervous with his desire for power and his overly honed martial skills. After a brief scuffle between Balthazar and Azurik, Balthazar discovers the Death disk, which, it turns out, has been missing for more than five hundred years, thus allowing the Guardian of Death to run unchecked. With a distinct sense of preordained and planned events, the Death Guardian appears to Balthazar and offers him the power he craves in return for bringing all the disks to the Death realm. Obviously, there’s a battle as Balthazar is caught stealing the disks, and the titanic magics shatter the disks and fling them to the outer realms. With this, your quest as Azurik begins: hunting the lost fragments in a desperate effort to restore the balance of your world.

As Azurik, you will have access to several key skills, the most useful of which is the simple ability to move. Running, jumping, and rolling clear of enemy attacks prove to be vital skills, as the elemental realms’ magical natures mean that things aren’t always constructed according to standard procedure. Azurik can also move quite adeptly through water, swimming deep and easily through the waves without worrying too much about air. Of course, even though Balthazar was the unequalled master of weapons, our hero is no slouch either, and he is able to wield a bladed staff with considerable skill. The basic attack for this staff is a simple thrust: quick and efficient, this jab can be chained into further attacks, and the final swing of each four-part combo will usually be enough to stun your opponent. When surrounded, though, it’s often more advisable to proceed straight to the slash moves, which whirl the staff around you, usually hitting anyone unfortunate enough to be in its path.

Research in the Digital Garbage Dump

I’m going to start by stating the obvious: for a researcher – and most others as well – it’s better to read a book than hear about it, it’s better to see a film than read about it, and it’s better to play a video game than merely watch it. Sometimes you don’t have a choice, but if you do, you go to the original. This, of course, doesn’t necessarily make your conclusions any more correct, but I do claim that the basis for them will be truer to the medium and the item you are studying. This is the obvious bit. With regard to my own doctorate project, however, it clashes rather badly with reality. In the following, I’ll describe the problem, show you a solution, and then discuss a major new problem that arises.
I’ll be relating this to my own project, so I’ll take a quick detour to outline it: my background is film science and software engineering, and I will be using these two angles when looking at how video games construct their stories. I’m focusing on what I think is a neglected period: the pre-CD-ROM games. I start with the first video game, “Tennis” from 1958, and look at aesthetic and interactive developments up to about 1985. By doing this, I hope to unlock some of the storytelling secrets of a medium which today is a multi-billion-dollar cultural industry.
Doing historical research on a medium rooted in technology has some inherent problems. By comparison, studying old texts or books might be difficult, but that is because the language and/or textual symbols are unknown. As long as you are in possession of the text, you at least know where to start. With more technology-intensive media, electronic ones in particular, the problems are different. Recordings of early radio broadcasts very rarely exist, as is also the case for the first years of television. For film, the problem isn’t as clear-cut, as a good portion of the early films still exist. Yet film provides a good example of what I’ll eventually come to:
When you are shown an old film, your viewing environment is obviously nowhere near that of its contemporary viewers. But that’s not the only difference. During the 100 years of cinema, there have been many technological “standards” for the recording and screening of films. When new ones were introduced, not many bothered to keep the equipment for the old ones. For the introduction of color or sound, this doesn’t matter much, as silent or black-and-white movies can easily be projected on newer equipment. Changes that did create problems, however, were changes in the projection frame rate and the aspect ratio of the image. The speed change is the reason that the old films we see today seem jerky and comically sped up: we no longer have projectors capable of the original lower frame rate. Similarly, only a few cinemas are still able to project films made in the “old” formats of 1:1.33 and 1:1.75 properly. The result is the same as when screening non-1:1.33 films on TV – a part of the picture is simply cut away. This problem is exacerbated when moving from the opto-mechanical medium of film to the electronic one of videogames.
Videogames are, historically, a by-product of the computer industry. We all know how fast it moves. I don’t want to get into a lengthy technical discussion, but I feel that it will be helpful to outline some basic concepts. Those of you who are familiar with computer architecture can just doze off for a few minutes…
Basically, a computer system consists of two building blocks: a collection of electrical and mechanical components, “The Computer”, and a set of operating instructions, “The Program”. One can very well exist without the other, but they don’t actually do very much on their own. These are the two components involved in any operation on any computer, whether it’s a word-processing task, like Word, or a storage facility for data, like dBase – or a videogame. The problem is that not just any program can control any computer; they have to match, as programs have to be specifically written for a given computer system. For example, you cannot run programs written for an IBM/Windows machine on a Macintosh, or vice versa. Likewise, programs have to be written with certain computer configurations taken into account, even within a single computer family. For a word-processor user this is a monetary nuisance, as an update of the program might force a computer update as well, but for one doing research on old programs – video games – it’s a disaster. To put it plainly: old tech isn’t just old tech, it’s obsolete tech. And obsolete machines are no longer interesting; they get thrown away. The result is that your task doubles. Not only do you have to track down a copy of the program in question, you must also find an operational system to run it on. A film researcher who discovers an old reel will, unless it’s damaged, of course, have few, if any, problems in viewing it. A program for a non-existent computer system is virtually useless.
Muddying the waters even more is the issue of program storage. For example, not only has the physical size of the most common data carrier, the floppy disk, changed several times, but the way data – programs – is read from or written to it is also system specific. Try inserting a Macintosh floppy into a Windows system for a quick demonstration. The problem of course multiplies over time: finding a reader for punch cards, the floppy’s older sister, is actually quite difficult.
These are the problems that face one who wants to look at old computer software. I admit that I have painted a rather bleak picture; if you look long and hard enough, you will always find someone who has kept the old PDP or Altair machine in the garage, but those are the exceptions that have to be thoroughly searched for.
Ironically, the fast-moving technology that creates the problem also provides a solution. The above description of a computer system was slightly simplified, and I have to be a little more specific to make my point: when I say that the system consists of physical components and a program, hard- and software, I have to add: several layers of software. In other words, the computer runs a program to run a program. This layering of software goes all the way to the center of the hardware, and even inside it, as each lowest-level instruction is carried out by executing even lower-level “micro-code”. Given this, any computer should, in principle, be able to run any program; all you need to do is reprogram one of the levels. There are several reasons why this doesn’t quite work; the most important one is that the software line has to be drawn somewhere, so to speak – a computer can’t be all soft, it has to be a little hard too. Some of the software levels have to be hardwired and unchangeable, otherwise the machine simply won’t be buildable. In addition, hard-wired software executes faster than non-wired, especially when it’s put in dedicated components hotrodded for speed.
However, given the leaps in computer technology we see almost on a monthly basis, there is a way around this. Since today’s computer is much faster than the one from last week, it is possible to replicate and then reprogram the old one’s hard-wired software levels in the present one’s soft domain. This is called emulating the older system.
What happens is, in effect, that the new system pretends to be the old one, so that when the old program runs, it sees what it expects to see. For example, when the program tries to access a hard-wired graphics processor, the new host computer intercepts this and gives the program the appropriate signals. In addition, it also performs the tasks the original unit was supposed to, so that the final result for the end-user is identical to the original system.
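The interception idea can be sketched in a few lines of code. The following is a deliberately toy illustration – an invented two-instruction machine, not any real arcade hardware, and all the names and opcodes are made up for the example – showing how a host program can run guest machine code step by step and service writes to a memory-mapped “hardware” address in software, the way an emulator stands in for a graphics chip:

```python
SCREEN_PORT = 0xF0  # hypothetical memory-mapped output register

class TinyEmulator:
    """A toy host that runs guest code for an invented machine."""

    def __init__(self, program):
        self.memory = bytearray(256)
        self.memory[:len(program)] = program
        self.acc = 0          # single accumulator register
        self.pc = 0           # program counter
        self.screen = []      # software stand-in for the display chip

    def store(self, addr, value):
        if addr == SCREEN_PORT:
            # Intercept: the guest thinks it wrote to a video chip;
            # the host performs the chip's job itself.
            self.screen.append(value)
        else:
            self.memory[addr] = value

    def run(self):
        # Opcodes: 0x01 n -> load immediate n into acc
        #          0x02 a -> store acc at address a
        #          0xFF   -> halt
        while True:
            op = self.memory[self.pc]
            if op == 0x01:
                self.acc = self.memory[self.pc + 1]
                self.pc += 2
            elif op == 0x02:
                self.store(self.memory[self.pc + 1], self.acc)
                self.pc += 2
            elif op == 0xFF:
                break
            else:
                raise ValueError(f"unknown opcode {op:#x}")

# Guest program: write 7, then 9, to the "display", then halt.
prog = bytes([0x01, 7, 0x02, SCREEN_PORT, 0x01, 9, 0x02, SCREEN_PORT, 0xFF])
emu = TinyEmulator(prog)
emu.run()
print(emu.screen)  # prints [7, 9]
```

The point of the sketch is that the guest program is executed unmodified: it addresses the hardware it expects, and the host quietly does the hardware's job. Real emulators do the same thing at vastly greater scale, modeling CPUs, video, sound, and timing.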
This is the ideal emulation situation; in real life it’s somewhere between this and something not resembling the original at all, because of the problems inherent in software trying to mimic hardware. But even with such limitations, there are several advantages to emulators, especially for research purposes.
The alternative to an emulator (and the original system) is to rewrite the code for the new system. This is not only extremely time-consuming (and in practice impossible for anyone but those in possession of the original source code), but also a problem from a scientific point of view. Some game companies have done this with their old games and released them for, for example, Windows and the PlayStation. These new games will always be new games, not old ones. A horrid example of rewritten games is a company that released some classic arcade games for Windows and in the process managed to debug the original code – that is, to remove some of the programming errors of the original game, errors that enabled experienced players to earn extra lives by doing certain things at certain times. Clearly, the game is no longer the same. Please note that I’m not saying that the old one is better than the new; I’m merely pointing out that they are different. For historical research purposes, the old one is the interesting one.
Emulated programs do not have these problems, simply because they are not a rewritten version of the original – they are the original. What is new is an added layer of software-emulated hardware. While playing emulated Space Invaders, if you hit the UFO with shot #27, you do get 5000 points, just like in the arcades in 1978.
Emulators are, for me, great tools. The one I have been using the most is also the best known: the Multiple Arcade Machine Emulator, MAME. At present MAME emulates approximately 1400 different arcade video games. Arcade games are in some respects different from games run on computers and dedicated games consoles. Arcade games are not made to execute on generic hardware, like a computer, but on dedicated chips, and often with custom-made cabinets and controls. What MAME does is emulate the hardware each game needs, using the computer keyboard (or mouse, joystick, gamepad, or whatever you might have connected) as a controller. The original game code is hardwired in chips, which are put in the cabinets. MAME takes as its code input “images” of these chips, so-called ROM images. This is where the new problems start.
Arcade machines are not meant for home use, but many enthusiasts buy games when they no longer appeal to ordinary gamers. These enthusiasts have, via the Internet, formed a loose-knit network where they help each other with the machines; if one of the ROM chips in, say, a Pac-Man machine goes bad, someone might supply the unlucky gamer with an image of the chip, so that he or she can burn a new chip, thus making the game playable again. This is perfectly legal. When you have purchased a game – in practice these often consist of a motherboard and a corresponding set of chips – you are entitled to use it, which includes making your own chips to keep it working. With MAME you no longer need access to the hardware – in effect, if you get hold of a game’s ROM images, you can play the game for free. The original copyright holders do not look upon this favorably.
A year ago, the Internet was brimming with arcade ROMs. Today, you really have to know how and where to look to find any. What happened was that the big games corporations decided to put their legal (and financial) weight behind their demands to rid the net of the ROMs. One of the most popular (and best) websites for both emulators and ROMs, Dave’s Classics, was forced overnight to shut down completely, with a threat of monstrous fines looming over it. A while later, the site reappeared, but with the ROM sections removed. The same has happened all over the web; as I said, trying to find ROMs today isn’t easy.
What’s my point? I have two: the first is that I have ended up with a moral dilemma. As a researcher of historical videogames, in addition to analyzing them as cultural objects, my main concern is to get hold of and also play as many of them as possible. In this, MAME is an invaluable tool. This is a strawberries-with-chocolate situation; MAME makes playing the games possible, and the existence of the emulator encourages even more ROMs to be put on the net. For me, the situation is ideal. On the other hand, what I’m doing is in principle illegal, and can be considered theft.
My second point is the serious one. The reason the game companies have any legal leverage is that they are big and rich. The balance between overlooking and prosecuting copyright violations is fragile, and when prosecution comes, it is severe. The clampdown on the ROM sites came when it did simply because they got too popular. A company like Namco might tolerate 10,000 people playing MAME Pac-Man, but when they see 100,000 doing it, they start thinking: “Hey, we could be making money on this!” That will be the death of emulation, “retro-gaming” and game research as we know it. Which company will be able to resist the temptation to re-engineer the games and remove things like unwanted racism or sexism? This will also in effect stop all the bootleg versions of the games, like the French bootleg of Pac-Man where the dots are replaced with hearts, not to mention all the official territorial versions, where the games are tailored to suit different audiences. It is illegal to sell and play these games outside of their designated areas.
I don’t want to sound like some sort of doomsday prophet, but the end result will be that a portion of our recent cultural history will be lost. It does seem that some cultural artifacts, at least those that are digitally copyable and spreadable, are in the hands of the enthusiasts. I can with a great degree of certainty say that I will continue as a criminal in the future.