Homepage of Andreas Korat

My Ideas, My Work, My Life

Massively Multiplayer Augmented Reality Games

A thesis that aims at researching how Augmented Reality can be implemented on mobile devices

Table of Contents

List of tables

List of figures

List of abbreviations

Kurzfassung

Abstract

1        Introduction

1.1       Augmented Reality

1.1.1       Location Based Augmented Reality

1.1.2       Vision Based Augmented Reality

1.2       Augmented Reality Games

1.2.1       The Eye of Judgment

1.2.2       EyePet

1.2.3       Invizimals

1.3       Massively Multiplayer Online Games

1.4       Massively Multiplayer Mobile Games

1.5       Professional Game Development Engines for Android

1.6       Relevance of Android for Augmented Reality Mobile Development

1.7       Discussion

2        Graphics Library Evaluation

2.1       Evaluation Criteria

2.2       Library Evaluation

2.2.1       JPCT-AE

2.2.2       JMonkeyEngine

2.2.3       Ardor3D

2.2.4       libgdx

2.2.5       Evaluation results

2.3       Discussion

3        Game Prototype

3.1       Application Requirements

3.2       Game Basics

3.3       The Game Concept

3.3.1       General Information

3.3.2       Game States (Login, Spawn, Main, Inventory, Binoculars, Group Management and Messaging)

3.4       Game Administration and Area Handling

4        Implementation

4.1       Device Alignment in the Virtual Scene and the Camera Display

4.2       Data Exchange between Mobile Devices and Servers

4.3       WebSockets

4.4       Consistency Management in Networked Real-Time Applications

4.5       Discussion

5        Conclusion

Literature


List of tables

Table 1 - Evaluation criteria

Table 2 - JPCT-AE evaluation

Table 3 - JME3 evaluation

Table 4 - Ardor3D evaluation

Table 5 - libgdx evaluation

Table 6 - Ranked final evaluation results


List of figures

Figure 1 - Layar Browser UI [3]

Figure 2 - Junaio Browser UI [4]

Figure 3 - SmartAR digital water on a table [5]

Figure 4 - Eye of Judgment board [10]

Figure 5 - PSP with attached camera running Invizimals [11]

Figure 6 - MMORPG Subscriptions [14]

Figure 7 - O&C Online [68]

Figure 8 - Mobile OS Market Share in 2015 according to Gartner

Figure 9 - Android Tablet market share 2011 / 2015 [32]

Figure 10 - Mobile OS market share in enterprises in 2010 [34]

Figure 11 - Tiobe Programming Community Index [35]

Figure 12 - Object highlighting in the prototype application

Figure 13 - Models used in the prototype application

Figure 14 - Areas of Interest and Creatures in Admin UI

Figure 15 - CSAs containing several AOIs

Figure 16 - Demonstration of view restrictions by using AOIs

Figure 17 - Quest Administration Interface

Figure 18 - Diagram of all game states

Figure 19 - Login Game Screen

Figure 20 - Spawn Game Screen

Figure 21 - Main Game Screen

Figure 22 - Inventory Game Screen

Figure 23 - Binoculars Game Screen

Figure 24 - Group Management Game Screen

Figure 25 - Selected CSA with two adjacent CSAs around Graz

Figure 26 - Coordinate System of the Device Orientation matrix [52]

Figure 27 - Coordinate System of JPCT-AE [53]

Figure 28 - Mast triangulation between three radio towers [58]

Figure 29 - TSS Terminology


List of abbreviations


AI – Artificial Intelligence

AR – Augmented Reality

CAS – Central Account Server

CSA – Central Server Area

DGPS – Differential Global Positioning System

FPS – Frames per Second

GPS – Global Positioning System

JPCT – Java Perspective Correct Texturemapping

LBAR – Location Based Augmented Reality

MMMG – Massively Multiplayer Mobile Game

MMOG – Massively Multiplayer Online Game

MMORPG – Massively Multiplayer Online Role Playing Game

OS – Operating System

POI – Point of Interest

PSP – PlayStation Portable

PvM – Player versus Monster

PvP – Player versus Player

RTA – Real-Time Application

RTK – Real-time kinematics

SGS – Samsung Galaxy S i9000

SSL – Secure Sockets Layer

TSS – Trailing State Synchronization

TWS – Time Warp Synchronization

VBAR – Vision Based Augmented Reality

WotC – Wizards of the Coast

WoW – World of Warcraft


Augmented Reality is one of the most disruptive technologies of our time. It opens up new ways of viewing and interacting with information. At present, however, the technology is not yet used to its full potential.

To change this and make the technology available to a large audience, Augmented Reality can serve as a means of controlling and visualizing video games. Video games have always been an important factor in improving hardware and software components as well as in improving existing ways of interacting with digital content.

At present, however, no professional games exist that are based on Augmented Reality and integrate elaborate game mechanics such as massively multiplayer modes. This thesis was therefore written with the goal of proving that, given the current state of technology, it is already possible to develop real-time massively multiplayer games based on Augmented Reality and to keep them scalable for a large number of concurrent users. To this end, a prototype of a cooperative real-time massively multiplayer game for mobile devices is developed as well.

The thesis is divided into five chapters: First, it is explained why the development of Augmented Reality based applications can be worthwhile from a business perspective. Next, technical aspects and an evaluation of graphics libraries are described in more detail. The third chapter describes the developed prototype in detail and highlights the respective views of the user and the application administrator. The fourth chapter lays out further important aspects of the implementation, and the fifth chapter concludes by discussing the results.


Augmented Reality is one of the top ten disruptive technologies of the present time. The technology promises to deliver new ways of consuming and interacting with any kind of information. While the technology does not yet seem to be used to its full potential, video games are considered a way to deliver augmented reality to a large audience. Video games have always been a major driving factor in improving current hardware and software solutions alike by delivering better graphics and new ways of user interaction. What is more, video games are a great business factor worldwide and can connect a large number of people no matter where they are from. This connection and interaction between people is becoming ever more common with the current trend towards web applications like Facebook or Google+.

As there are no examples of Augmented Reality based games that deliver features common in video games, like large-scale multiplayer modes, this thesis is written with the intention of proving that real-time massively multiplayer Augmented Reality games can be created at the present time. Therefore, a prototype of a cooperative real-time massively multiplayer game for mobile devices is developed in addition to writing this thesis.

The thesis is structured into five chapters: Firstly, it is highlighted why the development of Augmented Reality based applications on mobile devices is financially feasible. Secondly, technical aspects with a focus on graphics display on mobile devices are laid out. Thirdly, the prototype application is explained in detail. Here, the idea behind creating the prototype is pointed out, and both the view of the user and the administrative implementation are explained. Fourthly, important aspects of the actual implementation are structured and all encountered problems are clarified. Lastly, the thesis provides a short conclusion about the results.

1      Introduction

This thesis is written with the intention of proving that real-time massively multiplayer Location Based Augmented Reality (LBAR) games can be developed for currently widely disseminated mobile devices. To prove this idea, a focus is laid both on displaying and interacting with location based information in 2D or 3D and on synchronizing states and local information with other application participants. In addition, the thesis deals with the business point of view of building large-scale AR applications.

A prototype of a cooperative real-time massively multiplayer game is developed in addition to writing this thesis to prove that complex AR applications can be developed and used at the current time. A real-time massively multiplayer game is chosen because games of this genre usually place high demands both on user interaction and on graphical representation capabilities. What is more, massively multiplayer online games have great business potential, which most likely forms the basis of successfully bringing new technologies to the market.

As developing such an application is rather complex and time-consuming, libraries that facilitate the development are chosen for as many tasks as possible. Therefore, only libraries that are available for free are used. This way, it is assured that the outcome can be reproduced without making large investments.

The thesis is structured in five main chapters: Firstly, it is explained why creating applications like the prototype might be financially feasible for companies. The business relevance of AR and Android-centered mobile application development is highlighted with examples and figures, with a focus on mobile game development. Secondly, the thesis compares freely available graphics engines on Android and highlights requirements and technical problems when developing AR applications on mobile devices. Thirdly, the prototype application is explained in detail. Here, the idea behind creating the prototype is pointed out, and both the view of the user and the administrative implementation are explained. Fourthly, important aspects of the actual implementation are structured and all encountered problems are clarified. This chapter is mainly focused on engineering and programming tasks; whenever possible, solutions for encountered problems are given. Lastly, the thesis provides a short conclusion about the results.

Furthermore, the thesis’ starting point is that Augmented Reality (AR) presents a promising technology for displaying information in a natural and intuitive way to the end user on mobile devices. In this respect, AR stands for a new kind of information visualization that enables the user to interact with information in a simple way. Moreover, the principle of connecting data to specific geographical regions can be easily integrated into AR applications as they aim at augmenting the current world around the device.

1.1    Augmented Reality

AR enables a user to see reality supplemented with digital information. According to Azuma's definition of AR, this digital information should be interactive and registered in 3D [1]. Although AR technically could also include other senses such as smell, touch or hearing, this thesis focuses on AR restricted to vision and localization [2].

The majority of current augmented reality applications display an image of the real world which is usually retrieved from a camera. This camera image is blended with a 3D scene similar to the scenes of computer games or of 3D simulation and visualization software like 3DS Max[1] or Blender[2], or at least with a 2D scene that is mapped to the 3D environment. The user commonly interacts directly with objects in these virtual scenes or through an on-screen user interface. Which objects are displayed on the screen depends on the orientation and position of the camera in the real world as well as on the implementation of the AR application. Examples for this are layer-based AR browsers or image-tracking applications like 3illiards[3]. The following subchapters outline two of the most frequently used implementations of commercially available AR applications.

1.1.1    Location Based Augmented Reality

In LBAR applications the positions of displayed objects are determined by the current geographical location of the user and the orientation of the device that runs the application. These applications generally link geographical coordinates to the digital objects and determine their position in the virtual 3D scene by comparing them to the Global Positioning System (GPS) coordinates of the user’s device.
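To make this mapping from geographical coordinates to scene coordinates concrete, the following sketch (not taken from the thesis; the class and method names are illustrative) converts the GPS offset between the device and a geo-referenced object into local east/north distances in metres, which can then serve as coordinates in the virtual scene. For the short distances typical of LBAR applications, a simple equirectangular approximation of the Earth's surface is usually sufficient.

```java
// Sketch: placing a geo-referenced object relative to the device.
// For a few hundred metres, an equirectangular approximation is adequate.
public class GeoToScene {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Returns {east, north} offset in metres from the device to the object. */
    static double[] offsetMetres(double devLat, double devLon,
                                 double objLat, double objLon) {
        double dLat = Math.toRadians(objLat - devLat);
        double dLon = Math.toRadians(objLon - devLon);
        double north = dLat * EARTH_RADIUS_M;
        // Longitude lines converge towards the poles, hence the cosine term.
        double east  = dLon * EARTH_RADIUS_M * Math.cos(Math.toRadians(devLat));
        return new double[] { east, north };
    }

    public static void main(String[] args) {
        // Device in Graz, object roughly one kilometre to the north.
        double[] off = offsetMetres(47.070, 15.439, 47.079, 15.439);
        System.out.printf("east=%.1f m, north=%.1f m%n", off[0], off[1]);
    }
}
```

For larger distances or higher accuracy, a proper geodesic formula (e.g. haversine) would be used instead; Android's own Location class offers an equivalent distance computation.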

Today, LBAR applications are mostly developed for modern smartphones and tablets as these devices usually include

·        A camera for displaying the real world,

·        GPS for location retrieval,

·        Magnetic field, acceleration and gyroscope sensors for calculating the device orientation, as well as

·        High-performance central processing units and graphics processing units for 2D / 3D graphics visualization.
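The magnetic field and acceleration readings listed above are typically combined into a rotation matrix that aligns the virtual camera with the device. The following plain-Java sketch is illustrative (Android's SensorManager.getRotationMatrix() and getOrientation() perform an equivalent computation on the device): it derives the matrix from the gravity and geomagnetic vectors and extracts the compass azimuth from it.

```java
// Sketch: deriving a rotation matrix from the accelerometer (gravity) and
// magnetic field sensors. Rows of the matrix are the device-frame
// east (H), north (M) and up (A) axes of the world coordinate system.
public class DeviceOrientation {
    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    static double[] normalize(double[] v) {
        double n = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new double[] { v[0] / n, v[1] / n, v[2] / n };
    }

    /** gravity and geomagnetic are raw sensor vectors in device coordinates. */
    static double[][] rotationMatrix(double[] gravity, double[] geomagnetic) {
        double[] a = normalize(gravity);                     // points up
        double[] h = normalize(cross(geomagnetic, gravity)); // points east
        double[] m = cross(a, h);                            // points north
        return new double[][] { h, m, a };
    }

    /** Azimuth in radians: 0 means the device's top edge faces magnetic north. */
    static double azimuth(double[][] r) {
        return Math.atan2(r[0][1], r[1][1]);
    }

    public static void main(String[] args) {
        // Device lying flat, top edge pointing towards magnetic north.
        double[][] r = rotationMatrix(new double[] { 0, 0, 9.81 },
                                      new double[] { 0, 22, -44 });
        System.out.println("azimuth = " + azimuth(r)); // 0.0
    }
}
```

The resulting matrix can be handed directly to the scene graph to orient the virtual camera, which is the central task discussed later in chapter 4.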

In recent years several LBAR development platforms were established. Particularly noteworthy are layer-based platforms like Layar[4] and Junaio[5], which are both available for Android based devices and the iPhone. The main purpose of these applications is displaying any kind of location based information on top of the camera image. These layer-based AR browsers let developers create their own layers (called channels in Junaio) of content that can be browsed by the user – similar to websites – using the platform-specific browser applications. This way developers and content creators can publish related information – like restaurant guides or sightseeing tips – in their individual layers. Figure 1 presents two ways of using the Layar browser to find interesting locations and Figure 2 shows the UI of the Junaio browser. Besides these closed but freely usable platforms, open-source AR browsers like mixare[6] are currently available to display location based data depending on the phone's location and orientation.


Figure 1 – Layar Browser UI [3]

Figure 2 - Junaio Browser UI [4]

The actual implementation and design of an LBAR application highly depends on the device it runs on. In the past most of these devices were self-made or mere prototypes that did not make it into the mass consumer market. Nowadays, smartphones represent the most straightforward devices for implementing and running these applications. As GPS signals usually cannot be received in buildings or underground, applications that have to work in such locations need to implement other mechanisms to determine the device's location; otherwise, LBAR applications could only be used outdoors. Outdoors, however, lighting and contrast of the camera image can only be controlled to a low degree. This constitutes a severe handicap to digital information visualization, as blending a scene on top of the camera image possibly further reduces brightness and contrast of both the scene and the image [2]. Due to these continuous changes in lighting conditions, AR might not be ideally suited for outdoor applications. To partially prevent these problems, the AR browser Junaio allows the developer to mix LBAR, which is called Junaio GLUE[7], and Vision Based Augmented Reality (VBAR) within its channels.

1.1.2    Vision Based Augmented Reality

In contrast to LBAR, VBAR calculates the 3D scene's view by scanning the camera image for predefined images. These images are commonly called patterns. Once one of these patterns is recognized in the camera image, the application calculates the orientation and location of the camera relative to the pattern. This way, the position of the predefined image becomes the origin of the scene's 3D coordinate system.
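In practice, a tracking toolkit reports this camera-relative pose as a transformation matrix. The following sketch (illustrative only; the pose values are hypothetical, not output of any particular toolkit) shows the step that follows pattern recognition: transforming a point given in marker coordinates into camera coordinates, which is all that is needed to render a virtual object "on" the marker.

```java
// Sketch: applying a recognized marker's pose to place virtual geometry.
// A tracking library typically yields a 3x4 or 4x4 marker-to-camera matrix.
public class MarkerTransform {
    /** Applies a 4x4 row-major pose matrix to a 3D point (implicit w = 1). */
    static double[] transform(double[][] pose, double[] p) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            out[i] = pose[i][0] * p[0] + pose[i][1] * p[1]
                   + pose[i][2] * p[2] + pose[i][3];
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical pose: marker 0.5 m in front of the camera, no rotation.
        double[][] pose = {
            { 1, 0, 0, 0   },
            { 0, 1, 0, 0   },
            { 0, 0, 1, 0.5 },
            { 0, 0, 0, 1   }
        };
        // A corner of a 4 cm virtual cube sitting on the marker.
        double[] corner = transform(pose, new double[] { 0.04, 0.04, 0 });
        System.out.printf("%.2f %.2f %.2f%n", corner[0], corner[1], corner[2]);
    }
}
```

The real work of a VBAR library lies in estimating this pose robustly from the camera image; once it is available, rendering reduces to ordinary scene-graph transforms like the one above.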

Unlike LBAR systems, using VBAR applications merely requires a camera and any kind of computer that provides enough processing power for pattern recognition and, possibly, a graphics chip for 2D and 3D rendering.

Today several libraries for VBAR application development exist, ARToolkit[8] and NyARToolkit[9] among others. These libraries are available for the programming languages Java[10] (desktop, Applet, Android), C#[11] (WinForms, Silverlight), ActionScript 3.0[12] (Flash), Objective-C[13] (iPhone) and for both C and C++ [5].

In October 2010, Qualcomm released an SDK[14] for creating high performance VBAR applications for the Android platform that allows the use of complex images for pattern recognition. A Unity3D plug-in for creating VBAR Android applications using this SDK is provided as well [6].

Current image recognition software allows for high-performance and high-precision pattern tracking. Hence, advantages of VBAR systems over LBAR systems are higher accuracy when tracking the position of objects and less influence from environmental factors. Furthermore, VBAR can be used inside buildings, while LBAR systems that only use GPS for position tracking cannot.

Advanced VBAR techniques like Sony's SmartAR[15] even include the environment in the position and interaction calculation after a marker has been tracked, as can be seen in Figure 3. This way the location in a room relative to a marker can be calculated even if the marker is no longer tracked, and real objects can be used to interact with the 3D scene. Examples for this are virtual balls that roll on a table and start plunging once they reach the edge of the table, or digital water that flows down a wall and recoils off real objects. Usually VBAR applications do not force the user to walk far, as the space in which the application is used is limited to a short range around the patterns. Thus VBAR applications will most likely be favored over LBAR applications when implementing short-term games or any kind of stationary application.


Figure 3 - SmartAR digital water on a table[7]

1.2    Augmented Reality Games

Although the idea of AR as a technology has existed for several years [8], implementations in commercial video games barely appeared until recently. Some games showed up with portable gaming consoles like the PlayStation Portable[16] (PSP) and the Nintendo 3DS. The latter even ships with six AR markers that can be used with preinstalled games like “Target-Shooting” and “Mii-Images” [9]. Apart from the Nintendo 3DS, however, the majority of large commercial AR games have so far been developed for Sony devices.

What is special about computer games is that they tend to push the limits of hardware and software and introduce new ways to fascinate consumers. Thus games are a major driver in making AR – and any other innovative technology – more popular and well known to the public, and hence some of these aforementioned games are introduced next.

1.2.1    The Eye of Judgment

In 2007, “The Eye of Judgment”[17] was one of the first commercially available games to incorporate AR into its game design. Sony Computer Entertainment[18] developed this trading card game together with Wizards of the Coast[19] (WotC) for the PS3. The game is played on a square board that is divided into a 3x3 grid. Players need to capture these areas by using spells and creatures that are summoned using playing cards, in a similar manner to other card games developed by WotC. The implementation of VBAR makes this game unique, though: by using the PlayStation Eye camera to track the playing cards, all battles during a game session are visualized in 3D. Figure 4 demonstrates the visualization of the game board on the PS3.

Figure 4 – Eye of Judgment board [10]

1.2.2    EyePet

EyePet[20] is another AR game. In this game, which is available for the PS3, the player is responsible for a little pet that is projected over a video stream of the real world using the PlayStation Eye camera and VBAR. Similar to a Tamagotchi[21], the player has to feed the pet and play with it. Most interactions with the pet happen through pattern and image recognition as well as motion control. This way the user can play games, change the pet's appearance or check its health status.

1.2.3    Invizimals

Invizimals presents another VBAR game available for the PSP. The game is based on the Pokemon[22] game principle. Using a PSP, players have to find creatures called Invizimals in the real world. Once users are near an Invizimal, they can use a graphical pattern to track it on the PSP display and capture it. These creatures can either be collected and raised or traded with other players. Depending on environmental factors like the color of surfaces visible in the camera image or the time of day, the application determines whether an Invizimal is nearby. Figure 5 shows a PSP that blends two Invizimals over the camera display via VBAR.

Figure 5 - PSP with attached camera running Invizimals [11]

Finally, a player can fight with his own Invizimals against other Invizimals that are controlled by artificial intelligence (AI) or played by other players. The pattern is always used to display the Invizimal, be it for trading or in a battle. During a fight, the player can interact with the 3D scene via key input, shaking the device and blowing [12].

1.3    Massively Multiplayer Online Games

Massively Multiplayer Online Games (MMOGs) are games in which up to hundreds of thousands of players can interact with each other within the same virtual world. The first MMOGs date back to the late 1970s. The genre started becoming popular with games like Multi-User Dungeon – a text-based role-playing adventure game – and has grown popular all over the world ever since [13].

Massively Multiplayer Online Role Playing Games (MMORPGs) – probably the most popular type of MMOG – have risen in popularity since 1998, peaking at around 22 million subscribers in 2011 according to mmodata.net. Figure 6 outlines the increase in MMORPG subscriptions from 1997 to 2011.

Figure 6 - MMORPG Subscriptions [14]

Although text-based MMORPGs from the 1980s used to be free, current popular MMOGs generate large returns for the companies that develop and maintain them. While computer games are commonly charged once per copy of the game, MMORPGs are usually charged on a monthly basis. Thus, building strong and loyal relationships with their customers is even more important for companies that sell MMORPGs than for companies selling games of other genres.

Due to steady growth in the number of subscribers and continual income through monthly fees and microtransactions, successful MMOGs present a source of high economic value [15]. World of Warcraft (WoW) – the most popular MMORPG today – generated a net revenue of 395 million US dollars between March 2010 and March 2011 [16].

What should still be considered, however, is that MMOGs oftentimes require central servers which are provided by the respective software development companies. While MMOGs generate a large return on investment, games of this genre require large investments in the first place as well as continuous investments in maintaining both the game software and the central administrative hardware. To be more precise, depending on the type of MMOG, synchronizing game states to maintain a consistent world between game participants might present a non-trivial problem. For example, maintaining a persistent state in a turn-based game like Atlantica Online[23] is usually easier to implement than a state-maintaining system in real-time games like Lord of the Rings Online[24]. In the latter game the order of actions depends on how fast a player reacts. Thus, MMOGs that allow the participants to execute actions concurrently need to implement some sort of state-maintaining system such as “Adaptable Dead-Reckoning” [17].
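The basic (non-adaptive) dead-reckoning idea behind such synchronization schemes can be sketched as follows. The class and its fields are hypothetical illustrations, not taken from [17]: between authoritative server updates, each client extrapolates a remote player's position from the last known position and velocity, and snaps to the true state when the next update arrives.

```java
// Sketch: basic dead reckoning for one remote player.
// Between server packets the client extrapolates; on arrival it corrects.
public class DeadReckoning {
    double x, y;    // last authoritative position
    double vx, vy;  // last authoritative velocity (units per second)
    double since;   // seconds elapsed since that update

    /** Position estimate used for rendering between updates. */
    double[] extrapolate() {
        return new double[] { x + vx * since, y + vy * since };
    }

    /** Called when an authoritative server update arrives. */
    void correct(double nx, double ny, double nvx, double nvy) {
        x = nx; y = ny; vx = nvx; vy = nvy; since = 0;
    }

    /** Advances local time by one frame. */
    void tick(double dt) { since += dt; }

    public static void main(String[] args) {
        DeadReckoning remote = new DeadReckoning();
        remote.correct(0, 0, 2, 0); // moving east at 2 units/s
        remote.tick(0.5);           // 500 ms without a server packet
        double[] p = remote.extrapolate();
        System.out.println(p[0] + " " + p[1]); // 1.0 0.0
    }
}
```

Production implementations additionally blend out the correction over several frames to avoid visible snapping, and adaptive variants tune the update rate to the prediction error.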

1.4    Massively Multiplayer Mobile Games

According to a survey of mobile phone users in the US and UK 27% of mobile phone owners have a Smartphone and 21% have a web-enabled phone [18]. The number of games played on mobile phones increased dramatically since 2009 and Smartphones further reinforce this trend. Yet the majority of gamers played less than one hour per week in 2009 and these numbers did not change considerably. However, in contrast to the time games are played on mobile phones, net revenue generated by mobile phone games increased dramatically [19].

The adoption of Massively Multiplayer Mobile Games (MMMGs) might lengthen the time people play games on mobile phones, though. MMMGs are MMOGs that run on mobile devices. Due to recent mobile hardware advancements, these emerging mobile multiplayer platforms are likely to be of high market relevance in the near future.

There are already several MMMGs available on the Android Market at the time of writing. For example, Parallel Kingdom[25] – a location based massively multiplayer game in which players can fight monsters and claim territory in the real world – already attracts more than 500,000 people worldwide [20].

Order & Chaos Online[26] – a WoW-like game running on the iPhone 3GS or later, on the iPad and on several Android based smartphones – generated revenue of more than one million US dollars within 20 days after its official release [21]. Figure 7 shows a screenshot of Order & Chaos Online.

Besides MMMGs that are developed to be played on mobile devices, like Emross War[27] or Pirates of the Caribbean[28], ports of desktop MMOGs like Tibia[29] are currently available as well. Porting older games that do not need large screens to mobile devices might be financially viable, as the smartphone market could open up new possibilities for application development and distribution.

1.5    Professional Game Development Engines for Android

Great effort has been undertaken to push game development on mobile devices recently. At the present time, several commercial game development tools and game engines exist that run on Android devices.

First of all, there exists an Android 3.x optimized version of the UnrealEngine[30], a widely used gaming engine that has been used to create a variety of commercial games, with games of the Unreal Tournament[31] and the Mass Effect[32] series being some of the most well-known titles. Primarily, the engine was created for developing games for personal computers and video consoles. Now, the engine runs on Google Android, Microsoft Windows, Microsoft Xbox 360, Mac OS, Apple iOS, Sony PlayStation 3, Sony PlayStation Vita, and under Adobe Flash. While the engine was designed for creating games in the first place, it has already been used for creating film and television content as well as simulation and training software. Further, some highlights of the supported features are advanced character animation, pre-configured artificial intelligence systems, game scripting, networking, in-game cinematics and a multi-threaded rendering system [22].

Unigine[33] is an advanced real-time 3D engine for games, simulation, visualization, serious games and virtual reality systems that supports NVIDIA Tegra 2[34] based Android devices. This dual-core chip includes an ultra-low-power NVIDIA GeForce GPU and is often built into tablet computers. There are only a few smartphones, like the Samsung Galaxy R[35], that include this processor, though. According to their website, the engine can be licensed on a per-case basis, with an average deal being about 30,000 USD per project [23], and it enables developers to create applications that run on Microsoft Windows, Linux, Mac OS, Sony PlayStation 3, Apple iOS, and Google Android. The engine eases game development by providing a graphics engine that delivers photorealistic graphics, implementing live physics, handling dynamic data loading for large scenes, supporting scripting, and shipping with high-quality in-game graphical user interfaces [24].

Finally, Unity3D[36] is a game development tool that eases game development by providing libraries for high performance graphics rendering, lighting effects, terrain generation, support for Allegorithmic Substances[37], physics, audio, and networking. The tool generates applications that run on Unity's own web player, Adobe Flash, Microsoft Windows, Microsoft Xbox 360, Mac OS, Apple iOS, Google Android, Nintendo Wii, and Sony PS3. While the Unity3D base development kit is available for free, the Android plug-in has to be purchased for 280€ in the base package and for 1050€ in the pro package, respectively [25]. Furthermore, a plug-in for Unity3D is available that implements VBAR behavior directly in Android applications, letting the developer use high resolution images as AR markers. What is even more interesting, SonyEricsson[38] referred to articles about how to create games for their Xperia PLAY[39] Smartphone using Unity3D in their official blog[26]. These articles describe how to use the Xperia PLAY GamePad in Unity. Further, SonyEricsson released several white papers and articles about developing games for their Xperia PLAY devices and demonstrated how to optimize the programming code [27].

1.6    Relevance of Android for Augmented Reality Mobile Development

According to [28], AR is one of the top ten disruptive technologies for 2008 to 2012. Although hand-held devices are limited in processing and memory capabilities, they realistically have the potential to be suitable for mass-market AR applications according to [2].

Thus, Android based devices seem to be a good choice for developing AR applications as they most certainly feature all hardware required for AR applications, like cameras, GPS, sensors and processing power. Furthermore, Android could become one of the most widely used mobile operating systems within the next few years. With a worldwide market share of between 38.5% and 48% among smartphones, Android devices had already deeply penetrated the smartphone market in 2011. What is more, Gartner predicts that Android's market share will increase to 50% in 2015 [29; 30]. Figure 8 shows the market share of mobile OS in 2015 as predicted by Gartner:

Figure 8 - Mobile OS Market Share in 2015 according to Gartner

And, while Android is only dominant in the smartphone market at the present time, the OS's market share among tablet computers will increase to around 36% by 2015 according to [31]. On the one hand, tests conducted with the Motorola XOOM[40], which has a 10.1 inch display and weighs 730 g, indicated that large tablets are not quite handy for running LBAR applications, so they might rather be used for stationary VBAR installations. On the other hand, as the XOOM is one of the heaviest tablet computers and there are smaller tablets available, such as the Sony Tablet P[41], or even semi-tablets like the Samsung Galaxy Note[42], new Android based tablets could likely be used to run LBAR applications in the future as well. Figure 9 outlines the Android market share for tablets.

Figure 9 - Android Tablet market share 2011 / 2015 [32]

So far in this thesis, AR has only been considered in the context of the mass consumer market. But, as AR is a technology that is intended to improve data visualization, it is relevant to enterprises as well. In October 2010, [33] showed that the Android market share continued to grow rapidly in the enterprise mobility market sector, although still being outclassed by Apple products. Figure 10 outlines the mobile OS market share according to Good Technology in 2010.

In the context of enterprises and enterprise application development, Android can likely be integrated well into current business solutions, as Android builds on open standards and interfaces. The operating system (OS) kernel is based on Linux and the source code of most Android versions, except the 3.x versions, is available for free. Thus, the code could be adapted to satisfy specific business requirements if needed.

Figure 10 - Mobile OS market share in enterprises in 2010 [34]

Further, when differentiating between Android and iOS from a software development perspective, the language in which applications are developed is an important factor. Because Java is the main programming language on Android devices, a widely adopted programming language is at every developer's disposal. Android enables developers to reach a broad audience, and Java is currently one of the most popular and best supported programming languages worldwide. Similar to C#, Java is already used in a variety of desktop-based and web-based business applications. Hence, the combination of Java and Android enables hobbyist and professional software developers alike to create and distribute mobile applications in large numbers, without forcing them to invest many resources in adapting source code to different runtimes on end devices. What is more, the great popularity of Java among software developers results in a large reusable code base, which allows the latest technology to be incorporated into applications. The popularity of Java is laid out in Figure 11.

Figure 11 - Tiobe Programming Community Index [35]

Aside from business applications, mobile game development could become more popular in the future. With games gaining a continually growing market share, the mobile game sector will experience the largest growth opportunity [36] in the near future. In the United States, most revenue generated by mobile games can be attributed directly to Android and iOS games. According to [19], the total revenue generated by mobile games in the US in 2011 was shared between Android and iOS (58%), the Nintendo DS (36%), and the Sony PSP (6%). This stands in contrast to only 19% of revenue for Android and iOS games in 2009.

Thus, with a rapid increase in Smartphone market share and the rising popularity of mobile games, starting mobile AR game development now could hold great opportunities [37]. As outlined in chapter 1.2, VBAR has become more popular for developing serious games. As the hardware needed for LBAR is increasingly incorporated into smart phones, LBAR might become more widely used as well.

What is more, creating games that support networking will probably become more common among game developers, as internet access through mobile phones is expected to increase and mobile phones might even overtake PCs as the most common web access device worldwide [38]. Nevertheless, the gaming market might present one of the weaknesses of the Android operating system in comparison with other mobile platforms. While several free-to-use libraries for Android game development exist, many of them are still not stable. A more in-depth report on freely available Android game development frameworks is provided in chapter 2.

1.6    Discussion

Chapter 1 clearly highlights that the Android platform holds great potential for software development companies due to its number of current users and the ease of application distribution. This is true for software targeting businesses and the mass consumer market alike, and according to predicted Smartphone sales figures up to 2015, this potential will keep growing in the near future. Yet, while the Android market is estimated to contain more than 300,000 different apps [39], there are next to no LBAR games to be found. While several VBAR applications have been uploaded to the market since Qualcomm released their VBAR SDK [40], only Parallel Kingdoms can remotely be considered a LBAR game. This leads to the conclusion that most developers either do not think that AR in conjunction with location services improves the overall game experience, or do not think that games using this combination of technologies can currently be implemented in a user-friendly way. The latter argument is dealt with in detail in chapters 2 and 4.

What clearly speaks in favor of building applications for Android is that several game development tools allow creating a game once and then porting the final product to several different platforms out of the box. Thus, developers can build high quality games without limiting their product to a single platform. It should be mentioned that, while building exactly the same game for multiple platforms is not that problematic, incorporating AR into the mobile version of a game could change the game completely.

2      Graphics Library Evaluation

While writing this thesis, it became clear that no official mobile frameworks exist that provide all the functionality needed to develop a LBAR game out of the box on Android. However, most characteristic LBAR features, like determining the current user location or the viewing angle of the device, are already integrated into the OS. Thus, the biggest challenge was to seamlessly display content based on this data. Displaying 3D content in particular forms an integral part of the game prototype. To prove that complex AR games can be built with freely available tools, at least one graphics library needs to be found that supports the basic functionality required to display an AR application. Therefore, four graphics libraries are evaluated and compared, and the best library is used for developing the prototype application. To accomplish this, a utility value analysis has been carried out using the criteria listed below. Every non-mandatory criterion is rated from 0 (worst) to 10 (best) according to the degree to which the requirement is met. All ratings are then multiplied by a factor that reflects the importance of the criterion. All tests date to around the first of July 2011.

2.1    Evaluation Criteria

To determine which graphics library is best suited for developing the prototype application, several evaluation criteria have to be declared. Some of these criteria are mandatory and some are optional. Further, the optional criteria are weighted according to their importance for building a 3D AR application for mobile phones. These criteria are listed below, including an explanation of why they were chosen.

Availability on Android

This diploma thesis focuses on the development of massively multiplayer AR games that are playable on devices running the Android OS. All tests have been carried out using the Samsung Galaxy S I9000 (SGS) and Android 2.2. As a consequence, any evaluated library needs to support Android 2.2 or a newer OS version.

The SGS was chosen as it is an above-average Android Smartphone. At the time of writing this thesis, the SGS is still one of the more advanced Android-based devices on the market, and it has become even more affordable now that its successor is obtainable. What is more, the SGS is a very popular smart phone that sold over 5 million units by October 2010 and can therefore be seen as a serious benchmark for many Smartphones currently in use [41].

Java Based Libraries

Developers can create applications either in Java or in C/C++ using the Native Development Kit (NDK). The majority of application code is usually written in Java, while only specific performance-critical code might be written in C++ using the NDK. While some companies claim that writing native code with the NDK for specific tasks can result in up to four times higher performance when optimizing the application for their respective hardware [42], there is no guarantee that writing native code will result in any performance improvement [43]. In contrast, mixing Java and C++ code possibly increases the complexity of keeping the source code clean and easy to read. Thus, developing applications entirely in the Java programming language is declared a precondition for evaluating a graphics library.

Location Based Augmented Reality Behavior

To develop an AR game, the chosen library must either already implement functionality to show the camera image as a background or at least allow the developer to add this functionality. Furthermore, the view in the 3D scene needs to be alignable with the orientation of the mobile phone, and 3D objects have to be placed correctly depending on the physical location of the Smartphone.
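The placement requirement above can be sketched in a few lines. The following is a minimal, hedged example of converting a GPS position into local scene coordinates so that a 3D object can be placed relative to the player; it uses a simple equirectangular approximation, which is adequate for the short distances typical of LBAR games. All class and method names are illustrative and not part of any evaluated library.

```java
// Sketch: map a GPS position to local scene coordinates relative to an
// origin (e.g. the player's position). Equirectangular approximation;
// sufficient for distances of a few hundred meters.
public final class GeoProjection {
    private static final double EARTH_RADIUS_M = 6_371_000.0;

    /**
     * Returns the offset of (lat, lon) from the origin in meters,
     * index 0 pointing east and index 1 pointing north.
     */
    public static double[] toLocalMeters(double originLat, double originLon,
                                         double lat, double lon) {
        double latRad = Math.toRadians(originLat);
        double dLat = Math.toRadians(lat - originLat);
        double dLon = Math.toRadians(lon - originLon);
        double north = dLat * EARTH_RADIUS_M;
        // Longitude degrees shrink with the cosine of the latitude.
        double east = dLon * EARTH_RADIUS_M * Math.cos(latRad);
        return new double[] { east, north };
    }
}
```

The resulting east/north offsets can then be fed directly into the x/z coordinates of the virtual scene.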

License model

In the course of writing this diploma thesis, only libraries that are open source, or at least available for free, are evaluated. This ensures that the libraries are adaptable if no AR functionality is provided out of the box.

Loading of 3D models

It is mandatory that a graphics library provides functionality to import and display 3D models created with modeling tools like 3DS Max or Blender3D. Although no restriction on the supported file formats is made in advance, libraries that can handle common and open formats like Collada[43] (DAE file format) are favored. This is reflected in the rating accordingly.

Loading of animated 3D models

AR combines digital applications with the real world. Without animated models, most applications would appear very unrealistic and somewhat strange, which stands in contrast to the goals of AR. Therefore, importing and displaying animated models is a mandatory feature.

2D User Interface

User interfaces that overlay the camera image and the 3D scene can be used to show information to the user or to provide interactive elements like buttons or text fields. User interface implementations in graphics libraries are helpful but not mandatory: if a library provides no easy way of displaying 2D data, a developer can still overlay a Canvas object provided by the Android platform to perform any kind of 2D drawing. Drawing on a Canvas surface has a drawback, however. The developer possibly needs to synchronize the 2D drawing actions with other parts of the application, or at least ensure that drawing to the Canvas is performed in the same thread as the drawing of the 3D scene and the user input handling.

Post Processing

During the development of the prototype it became clear that, when overlaying the camera image with 3D models, it can sometimes be difficult to judge how a model appears and is presented on the device display. Especially when picking objects in a scene, the user expects the selected object to be highlighted in some way. Therefore, post processing, or at least overlaying the model with a billboard textured with a glowing image, could increase the usability of the application dramatically. As this results in better usability and a better-looking game, post processing is considered a helpful feature, but it is not mandatory for a library to be selected. Libraries that support shaders for post processing receive the highest ranks, whereas libraries that ease image overlaying and billboarding are still favored over those that implement neither of these features.

Although shaders and their application in the tested frameworks could not be evaluated, usability was enhanced by highlighting selected objects in the game using billboards, as can be seen in Figure 12.

Figure 12 - Object highlighting in the prototype application

Input handling

Input handling is an important topic, as games are usually highly interactive. Android ships with an implementation for accessing all kinds of events, like touch events or key events. These events, however, are fired in a different thread than the main game thread and need to be synchronized. Furthermore, events fired in games are oftentimes more sophisticated than a user simply pressing the touch screen. Although user input is not directly related to graphics presentation, real-time application frameworks oftentimes include abstractions for user input management; thus, a library implementing user input mechanisms is considered a helpful feature.
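The synchronization problem above is commonly solved by enqueuing events from the UI thread into a thread-safe queue that the game loop drains once per frame, so the game logic never sees events concurrently. The following is a minimal sketch of that pattern in plain Java; `TouchEvent` is a hypothetical stand-in for Android's MotionEvent, and all names are illustrative.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;

// Sketch: hand touch events from the UI thread to the game thread.
public final class InputQueue {
    /** Minimal stand-in for an Android MotionEvent (illustrative). */
    public static final class TouchEvent {
        public final float x, y;
        public TouchEvent(float x, float y) { this.x = x; this.y = y; }
    }

    private final ConcurrentLinkedQueue<TouchEvent> pending =
            new ConcurrentLinkedQueue<>();

    /** Called from the Android UI thread, e.g. from onTouchEvent. */
    public void post(TouchEvent e) {
        pending.add(e);
    }

    /** Called once per frame from the game thread; returns events handled. */
    public int drain(Consumer<TouchEvent> handler) {
        int handled = 0;
        for (TouchEvent e; (e = pending.poll()) != null; ) {
            handler.accept(e);
            handled++;
        }
        return handled;
    }
}
```

Because `ConcurrentLinkedQueue` is lock-free, the UI thread is never blocked by the game loop, which matters for keeping the interface responsive.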

Interaction with the Scene

While user input handling forms the basis of interactive applications, this feature alone is not sufficient to provide the kind of interactivity AR applications could deliver to the user. Therefore, the application should provide ways of directly interacting with the scene by selecting 3D objects displayed on the screen. This picking functionality should be implemented by the chosen library, but it is not mandatory as long as it can be implemented by a developer within a short period of time.


Support

Good support is important for evaluating a library within a short period of time and learning how to use it. Most essentially, any kind of online presence where someone can ask questions is appreciated. Besides forums and blogs, interactive demos together with their source code can help in understanding and using the library the way its creators intended. Although this criterion is not marked as mandatory, support from the library's creators or its community is considered necessary to evaluate the required functionality within the limited time of writing this diploma thesis.


Degree of Performance

Besides being able to display content correctly, a library needs to perform all drawing actions around 30 times per second. Once the frame rate drops below 30 frames per second (FPS), animations no longer look smooth and, in AR applications, the camera direction might lag behind the orientation of the mobile device. Especially the latter situation can cause problems, as this lag is clearly noticeable to the user due to the frequent camera movements that occur when someone carries a Smartphone.

To be more specific, the performance of a game can be divided into at least two parts: the rendering of the game scene and the update of the game state. Whereas rendering deals with drawing the 3D objects to the OpenGL surface, updating the game state comprises mathematical and logical operations like calculating AI or physics, moving objects in the scene, processing user input, and so on.

In the most straightforward form, the game state is updated and the OpenGL surface is drawn at the same rate. This may lead to decreased application responsiveness when the rendering computations get heavy: once the frame rate drops to about 20 FPS or lower, input handling and game logic updates cannot respond as fast as the user might expect.

To prevent the game logic from suffering under computationally heavy rendering, logic and rendering can be separated. For example, a game development framework could call its update methods a fixed number of times per second, say between 30 and 60 times, and call the draw methods as fast as possible by default. When the drawing operations exceed the hardware capabilities, the application can adapt its rendering rate as needed.
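The fixed-update scheme just described is usually implemented with a time accumulator: each rendered frame adds its elapsed time, and as many fixed logic steps as that time covers are run before the frame is drawn. The following is a minimal sketch in plain Java; the class and field names are illustrative, not taken from any evaluated library.

```java
// Sketch: fixed-timestep game loop. Logic runs at a constant 30 updates
// per second while rendering happens once per call to frame(), i.e. as
// often as the hardware allows.
public final class FixedStepLoop {
    public static final double STEP_SECONDS = 1.0 / 30.0;

    private double accumulator = 0.0;
    public int updates = 0;   // counts logic steps (updateGameState())
    public int frames = 0;    // counts rendered frames (render())

    /** Call once per rendered frame with the real elapsed time. */
    public void frame(double deltaSeconds) {
        accumulator += deltaSeconds;
        // Run as many fixed logic steps as the elapsed time demands,
        // carrying any remainder over to the next frame.
        while (accumulator >= STEP_SECONDS) {
            updates++;                    // updateGameState() would go here
            accumulator -= STEP_SECONDS;
        }
        frames++;                         // render() runs every frame
    }
}
```

If rendering slows down, `frame()` is simply called with a larger delta and the loop catches up on logic steps, keeping game time independent of the frame rate.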

Frameworks that implement mechanisms for keeping the logic updates constant are slightly favored, but this is not a mandatory criterion, as the logic could be added manually and applications can be built without it anyway.

However, performance is the most crucial non-mandatory criterion, as improving the performance of a library might be more difficult than, for example, writing custom input handling or custom object picking behavior.

Finally, all criteria and their impact on the evaluation are listed in Table 1:



Mandatory criteria: Availability on Android, Java Based Libraries, Location Based AR Behavior, License model, Loading of (animated) 3D models.

Weighted criteria: Built-in AR behavior, Open source license model, Diversity of accepted 3D model types, 2D User Interface, Post Processing, Input handling, Interaction with the Scene.

Table 1 - evaluation criteria

2.2    Library Evaluation

The previous section outlined the criteria to be checked for each evaluated library. This section lists the evaluated libraries, including the final results of all evaluations, and concludes with an explanation of which library was chosen. Only libraries that meet all mandatory criteria are listed.

To provide a useful comparison, a scene consisting of a light source and up to three instances of an animated model is set up with each tested graphics library. The actual number of the animated models may vary depending on the graphical capabilities of the respective library. Furthermore, all additional listed criteria are implemented as far as possible.

2.2.1    JPCT-AE

Java Perspective Correct Texturemapping (JPCT) AE[44] is an Android port of the freely available JPCT graphics library for Java-based desktop and web applications. At the time of writing, JPCT-AE 1.23 was the current release and was the version evaluated.

Besides features such as loading models from 3DS[45], OBJ[46] and MD2[47] files, JPCT-AE supports collision detection, geometry-based picking, vertex lighting and billboarding. Although not tested in the course of this thesis, the library seems to support post processing on Android as well, according to [44]. Finally, the library provides helper functions to project the position of 3D objects to screen coordinates and back. This is helpful for picking purposes, as actually hitting a model on small devices proves to be quite difficult. Thus, when a user wants to pick something, the screen coordinates of all pickable objects are computed, and the application can detect whether the user touched somewhere near a pickable object. Hence, a user does not have to hit an object exactly but only needs to pick near it.
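The "pick near, not on" strategy described above can be sketched without any library-specific code: every pickable object's projected screen position is compared against the touch point, and the closest object within a tolerance radius wins. The projection step itself (3D position to screen coordinates) is assumed to be supplied by the graphics library; all names in this sketch are illustrative.

```java
import java.util.List;

// Sketch: tolerance-based picking on projected screen coordinates.
public final class NearPicker {
    /** A pickable object's id plus its projected screen position. */
    public static final class ScreenObject {
        public final String id;
        public final float x, y;
        public ScreenObject(String id, float x, float y) {
            this.id = id; this.x = x; this.y = y;
        }
    }

    /**
     * Returns the id of the object closest to the touch point, or null
     * if none lies within radiusPx of it.
     */
    public static String pick(List<ScreenObject> objects,
                              float touchX, float touchY, float radiusPx) {
        String best = null;
        float bestDistSq = radiusPx * radiusPx;
        for (ScreenObject o : objects) {
            float dx = o.x - touchX, dy = o.y - touchY;
            float distSq = dx * dx + dy * dy;
            if (distSq <= bestDistSq) {  // "near enough" replaces exact hits
                bestDistSq = distSq;
                best = o.id;
            }
        }
        return best;
    }
}
```

A tolerance radius of roughly a fingertip's width in pixels makes small models on small screens much easier to select.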

Furthermore, the library supports skeletal animation, either via the BONES[48] library or the SkeletalAPI[49] library, as well as keyframe animations. What makes this library really stand out in comparison to the other evaluated libraries is the relatively high performance of animated models. Using MD2 keyframed models, the animated example model could be added and drawn up to 10 times in the scene without the rendering dropping below 30 FPS. After loading and animating three models on the SGS using the BONES library, the application still maintained around 20 FPS. In addition, by using the existing methods for drawing images on top of the scene and a library add-on to display text, a developer can build good-looking user interfaces and still maintain a high frame rate [45].

What decreases the value of this library is the fact that, at the time of writing, no support for custom input handling exists. Therefore, the developer needs to use the native Android event handling system and synchronize it with the application. Furthermore, the library does not provide any functionality to align the virtual scene with the device orientation out of the box. Fortunately, some members of the community provide examples of how to implement this functionality [46].

In contrast to the aforementioned drawbacks, however, the library is well supported through a forum and the documentation is updated frequently. Unfortunately, neither the desktop library nor the Android library ships with many examples. Table 2 concludes with a summary of the JPCT-AE evaluation:


Rated criteria: Degree of Performance, Built-in AR behavior, Open source license model, Diversity of accepted 3D model types, 2D User Interface, Post Processing, Input handling, Interaction with the Scene.

Table 2 - JPCT-AE evaluation

2.2.2    JMonkeyEngine

The JMonkeyEngine[50] (JME) is an open-source, community-centered 3D game engine written in Java. Games created with JME can run as desktop applications, applets, or Android applications. Unlike JPCT, which is a graphics library, JME provides a full-featured application structure that implements, among other things, a unified input handling system, object picking, a mature scene graph API, a jBullet[51] physics engine integration, an integrated graphical user interface (Nifty-GUI) and networking mechanisms. Furthermore, the engine eases the use of shaders, lighting effects, special effects for post processing and 2D filter effects [47].

JME supports loading models from OBJ files and animated models from OgreMesh XML[52] and Collada files. These files can further be converted to a JME binary model format to reduce the file size and possibly improve loading performance. At the moment, using this binary format is the only way to load 3D models on the Android platform.

At the time of writing, the JME team is working on JME version three (JME3), which contains an Android port of the engine, and some simple examples of how to use the engine on the Android platform already exist. Loading animated models, however, caused critical problems. It was not possible to load animated models from Collada files, and importing models from OgreMesh files caused rendering issues. As a consequence, displaying a single model caused partial flickering of the whole scene, and the rendering performance decreased to around 18 FPS. Although the community support is decent, there was not enough time to solve these problems.

Furthermore, the JME port for Android does not provide any functionality to align the scene with the device, but porting the code from the JPCT-AE AR demo application worked fine. Although JME provides a variety of features for desktop and applet applications, the Android port currently provides rather basic functionality and therefore cannot be used to develop the prototype application.

Table 3 concludes with a summary of the JME3 evaluation:


Rated criteria: Degree of Performance, Built-in AR behavior, Open source license model, Diversity of accepted 3D model types, 2D User Interface, Post Processing, Input handling, Interaction with the Scene.

Table 3 - JME3 evaluation

2.2.3    Ardor3D

Ardor3D[53] is an open source 3D game engine written in Java. Having started out as a fork of JME, Ardor3D provides a high-level scene graph API for developing high-performance games and interactive graphics applications.

Ardor3D resembles JME with respect to its features and overall application structure. It ships with, among other things, a custom input handling system, a custom binary format for 3D models, an API for skeletal animation via Collada files, texture generation, collision detection, particle generation, effects, and a GUI system.

Like that of JME, the Android port of Ardor3D can be considered to be in its early stages. Although the library works fine for rendering simple 3D scenes, the performance of animated models is not sufficient to create an extensive game. When rendering a single instance of the custom animated model, the application frame rate drops to around ten FPS on a SGS. What is more, there are no examples of how to port some of the eye-catching effects, like bloom post processing, from the desktop version to Android. Although creating shader-based effects might be possible in Ardor version 0.7, there was not enough time to figure out how to implement them. Developing simple applications that only use static 3D models works without problems, though. Finally, using the device sensors to align the virtual world with the device orientation was achievable via custom code.

Table 4 concludes with a summary of the Ardor3D evaluation:


Rated criteria: Degree of Performance, Built-in AR behavior, Open source license model, Diversity of accepted 3D model types, 2D User Interface, Post Processing, Input handling, Interaction with the Scene.

Table 4 - Ardor3D evaluation

2.2.4    libgdx

libgdx[54] is an open source framework that lets developers create 2D and 3D real-time applications in Java, which can be deployed as desktop applications, Android applications, or applets. Nearly the same source code can be used for both desktop and Android-based devices; hence, the main goal of the library is that, with just a few small code changes, applications can run on all platforms [48]. Unlike the aforementioned engines and frameworks, the libgdx project does not intend to abstract graphics application development behind scene graph APIs or the like, but provides several helper classes to ease working with OpenGL ES 1.0, 1.1 and 2.0.

The library ships with importers for 3D models in the OBJ format and animated models in the MD5 and OgreXML formats, as well as helper classes for keyframe and skeletal animations. Tests on the SGS showed that three instances of one animated model can be displayed at once at around 20 to 24 FPS. However, it was not possible to pick any of these models correctly in a demo application.

Apart from helper classes for 3D application development, libgdx ships with classes for drawing bitmap fonts, sprite rendering, computing 2D particle systems, a library for CPU-based bitmap manipulation, input and sound handling, and a 2D scene graph including a tweening framework.

For a developer with no prior knowledge of OpenGL ES, using this framework is not as straightforward as using other libraries like JPCT-AE. However, the library ships with enough helper classes to build games for the Android platform quickly and simply. No evaluation of the shader implementation could be done due to time limitations, although the developers state in their project description that shaders can be used [48].

Just like the other evaluated frameworks, libgdx has no default implementation for aligning the virtual camera with the device orientation, but the application could be extended via custom code.

Table 5 concludes with a summary of the libgdx evaluation:


Rated criteria: Degree of Performance, Built-in AR behavior, Open source license model, Diversity of accepted 3D model types, 2D User Interface, Post Processing, Input handling, Interaction with the Scene.

Table 5 - libgdx evaluation

2.2.5    Evaluation results

The final results of all evaluated libraries are ranked in Table 6:

Table 6 - Ranked final evaluation results

JME3 and Ardor3D are both extensive and well-supported frameworks for building large-scale desktop and browser-based games. Although community support for both libraries is great, their Android ports are hardly documented and look rather basic at present.

In contrast, JPCT-AE and libgdx follow a more basic approach to developing real-time applications. JPCT-AE is mainly a graphics library that does not force the developer to follow any coding standard when building an application, whereas the focus of libgdx lies on easing work with OpenGL ES by providing helper classes for common tasks in building real-time applications.

As a result, JPCT-AE and libgdx generally seem to work better on the Android platform than the current ports of JME and Ardor3D. Although the former libraries are not as feature-rich as the latter, most of their features work reliably on Android 2.2 and above. What is more, JPCT-AE and libgdx currently deliver better overall performance in displaying and animating 3D models.

Due to the way libgdx is designed, all of the provided code examples should work on Android, putting it ahead of the other libraries. However, problems occurred when trying to pick animated models, which could not be solved in time. In addition, working with libgdx could be more tedious and less productive than working with the other evaluated libraries if the developer does not have extensive knowledge of OpenGL ES.

In contrast to libgdx, JPCT-AE provides simple ways of building, animating and manipulating 3D scenes without requiring any knowledge of the Android interfaces to OpenGL ES. The overall performance seems sufficient for building the AR prototype, and the library provides everything needed to create full-featured real-time graphics applications on the Android platform. Therefore, the prototype application is built using JPCT-AE.

2.3    Discussion

Although several free Java-based libraries are available for developing 3D real-time applications (RTA) on Android, many of them are still prototypes. Half of the libraries evaluated in the course of this thesis implemented only basic functionality or even lacked important features.

Furthermore, comparing all previously listed frameworks reveals a lack of common formats for importing 3D models. The formats available across all libraries are MD2, MD5, 3DS, DAE, OgreXML and OBJ, with DAE being the best supported format for animated models. As most of these formats are either open or at least well supported, this did not cause many problems, though.

In contrast, performance caused the greatest divergence among the libraries. Whereas models in desktop applications can have 10,000 polygons and sometimes even more [49], the Flatman model developed for the prototype application, which was used for all performance comparisons, had a poly count of merely 280. Although the zombie model used in Half-Life in 1998 had more than three times as many polygons, only JPCT-AE and libgdx could render the Flatman model at more than 30 FPS on a modern Smartphone. Further, JPCT-AE was the only library that could render ten models at once while still maintaining a frame rate sufficient for a game to be playable. Fortunately, while it is impossible to render and animate high-poly models from current computer games, there will probably be no need to do this on mobile phones, as the differences to low-poly models might not be noticeable on small screens.

To further test the display capabilities of JPCT-AE, three different models were used in the prototype application. Two models were created for this purpose and one was downloaded from TurboSquid[55]. The first one is the Flatman model, which has a poly count of 280 and does not use textures; this model was mainly created for performance comparison. The second is the dino model, which has a poly count of 1038 and uses a 512x512 pixel texture downloaded from bildburg[56]. Finally, the swordsman model is a low-poly model used in the open-source game Glest[57]. It has a poly count of 532 and is mapped with a 512x512 texture containing the images for the body, the sword and the shield. All animation states of the last two models were displayed and animated in an AR application while still maintaining 40 FPS. Therefore, it can be said with a measure of certainty that displaying and animating good quality low-poly models is possible in 3D games created with JPCT-AE. All models are displayed in Figure 13:

In contrast to the performance downsides of most libraries, cross-platform development can be seen as a huge advantage of creating Android-based RTAs. Every evaluated library but libgdx is a port of a desktop game library. Thus, developing applications that run on Windows, Linux, Solaris, Mac OS X, on Android devices and in browsers via applets is simplified. Further, libgdx was even designed to make cross-platform development as simple as possible, enabling the developer to write the same code for any device. As a result, a developer can first create and test applications on the desktop and then deploy them to Android by changing only a few lines of code. Unfortunately, no library but libgdx implements all of its features on Android yet. This means that developers either build simplified applications for both device classes or strip advanced features out of the Android version.

Figure 13 - Models used in the prototype application

In the end, the previously mentioned Java-based libraries still need to be improved and fixed before they can be considered for developing professional games. None of the libraries scored anywhere near 100 percent in the framework evaluation, although the evaluation only targeted rather basic functionality for creating a prototype application. JPCT-AE seems best suited for creating professional 3D games, while libgdx might still be the first choice for creating 2D games or at least 3D games that do not use animated models. Although the main developer of libgdx uploaded a video in which several animated MD2 models were rendered at 30 FPS on a Motorola Milestone [50], the package for loading MD2 models, which is available as a library extension, did not seem to be runnable with the latest version of libgdx and therefore could not be tested and evaluated.

3      Game Prototype

A game prototype called AR Legends is developed as part of this thesis to prove that MMMGs using LBAR can be created and used on currently available above-average smartphones. Although the prototype is rudimentarily tested on the Motorola XOOM tablet, running the application on Android 3.x+ devices is not mandatory.

3.1    Application Requirements

Several requirements need to be met to successfully demonstrate that the AR prototype application works as expected and that it can be used by non-technical people. First and foremost, the application needs to run on the most popular Android-based smartphones. Thus, no device-specific APIs or special hardware that is not built into all phones, such as a gyroscope, must be required to run the prototype. While the application is mainly tested with an SGS running Android 2.3.3, all phones that run at least Android 2.1 Update 1 and feature an Adreno 205[58] or PowerVR SGX540[59] GPU and a 1 GHz processor should not be limited by processing power in any way. It cannot be guaranteed that less powerful devices can maintain a frame rate of 30 FPS, though. The Adreno GPU, for example, is built into most of Sony Ericsson's smartphones released in 2011, such as the Xperia Arc S and the Xperia PLAY. Furthermore, while the application is installed on a XOOM for testing purposes as well, it is not intended to be used on tablets or any other device with an HD resolution or higher. Finally, although the prototype application is not intended to be distributed to consumers, it should be designed so that it can easily be distributed via the default Android market without any further configuration steps. As a consequence, no root access to the system is required.

Another fundamental requirement is the capability to display 2D and 3D content in real time. All content that is part of the game needs to be visualized and interactive. While displaying 3D and 2D content forms the basis of the prototype application, incorporating the device orientation and the GPS coordinates is required to build an LBAR application. Further details on these topics are given in chapter 4.

As the prototype is a massively multiplayer game, network management and data synchronization across multiple local game sessions form the most complex part of the prototype. The application is a location-based real-time game and, thus, a lot of data needs to be synchronized almost in real time between participants that are near each other. Therefore, as data could be shared at any given time, opening and maintaining persistent network connections is required.

Basically, these network connections could either be formed by establishing ad-hoc connections directly between client devices or by maintaining persistent connections to central game servers. Ad-hoc connections are quite common for mobile games, as interfaces like Bluetooth, WLAN, NFC, or infrared are often built into mobile phones. At first, ad-hoc connections appear to be the obvious choice, as connections mainly need to be established when participants are near each other. However, centralized game architectures have other advantages. Information can always be sent to the server and, therefore, data can possibly be exchanged between all participants at any given time. What is more, all data traffic can be regulated and filtered by servers that are controlled by the application developers. In the context of mobile multiplayer games, the major drawback of a centralized game architecture is a higher delay. In the end, a combination of a centralized architecture and ad-hoc networks would possibly be the best choice, but for the sake of simplicity, only a centralized game architecture is chosen.

Finally, the last requirement for the prototype is an administration interface that can be used to easily place game-specific content in the virtual scene. This functionality cannot reasonably be integrated into the smartphone application and, hence, the development of a web-based administration tool is required. Thus, the whole game scene can be administrated from any computer using a modern browser. This administration tool should be as simple to use as the prototype application, as it should be usable by non-technical people and by people that were not involved in the development of the prototype.

3.2    Game Basics

The player assumes the role of a hero who explores an augmented version of reality by fighting creatures and finding new equipment. The player moves in the real world using a smartphone. Enemies or chests are blended on top of the camera image, depending on the orientation of the smartphone and the geographical location of the player.

First, a user needs to register and create an account before playing the game. From that point on, all actions a participant takes while logged in to the game are recorded and monitored on the server. Once the game starts, players can either play alone or in a group with other people.

The game uses the real world – in the test scenario, the game is limited to Graz (Austria) – and divides it into small areas. Players can only see and interact with enemies that are in the same area they are currently in. Figure 14 shows an example of areas and assigned creatures in the admin user interface.


Figure 14 – Areas of Interest and Creatures in Admin UI

Areas serve multiple purposes in the game:

Firstly, areas can form the organizational basis of distributed load balancing for game servers. These areas are referred to as Central Server Areas (CSAs). In large MMOGs, the game world is divided into multiple regions which themselves are assigned to one or many servers that handle all game procedures happening within these regions. Thus, areas in the prototype might be usable as natural units comparable to MMOG regions.

There are two types of areas in the game:

On the one hand, areas of interest (AOIs) cover places in which a player interacts with enemies. A player cannot see enemies that are located in these areas unless he or she has entered the area. Interactions between players can only happen if all of them stay within the same AOI. Once a player leaves such an area, creatures stop fighting him or her and go back to their start positions or fight other players that are still in reach. As most of the critical game actions – like fighting – happen within these areas, fast data synchronization between players and the servers is of great concern. Unlike CSAs, AOIs can be widespread and do not need to border each other. Therefore, it is possible that players who are logged in are not located in any specific AOI while traveling the world. Further, as an AOI can be assigned to a single CSA only, it must not overlap multiple CSAs.
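The AOI membership check described above can be sketched with a standard point-in-polygon (ray-casting) test. The following Java sketch is illustrative only; the class and method names are assumptions and do not stem from the prototype source:

```java
// Hypothetical sketch of an AOI membership test using the ray-casting
// (even-odd) algorithm. A horizontal ray is cast from the player's
// position; an odd number of edge crossings means the point is inside.
class AoiUtils {

    /**
     * Returns true if the point (lat, lon) lies inside the polygon given
     * by parallel arrays of vertex latitudes and longitudes.
     */
    public static boolean contains(double[] polyLat, double[] polyLon,
                                   double lat, double lon) {
        boolean inside = false;
        int n = polyLat.length;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            // Toggle 'inside' for every polygon edge the ray crosses.
            boolean crosses = (polyLat[i] > lat) != (polyLat[j] > lat)
                && lon < (polyLon[j] - polyLon[i]) * (lat - polyLat[i])
                          / (polyLat[j] - polyLat[i]) + polyLon[i];
            if (crosses) {
                inside = !inside;
            }
        }
        return inside;
    }
}
```

On the server, such a test could be run against the current GPS position of each player to decide which AOI, if any, the player is in.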

On the other hand, CSAs cover the whole playing surface of the game and are each directly linked to one central game server. Each CSA can cover several AOIs, though. Every joined player is connected to the CSA that covers his or her current location. In addition, every CSA is linked to its surrounding CSAs. Once a player is about to leave his or her CSA, the currently active CSA server checks the direction the player is heading in and sends the physical address of the next area's server. Figure 15 shows several CSAs and AOIs in the administration tool.

Figure 15 - CSAs containing several AOIs

Secondly, as the application does not know about the real environment, AOIs can be used to take natural limitations of sight into account. By default, the application would display creatures even if they were behind a building. By using AOIs, these problems can be mitigated. This principle is further outlined in Figure 16.

In the picture, the current player is displayed as the green figure. It can be seen that without AOIs, the application only knows the distance and direction to nearby enemies. It cannot determine on its own, however, whether there are any physical barriers that would prevent the user from actually seeing these objects. To counter this problem, only objects that are placed within the same AOI as the user are displayed. As Figure 16 suggests, even if objects are within the same AOI, it is not certain whether they would be hidden if they were real. Therefore, virtual lines need to be computed that range from the current position to each object placed in the current AOI. If such a line crosses the border of the polygon-shaped AOI, the corresponding object is considered to be out of sight.
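The described visibility test can be sketched as a segment-polygon intersection check: an object in the player's AOI is treated as hidden as soon as the straight line between player and object crosses any AOI edge. The following Java sketch uses standard orientation tests; all names are hypothetical, not taken from the prototype source:

```java
// Illustrative line-of-sight test: an object is visible only if the
// segment from the player to the object does not properly cross any
// edge of the polygon-shaped AOI.
class LineOfSight {

    // Sign of the cross product (b - a) x (c - a): >0 left turn, <0 right turn.
    private static double cross(double ax, double ay, double bx, double by,
                                double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    // True if segments p1-p2 and p3-p4 properly intersect.
    private static boolean segmentsIntersect(double[] p1, double[] p2,
                                             double[] p3, double[] p4) {
        double d1 = cross(p3[0], p3[1], p4[0], p4[1], p1[0], p1[1]);
        double d2 = cross(p3[0], p3[1], p4[0], p4[1], p2[0], p2[1]);
        double d3 = cross(p1[0], p1[1], p2[0], p2[1], p3[0], p3[1]);
        double d4 = cross(p1[0], p1[1], p2[0], p2[1], p4[0], p4[1]);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

    /** True if 'object' is visible from 'player' inside the AOI polygon. */
    public static boolean isVisible(double[][] aoiPolygon,
                                    double[] player, double[] object) {
        int n = aoiPolygon.length;
        for (int i = 0; i < n; i++) {
            double[] edgeStart = aoiPolygon[i];
            double[] edgeEnd = aoiPolygon[(i + 1) % n];
            if (segmentsIntersect(player, object, edgeStart, edgeEnd)) {
                return false; // sight line leaves the AOI: treat as hidden
            }
        }
        return true;
    }
}
```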

Thirdly, areas work as a performance optimization tool on the client devices. Creatures that are outside of the current AOI cannot be interacted with and are not displayed at all. This way, the game administrator can ensure that not too many objects need to be rendered and computed at once. Viewing objects that are located in other areas is only possible using the spyglass mode in the game. This mode lets the user zoom the device camera image and the 3D scene, but does not allow any further interaction. There are other solutions for reducing the number of visible objects, however. The simplest approach, for example, would be to only show virtual objects that are within a certain distance of the current location. While this approach would be easier to implement and maintain, it would not fit the game design as well.

Above all, the main aspect of the game is to play in a party of up to five people. The virtual champions need to use their skills to maximize the strength of the group. Hence, every member of a party should take one or several particular roles by focusing on some of the skills that are available in the game. While the game is focused on playing with a predefined set of people, a player is always free to leave or join a group as long as no game limitations (e.g. a maximum of five people per group) are exceeded. Although in some situations groups might join together to fight extraordinarily strong enemies, direct interaction with other players is only possible within the same group.

In this respect, interactions with team members mainly occur when a player heals or improves avatar statistics – like defense or attack damage – of one or more party members. Although players need to stay within a certain range to be members of a particular party, the exact position of the party members is not taken into account in the game, as the GPS system is not accurate enough anyway.

Therefore, players are always connected to a global server, referred to as the Central Account Server (CAS), to which they logged in. Via this server, players can find and chat with other players and form parties. As long as a player is in a specific area, he or she is additionally connected to the respective CSA server.

Every user-performed action is sent to and evaluated on this CSA server. If necessary, the area server refuses or accepts the action and informs other players in the area. These actions are implemented as JSON-formatted[60] text messages that include the sender id, the action name, and, if needed, a target id and additional arguments. The server or other targets interpret the action name and start their own logic accordingly. This can be seen as an action-based networking mechanism in which the current state of the application on a device is never transmitted. To ensure that the program runs the same way on the clients and on the server, the application logic is abstracted and device-specific code is implemented by subclassing. While this methodology assures that the application logic can run the same way on different back ends, it cannot guarantee that messages arrive in time or in the right order. For this reason, as described in chapter 4, state-maintaining mechanisms are implemented to assure a consistent game state across all players that are connected to an area server.
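A message of this kind could, for example, be built as follows. The field names (senderId, action, targetId, args) are assumptions for illustration; the prototype may use different keys, and a real implementation would use a JSON library with proper string escaping:

```java
// Hedged sketch of an action-based network message that serializes to a
// JSON text message by hand (no external library, no escaping).
class ActionMessage {
    public final int senderId;
    public final String action;
    public final Integer targetId;   // optional target of the action
    public final String[] args;      // optional additional arguments

    public ActionMessage(int senderId, String action, Integer targetId, String... args) {
        this.senderId = senderId;
        this.action = action;
        this.targetId = targetId;
        this.args = args;
    }

    /** Serializes the action to a JSON-formatted text message. */
    public String toJson() {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"senderId\":").append(senderId)
          .append(",\"action\":\"").append(action).append('"');
        if (targetId != null) {
            sb.append(",\"targetId\":").append(targetId);
        }
        if (args.length > 0) {
            sb.append(",\"args\":[");
            for (int i = 0; i < args.length; i++) {
                if (i > 0) sb.append(',');
                sb.append('"').append(args[i]).append('"');
            }
            sb.append(']');
        }
        return sb.append('}').toString();
    }
}
```

The receiver would dispatch on the action name and run the corresponding game logic, without ever transmitting full application state.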

3.3    The game concept

Once a user has registered for an account, he or she can simply log in via the start menu of the game. After logging in to the CAS, the user is delegated to the CSA server that is responsible for the area the user is currently located in.

3.3.1    General Information

At this point, users do not take part in the game yet. They are informed whether there are enemies nearby and from which direction they are approaching.

If players think that starting the game now is too dangerous, they can keep walking until no enemies are nearby and then safely join the game. This act of starting the game is referred to as spawning into the game world.

Furthermore, some areas are marked as restricted from spawning. Thus, they can only be accessed when the user spawns in an adjacent area and crosses into the restricted one. Finally, users can only spawn in places that are either marked as an AOI or are dedicated spawning points.

After a user has logged in, he or she can join a single party of up to five people. This is possible before or after spawning. Playing together with other players is an important part of the game, as most advanced tasks cannot be solved by a single person. Thus, players either solve simpler tasks on their own or join other people. As long as players stay together, they remain part of the group. However, a player can leave a group whenever he or she wants. Further, players that log out of the game are removed from their current group immediately. Once a player has joined the augmented world, be it alone or in a group, he or she can walk around and explore the world.

Furthermore, thought is also given to a concept of predefined tasks that users can solve. These so-called quests are a common concept in role-based games. In the application, stories could be written that can be experienced by all players. Usually, they present a set of one or more tasks that players can solve in exchange for new items. While it is most likely that this would improve the user experience of the game, it is not necessary for proving that complex LBAR games can be built. Building a quest-based application module was too time-consuming and therefore was not finished. At the present time, only parts of the admin user interface are accessible where administrators can create new quests that are linked to geographic locations. Furthermore, items can be dragged into the quest reward section, as can be seen in Figure 17.

Figure 17 - Quest Administration Interface

3.3.2    Game States

The game is organized into different game states that limit visualization and interaction possibilities for specific tasks. In every game state, the camera image is shown on the display. As in most games, further information is displayed in real time according to the current game state.

As the application is developed for the Android platform, the state transitions are, at least partially, adapted to platform-specific application guidelines. In this context, buttons that are available on all Android devices need to be considered in the implementation of the state transition handling. Hence, the back button is commonly used to navigate back from a sub state to the application's main state. Furthermore, the Android home button can always be used to terminate the current game session. Figure 18 gives a short overview of all game states and the transitions between them. This chapter then continues with outlining all game states in detail.
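The transition rules described above can be sketched as a small state machine. The state names follow this chapter; the transition logic shown is a simplified assumption for illustration, not the prototype's actual implementation:

```java
// Hedged sketch of the game state handling: the Android back button
// always returns from a sub state to the main game state.
class GameStateMachine {

    public enum State { LOGIN, SPAWN, MAIN, INVENTORY, BINOCULARS, GROUP }

    private State current = State.LOGIN;

    public State current() { return current; }

    /** Forward transitions triggered by game events or the game menu. */
    public void enter(State next) { current = next; }

    /** Android back button: sub states fall back to the main game state. */
    public void onBackPressed() {
        switch (current) {
            case INVENTORY:
            case BINOCULARS:
            case GROUP:
                current = State.MAIN;
                break;
            default:
                // LOGIN/SPAWN/MAIN: back would leave the game (not modeled here).
                break;
        }
    }
}
```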

Figure 18 - Diagram of all game states    Login Game State

This is the first game state that players encounter when they start the game for the first time. Here, a dialog is shown in which users can enter their username and password to log in, as can be seen in Figure 19.

Figure 19 - Login Game Screen

If users do not own an account yet, they can register here. Furthermore, users can choose to be logged in automatically when starting the game. If they do not choose this option, they need to provide their username and password every time they start the game.    Spawn Game State

After a player has logged in to the game, he or she is informed about the approximate number and distance of nearby creatures. As highlighted in Figure 20, nearby creatures are displayed as glowing points on top of a radar image. If the user is located in an area where spawning is allowed, he or she can join the game by tapping on the screen.

Figure 20 - Spawn Game Screen    Main Game State

The application is set to the main game state once a player spawns in the game world. While being in this application state, a user can see and interact with all digital objects that are located in his or her current AOI. Two buttons are located at the lower end of the screen to either attack the currently selected creature or use a skill. Further, the user can change the skill by pressing the skill button for more than half a second. This way, the user is presented with a list of all available skills he or she can choose from. This is also the central application state from which the user can switch to any other state via the game menu. Figure 21 shows a scene from the game in which the user faces two enemies.

Figure 21 - Main Game Screen    Inventory Game State

With every killed creature, there is a chance that an item spawns at the location of its defeat. Players can then pick these items up and organize them while being in the inventory game state. This is shown in Figure 22.

A player can only switch to this state when he or she is either in no AOI or in a special area dedicated to changing equipment. In this state, the virtual character can be equipped with items that fit into specific slots linked to body parts (like left hand, chest, head etc.). Changes made in this game state are immediately propagated to both the CAS and the current CSA server.

Figure 22 - Inventory Game Screen    Binoculars Game State

As long as the game is in the binoculars mode, all digital objects which are located within the current CSA region are visible (across all AOIs), but the user cannot interact with them. In this mode, the display can be zoomed to view objects that are further away. As this mode imitates binoculars, only a small part of the screen is visible. This behavior is displayed in Figure 23.

Figure 23 - Binoculars Game Screen    Group Management and Messaging Game State

In this mode, players can form parties, leave parties, and chat with each other. Every action they take is synchronized with the CAS and, depending on the action, with the respective other players. Figure 24 outlines the client's interface.

Figure 24 - Group Management Game Screen

3.4    Game Administration and Area Handling

A web-based administration tool is developed in conjunction with the prototype application to create and maintain the virtual game world. The tool is based on Google Maps[61] and enables the administrator to place and maintain CSAs, AOIs, creatures, and treasures. Figure 25 shows an example of three CSAs that cover the surroundings of the city of Graz. In the picture, a selected CSA is highlighted in green and is adjacent to two other CSAs that are highlighted in blue.

On the one hand, as a player always needs to be connected to a CSA server, there is no free geographical space between adjacent CSAs. On the other hand, AOIs do not need to be placed adjacent to each other, as players that are not in an AOI are still tracked and evaluated by the CAS.

In Figure 25, Graz is divided into two regions (the green and the right blue CSA) and aligned with a third CSA that covers parts of the city's surroundings. Therefore, two servers handle the requests and game states for all players located in the inner city, and a third one is responsible for players located outside of Graz.

Figure 25 - Selected CSA with two adjacent CSAs around Graz

Besides managing the virtual world, the admin interface offers basic functionality to debug the game. As the game network is based on HTML5 WebSocket connections, it is possible to join a game via the admin interface. At the current time, there exists no comprehensive user interface to simulate user actions, but by using the JavaScript console provided by modern browsers, an administrator can send and evaluate game actions.

However, web browsers could be used to create another kind of game client that uses an alternative way of determining the location and viewing angle of a player, just like in common 3D games.

4      Implementation

So far, economic aspects, prerequisites of LBAR applications, as well as the prototype application have been discussed throughout this master's thesis. This chapter concludes the thesis by outlining basic technical principles that are necessary to create a massively multiplayer LBAR game. What is more, workarounds for current limitations are addressed.

The chapter starts by discussing the LBAR implementation and its restrictions. It then continues with an overview of the implementation of data exchange mechanisms between devices and central servers by outlining WebSockets, an HTML5-related specification for establishing persistent network connections. Finally, components that are needed to maintain a consistent virtual world between players located near each other are reviewed. Together, these three topics present the basics of a working LBAR multiplayer game.

4.1    Device alignment in the virtual scene and the camera display

To display 3D objects on top of the camera image correctly, the viewport in the virtual scene needs to be aligned with the device's orientation and its geographical position in the real world. Computing the device orientation and using it to align the viewport in the virtual scene is implemented first. The reason for this is that the correctness of this alignment can easily be tested together with the graphics library.

The best supported way of implementing this functionality is to combine accelerometer and magnetic field sensor (MFS) data. An advantage of this approach is that most Android-based devices feature both sensors. Fortunately, the Android API has contained helper methods to create a rotation matrix from accelerometer and MFS data since Android 1.5 and exposes the essential methods through the SensorManager API [51]. Here, developers can find the method getRotationMatrix, which needs raw acceleration and magnetic field data to compute a rotation matrix. Figure 26 shows the coordinate system of the matrix and how this can be translated into 3D space. This coordinate system is exactly the same as the one used in OpenGL ES. Hence, when using raw OpenGL ES, the application can measure the rotation of the device on all three axes with this approach.

Figure 26 - Coordinate System of the Device Orientation matrix [52]

Figure 27 - Coordinate System of JPCT-AE [53]

As Figure 27 indicates, the coordinate system used in JPCT-AE is not identical to the one computed by the SensorManager. As this may be the case for a wide range of graphics libraries, the SensorManager provides the method remapCoordinateSystem. This way, rotation matrices can be transformed between different coordinate systems.

Further, some phones feature gyroscope sensors in addition to accelerometers and MFSs. While an accelerometer measures all acceleration on a particular axis, including the acceleration of gravity, measurements by a gyroscope are not biased by gravity [54]. Whereas accelerometers in conjunction with MFSs are more precise at measuring the motion of an object in the long term, despite including some noise, gyroscopes are better at measuring this data in the short term [55]. When used in mobile devices, accelerometer data will most probably include jitter due to the trembling of someone's hands. On the other hand, accelerometers can additionally measure movement in space. Thus, by combining accelerometer and gyroscope data, complex human motion can be tracked very precisely, for both device rotation and device movement in space [56].

Either way, reading sensor data is a simple task in Android. Developers only need to register a listener for sensor events via the SensorManager API; its registerListener method can be used to listen for any kind of sensor data. Additionally, when registering for sensor updates, it is possible to define the rate at which the state of the sensor is checked. A developer can either choose one of four system-defined sampling rates or specify the delay between the events. In the prototype application, the system-defined sampling rate SensorManager.SENSOR_DELAY_FASTEST has been chosen. Unfortunately, when choosing a high sampling rate, noise in the sensor data caused frequent jerking movements in the virtual scene. In contrast, checking the sensors at larger intervals does not result in a seamless camera movement. Thus, after several test runs, the average values of the ten latest view matrices are combined to align the virtual viewport. This results in a smooth camera movement without delaying the viewport alignment for too long. Most of this computation is done immediately after receiving new sensor data. However, as listening to the sensors is performed in a separate thread, the data evaluation needs to be synchronized with the main application loop before the data can be used to control the viewport of the virtual scene.
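The described averaging of the ten latest matrices can be sketched as a simple ring buffer. Note that element-wise averaging of rotation matrices is only an approximation (the result is not strictly a rotation matrix), but it reflects the smoothing behavior described above; all names are illustrative and not taken from the prototype source:

```java
// Hedged sketch: keep the N most recent 4x4 view matrices (as flat
// float[16] arrays, as delivered by Android's SensorManager) and align
// the viewport with their element-wise running average.
class MatrixSmoother {
    private final float[][] history;
    private int index = 0;   // next slot to overwrite
    private int filled = 0;  // number of valid entries so far

    public MatrixSmoother(int windowSize) {
        history = new float[windowSize][];
    }

    /** Stores the latest matrix and returns the average over the window. */
    public float[] push(float[] matrix) {
        history[index] = matrix.clone();
        index = (index + 1) % history.length;
        if (filled < history.length) filled++;

        float[] avg = new float[16];
        for (int m = 0; m < filled; m++) {
            for (int i = 0; i < 16; i++) {
                avg[i] += history[m][i];
            }
        }
        for (int i = 0; i < 16; i++) {
            avg[i] /= filled;
        }
        return avg;
    }
}
```

In the prototype, a window size of ten corresponds to the ten latest view matrices mentioned above.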

In the process of developing the prototype application, only the accelerometer and the MFS were used, as the SGS does not ship with a gyroscope. When developing serious LBAR applications for a broad audience, including gyroscope data wherever possible would benefit people that own high-end devices by detecting short-term movements more precisely and would therefore possibly increase the game's usability. This should not provide any advantages in the game, though.

So far, only the synchronization of the device orientation with the virtual viewport has been discussed. In the game prototype, all digital objects that are blended over the camera image are associated with a geographical location. Thus, these objects are displayed correctly only when the current geographical location is taken into account in conjunction with the synchronized viewport.

One approach to approximating the current location is to use multilateration of radio signals between the provider's radio towers and the mobile phone. In this case, the time difference of arrival of signals emitted from the mobile phone to three – commonly referred to as mast triangulation – or more radio towers is computed [57]. This approach consumes less power than using GPS and starts up faster. The principle is highlighted in Figure 28.

Figure 28 - Mast triangulation between three radio towers [58]

Due to its reduced accuracy, multilateration should only be used to enhance GPS startup performance; it cannot replace GPS. Therefore, the easiest way to obtain the current location of the device is to use GPS. In contrast to mast triangulation, GPS needs to be in range of at least three satellites in orbit to calculate the current location. Although being more accurate than mast triangulation, the GPS location data can fluctuate by one to ten meters even when no obstacles are between the satellites and the receiver. Atmospheric conditions and other signals can further cause inaccuracies of up to 30 meters [59]. What is more, accuracy can suffer even more if fewer than three satellites are available.

To counter this problem, several ways of improving GPS accuracy exist. Differential GPS (DGPS) uses a secondary, stationary GPS receiver to correct the measurements of the first receiver. For this, the geographical location of the stationary GPS receiver has to be known exactly. The stationary GPS receiver calculates the difference between its known position and the data received from the satellites and sends this correction data to the DGPS receiver. Thus, an accuracy of five meters or less can be achieved [60]. If this technique is not accurate enough, real-time kinematics (RTK) can possibly deliver accuracies of several centimeters [61]. It is similar to DGPS in that it uses reference stations for location correction. Although mostly used in agriculture and surveying, [62] worked on a solution for implementing RTK using low-cost GPS and Internet-enabled wireless phones in 2006. Unfortunately, a general implementation of RTK in mobile phones does not seem to be available so far. Further, as implementing any of these aforementioned ways of improving location retrieval would have taken too much time, GPS was used without any enhancements to create the prototype application.
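The DGPS correction principle can be illustrated with a toy sketch: the base station computes the current measurement error from its known position, and the rover subtracts this error from its own raw measurement. All names and values are illustrative; real DGPS operates on satellite pseudoranges rather than final coordinates:

```java
// Toy illustration of the DGPS idea described above, simplified to
// latitude/longitude pairs {lat, lon} in degrees.
class DgpsCorrection {

    /** Error vector = measured position of the base station minus its known position. */
    public static double[] correction(double[] baseMeasured, double[] baseKnown) {
        return new double[]{
            baseMeasured[0] - baseKnown[0],
            baseMeasured[1] - baseKnown[1]
        };
    }

    /** Applies the base-station correction to the rover's raw measurement. */
    public static double[] correct(double[] roverMeasured, double[] correction) {
        return new double[]{
            roverMeasured[0] - correction[0],
            roverMeasured[1] - correction[1]
        };
    }
}
```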

On the Android platform, checking for changes in the geographical location of the device is as straightforward as checking the device orientation. Developers need to retrieve an instance of the LocationManager service and register for updates using the requestLocationUpdates method. Similar to sensor listeners, developers can define the rate at which changes to the geographical location are retrieved. It is possible to define the minimum distance in meters that needs to be moved or the minimum interval between the events in milliseconds. In the application prototype, both values are set to zero to retrieve the GPS coordinates as fast as possible. While the accuracy of the GPS signal oftentimes appeared to be quite sufficient, the measured altitude of the current location did not match the values retrieved from the Google Elevation API[62]. Thus, for testing purposes, a method to calculate the elevation data for the latest GPS position is implemented using REST and the Elevation API. Furthermore, just like other listeners in Android, receiving GPS location changes is done asynchronously and needs to be synchronized with the main application thread.

Once the current location of the mobile device is detected, calculations need to be done to adjust the virtual scene accordingly. The viewport is always positioned at the origin of the 3D coordinate system of the scene, possibly raised slightly from the ground to include the height of the user. As all objects in the 3D scene are associated with a geographical location, each of these objects is then offset by the difference between its location and the device's location. This process is repeated for all scene objects every time the GPS receives a location update. Now, by using the sensor data to align the viewport with the device orientation, the application blends all objects over the camera image correctly.
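The described offset computation can be sketched with an equirectangular approximation, which converts the latitude/longitude difference into metric east/north offsets and is sufficiently accurate for the short distances covered by an AOI. The following Java sketch is an assumption-based illustration, not the prototype's actual code:

```java
// Hedged sketch: convert the geographic difference between the device
// and a scene object into meter offsets that can be used to place the
// object relative to the viewport at the scene origin.
class GeoOffset {
    private static final double EARTH_RADIUS_M = 6371000.0;

    /**
     * Returns {east, north} offsets in meters of 'object' relative to
     * 'device'; both coordinates are {latitude, longitude} in degrees.
     */
    public static double[] offsetMeters(double[] device, double[] object) {
        double dLat = Math.toRadians(object[0] - device[0]);
        double dLon = Math.toRadians(object[1] - device[1]);
        double meanLat = Math.toRadians((object[0] + device[0]) / 2.0);
        // Longitude degrees shrink with the cosine of the latitude.
        double east = dLon * Math.cos(meanLat) * EARTH_RADIUS_M;
        double north = dLat * EARTH_RADIUS_M;
        return new double[]{east, north};
    }
}
```

The resulting east/north values would map directly onto two axes of the scene's coordinate system, with the viewport fixed at the origin.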

So far, aligning the virtual viewport with the camera orientation and positioning the 3D models based on the current physical location has been described. To completely implement LBAR, the application needs to display the camera image behind the virtual scene. To accomplish this task, the Android platform provides a reference to the camera manager object. In addition, the platform provides SurfaceView user interface objects that can directly be placed in Android applications. These objects can further register callbacks for drawing the camera image to the screen frequently. Although performing these steps is quite straightforward, complications occur when placing the virtual scene on top of the camera image. Whenever a 3D scene is drawn to the screen, the background first needs to be cleared, which is usually accomplished by using a solid color. Hence, the camera image is not displayed. The only workaround to this problem is to clear the scene with a transparent color. Unfortunately, configuring the OpenGL ES layer to allow transparent backgrounds did not work as simply as expected, and the actual source code for implementing this feature is not entirely the same for all tested graphics libraries. As a consequence, making the OpenGL ES layer transparent is an error-prone task that becomes more cumbersome when implementing it in different libraries.

Above all, the inaccuracy of the GPS in determining the current location was the largest problem when building the prototype application. Initially, it was expected that the positions of all other players in the area would be known accurately enough that a player someone wants to interact with – to heal them, for example – could be selected directly by tapping on them in the camera image. Unfortunately, this was not possible to implement, as both the own geographical location and the location of the other player can differ by up to several meters from the actual locations. Instead, players are selected via images displayed on the screen that act as buttons, which is why the party membership system was implemented in the game. Because mobile devices offer limited screen dimensions, the party size was limited to a maximum of five members so that every member can be displayed even on small screens without the need for scrolling.

4.2    Data exchange between mobile devices and servers

Chapter 4.1 explained how the location and the orientation of the mobile device are calculated to view virtual objects in the 3D scene. With the graphics library chosen in the evaluation process, it is possible to develop a game that places enemies at specific geographical locations and displays them on the screen. Thus, creating a single-player game for Android-based devices is possible. To enable the game participants to form parties and share their game experience, however, more functionality needs to be implemented.

Therefore, a client-server architecture is built up in which every client establishes persistent connections to one or more central servers. In this architecture, a client first connects to a CAS whose only purpose is to manage clients logging in to the current game session, to calculate and save any changes to account data, and to redirect clients to the responsible CSA server. Whenever players log in to a new CSA server, that server needs to request their account data from the CAS. If this procedure is successful, the CSA server accepts the new clients.

From this point on, most user actions are forwarded to the server, and the CSA server exclusively evaluates and validates game actions. If the validation fails, the server responds with a rejection message. After receiving a rejection message – depending on the criticality of the action – the client either undoes the requested action or, if the action required initial server permission in the first place, simply does not execute it at all. Valid actions, in contrast, are processed in three different ways. First, actions like attacking a creature are sent to the server; if the validation succeeds, the server broadcasts the action to all party members and to every other player who is in the same AOI as the player who invoked the action. These actions usually change the state of the virtual game scene, which is maintained by the server. Second, actions like sending a message to another player are only forwarded to the specific client. Third, actions that change a user account, like picking up items or equipping them, are usually not forwarded to other players at all. These changes are only sent to the CAS – and to the current CSA server, respectively – to make them persistent.
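The three-way routing of validated actions described above can be sketched as follows. All class, field, and method names are illustrative assumptions; the in-memory maps merely stand in for the real network connections and the CAS persistence layer:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

// Sketch of the server-side dispatch: scene actions are broadcast to the
// AOI, private messages are forwarded to a single client, and account
// changes are only persisted. Invalid actions produce a rejection message.
public class ActionRouter {
    enum Kind { SCENE, MESSAGE, ACCOUNT }

    static class Action {
        final Kind kind; final String sender; final String payload;
        Action(Kind k, String s, String p) { kind = k; sender = s; payload = p; }
    }

    final Map<String, List<String>> inbox = new HashMap<>(); // per-client queue
    final List<String> persisted = new ArrayList<>();        // stand-in for the CAS
    final Set<String> aoiMembers = new HashSet<>();          // players in the AOI

    void deliver(String client, String msg) {
        inbox.computeIfAbsent(client, c -> new ArrayList<>()).add(msg);
    }

    /** Validates and routes one action; returns false on rejection. */
    boolean route(Action a, Predicate<Action> isValid, String recipient) {
        if (!isValid.test(a)) {
            deliver(a.sender, "REJECT:" + a.payload);
            return false;
        }
        switch (a.kind) {
            case SCENE:   for (String m : aoiMembers) deliver(m, a.payload); break;
            case MESSAGE: deliver(recipient, a.payload); break;
            case ACCOUNT: persisted.add(a.payload); break;
        }
        return true;
    }
}
```

The `Predicate` parameter stands in for the server-side validation logic, which in the prototype depends on the current game state.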

Additionally, the CAS is informed every time the location of a player changes. Further, once a player leaves a CSA, he or she is redirected to the server that is responsible for the new CSA. As soon as a connection to the new CSA server is established, the connection to the old server is terminated.

In the prototype scenario, this client-server architecture is realized using web servers, and all information is sent over the networks of mobile communication providers. There is no direct communication between client devices. During tests, these connections worked reliably most of the time, maintaining an average delay of 350 milliseconds in urban environments, although occasional delay peaks of one or more seconds arose. These network stability problems are further addressed in chapter 4.4. The next chapter goes into detail about how the communication between clients and servers takes place.

4.3    WebSockets

In the prototype application, web servers running Apache Tomcat 6.0.16[63] are used as CSA and CAS servers. Because web servers are usually not designed to maintain persistent network connections over a long period of time, the JWebSocket[64] server plug-in has to be installed. This plug-in enables an HTTP server to establish persistent TCP connections between web servers and client devices using WebSockets[65].

As part of the upcoming HTML5[66] specification, the WebSocket specification defines protocols and interfaces for full-duplex single-socket connections over which messages can be sent between client and server [63]. This specification emerged to address problems and limitations of earlier client-push methods, which frequently establish and close HTTP connections for single data transmissions. With these client-push approaches, an HTTP request needs to be performed for every single information exchange, including handshakes and transmitting protocol headers. This might not make a huge difference when exchanging large files, but if the client only sends or receives small messages, the amount of data transmitted can be multiplied by the header information that accompanies every request. As a consequence, WebSockets eliminate most of the unnecessary traffic and latency that occurs with polling and long-polling solutions like AJAX[67] and COMET[68]. Prior to WebSockets, these techniques were used by websites to simulate full-duplex connections by maintaining two open connections concurrently. In contrast to these solutions, WebSockets can be used to create high-performance server-push applications. This means that the server can send information to the client without forcing the user to reload the whole webpage. This way, web servers can either simply forward data between clients or, in more advanced solutions, run custom application logic asynchronously and inform connected users of specific events.

Furthermore, WebSockets can automatically set up tunnels to pass through proxies and – similar to Hypertext Transfer Protocol Secure[69] – can use the Secure Sockets Layer (SSL)[70] to establish secure connections via HTTPS [64].

So far, WebSockets are supported at least in Apple Safari 5.5[71], Google Chrome 4+[72], Mozilla Firefox 6.0[73], and iOS 4.2. Although WebSockets are not supported in all versions of Internet Explorer, full functionality can still be added by using either the Google Chrome Frame[74] or an Adobe Flash based WebSocket bridge.

Initially, WebSockets were intended for developing real-time, event-driven web applications [65]. Any device or application can act as a client, though. In the prototype application scenario, web applications are configured to communicate directly with client applications, for example, to send mock data from the administration tool to clients for testing purposes.

Further, the JWebSocket implementation for Android allows creating native applications that use SSL and can exchange data encoded in JSON, XML, or binary format [66]. Another reason for using WebSockets is that they are a standardized – although not yet finalized – way of establishing persistent connections for data transfer in a client-server approach. What is more, gaming servers that use WebSockets can easily integrate web administration interfaces as well.

4.4    Consistency Management in Networked Real-Time Applications

In a general context, [67] describe tightly-coupled interaction in games and explain the term as “shared work in which each person’s actions immediately and continuously influence the actions of others”. They further outline three main scenarios in which tight coupling occurs: competition, external events, and expert collaboration. In this context, competition refers to real-time competitive activities like fighting games, in which players can gain advantages over their competitors through faster reactions. In situations driven by external events, “groups must coordinate their actions at the time scale of the external events”. Such an event can be, for instance, enemies arriving in the virtual scene; scenarios like these are the most common in the prototype application. Finally, expert collaboration describes a scenario in which people could slow down the chain of interactions. All of these scenarios are common in the real world but are difficult to handle in software applications.

During sessions in networked RTAs, users can have inconsistent views of the common virtual scene, caused by communication delays across the network. Thus, consistency maintenance algorithms (CMA) must be used to maintain a uniform view of the virtual scene among all participants [17]. These CMAs need to be implemented in most real-time applications that send information over networks and in which the order of the performed actions matters. Consequently, text-messaging RTAs like Skype[75] do not necessarily need to implement such an algorithm, in contrast to applications in which one user gets an advantage or a disadvantage when his actions are scheduled in the wrong order, such as two players fighting each other in a first-person shooter. In the prototype application, there are several situations in which the right order of computing game actions matters. For example, whenever an enemy is defeated, there is a chance that it drops a box containing an item. This item can be picked up by tapping on the box on the smartphone display. Thus, the player who picks first gets the item, while the others do not get anything.
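The "first pick wins" rule from the item-box example can be resolved on the server by serializing the pick-up requests. The following sketch (illustrative names, not the prototype's code) uses an atomic compare-and-set as a stand-in for the server's ordered handling of concurrent requests:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: server-side resolution of a dropped item box. Whichever pick-up
// request is processed first claims the item; all later requests fail.
public class LootBox {
    private final AtomicReference<String> owner = new AtomicReference<>(null);

    /** Returns true only for the first player whose request arrives. */
    public boolean tryPickUp(String playerId) {
        return owner.compareAndSet(null, playerId);
    }

    public String owner() {
        return owner.get();
    }
}
```

The losing client would then receive a rejection message rather than the item, which is exactly the kind of ordering conflict the CMAs below are designed to handle gracefully.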

Depending on the type of the RTA, different CMAs exist that serve different purposes. For example, Local Lag resynchronizes “the local visual feedback and remote visual feedthrough by delaying local feedback so that it is shown at the same time for both local and remote sites” [67 p. 448]. According to their findings, “Local Lag provides substantial protection from the negative effects of network latency in the game task – particularly at latencies up to 200ms”. Further, there did not seem to be a large difference in perceived application reaction time between delaying feedback through Local Lag and giving immediate feedback [67 p. 448].

In the prototype game, CMAs are responsible for guaranteeing the consistency of the game state without affecting the gameplay and the responsiveness of the application. Unfortunately, keeping the game state consistent while maximizing responsiveness will probably not work in the long run. [68] point out that there exists an important tradeoff between the responsiveness of an application and the appearance of short-term inconsistencies. They further argue that decreasing the responsiveness to a moderate level might counter these short-term inconsistencies.

While delaying the visual feedback might be an appropriate technique for devices connected via wired networks, it would possibly lead to unpredictable application behavior on mobile devices. In general, this is caused by the higher delay and the more extreme delay variations in wireless networks. The following section nevertheless lists CMA implementations that might be used in mobile games.

The local-lag approach delays every operation carried out by a user for a certain amount of time before it is executed. Here, the duration of the delay needs to be long enough to reduce the number of short-term inconsistencies. [68] propose to use the maximum average network delay among all synchronized devices. This is considered in the context of devices in wired networks, for which [68] assume a delay of 1ms in a LAN, 20-40ms within a continent, and 150ms world-wide. They add, however, that the actual numbers still depend on how much response time a user can tolerate.
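The fixed-delay local-lag scheme can be sketched as a simple queue that is drained once per frame. This is a minimal illustration with assumed names, not the prototype's implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of plain local-lag: every local operation is queued and only
// executed once the configured lag has elapsed, so local feedback and
// remote feedthrough appear at roughly the same time on all sites.
public class LocalLag {
    static class Op {
        final long issuedAtMs; final Runnable action;
        Op(long t, Runnable a) { issuedAtMs = t; action = a; }
    }

    private final long lagMs;                       // e.g. max average network delay
    private final Queue<Op> pending = new ArrayDeque<>();

    public LocalLag(long lagMs) { this.lagMs = lagMs; }

    public void issue(long nowMs, Runnable action) {
        pending.add(new Op(nowMs, action));
    }

    /** Called every frame: executes all operations whose lag has elapsed. */
    public int update(long nowMs) {
        int executed = 0;
        while (!pending.isEmpty() && nowMs - pending.peek().issuedAtMs >= lagMs) {
            pending.poll().action.run();
            executed++;
        }
        return executed;
    }
}
```

With `lagMs` set to the maximum average network delay among the synchronized devices, remote sites would receive the operation just in time to execute it simultaneously.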

The local-lag approach aims at reducing inconsistencies without taking the game characteristics into account. In contrast, dead-reckoning combines state prediction and state transmission: information about the current state of objects is transmitted together with parameters that can be used to forecast future behavior when no user interaction occurs. The application components implementing this algorithm therefore need to include game characteristics in their computations. Examples are airplanes that fly at a constant speed in a specific direction or projectiles that follow the laws of physics [69]. By using dead-reckoning, applications can exchange some information at a lower frequency, as the state between updates can be estimated. This algorithm can only be used with objects that have predictable behavior, though.
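A minimal constant-velocity form of dead-reckoning can be sketched as follows; the class is an illustrative assumption, reduced to two dimensions:

```java
// Sketch of dead-reckoning extrapolation: between state updates, a remote
// object's position is predicted from its last known position and velocity
// instead of being retransmitted every frame.
public class DeadReckoning {
    private double x, y, vx, vy;   // last transmitted state
    private long stateTimeMs;      // when that state was received

    /** Stores a freshly received state update. */
    public void applyUpdate(long tMs, double x, double y, double vx, double vy) {
        this.stateTimeMs = tMs;
        this.x = x; this.y = y;
        this.vx = vx; this.vy = vy;
    }

    /** Predicted position at time tMs, assuming constant velocity. */
    public double[] predict(long tMs) {
        double dt = (tMs - stateTimeMs) / 1000.0;   // seconds since last update
        return new double[] { x + vx * dt, y + vy * dt };
    }
}
```

An arrow flying at a known speed towards a known target, as used later in the prototype, fits this model: one update at launch suffices, and every client extrapolates the position locally.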

In [17 p. 1], Khan et al. use a “dynamic and adaptable approach for local-lag and dead-reckoning in which the parameters are changed according to the changing and unpredictable network and game environment”. To ensure consistent game states throughout a game session, [17] use a combined and dynamic approach that selects appropriate algorithms depending on the current network state, the type of data to be sent, the state of the affected object, and the state of the current virtual world.

Trailing State Synchronization (TSS) and Time Warp Synchronization (TWS) are two further approaches to handle the correct, timely processing of the game actions of other players. While the local-lag approach intends to eliminate inconsistencies and dead-reckoning is used to regulate data transmission intervals, these two approaches handle situations in which inconsistencies have already occurred. As local-lag cannot completely prevent short-term inconsistencies [68], it seems highly recommended to use it in conjunction with one of these approaches. In this context, a short-term inconsistency means that the states of two or more sites – usually game clients – in a distributed application differ from each other because an operation that happened at one site arrives at the other sites after the time at which it should have been taken into account by the application logic there. For example, when a player attacks a creature in the game prototype, an action message indicating the attack is sent from the client device to the server. The central application logic checks whether the action is valid and, if so, starts the appropriate application logic for this scenario: the creature is programmed to run to the attacking player and to strike back. When this application logic starts, the server broadcasts to all players in the same area that the creature is entering the aforementioned attack mode. Due to messaging delay, these broadcasts arrive at the clients when the application state on the central server has already changed, as the server keeps computing the game state after broadcasting. The current application state now differs between the server and all clients. This difference in application state is called a short-term inconsistency and needs to be resolved.

In TWS, each site saves the state of the virtual scene at certain times. Furthermore, all game actions invoked afterwards are logged and sorted by the time they were executed. When an inconsistency occurs, the state of the virtual scene is set back to the saved state and the inconsistent action is inserted into the log at the right position. Then all actions in the log are re-executed in their sort order immediately, while only the end result is shown to the user. Drawbacks of TWS are that the approach needs complex application logic, can consume a large amount of memory, and is computationally expensive when an inconsistency occurs [69].
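The snapshot-and-replay mechanism of TWS can be sketched with a toy state (a single integer) and order-sensitive operations; all names are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Sketch of Time Warp rollback: a snapshot of the state is kept, a late
// action is inserted into the timestamp-ordered log, and the whole log is
// replayed from the snapshot. Only the end result is shown to the user.
public class TimeWarp {
    static class GameAction implements Comparable<GameAction> {
        final long tMs; final IntUnaryOperator op;
        GameAction(long t, IntUnaryOperator op) { this.tMs = t; this.op = op; }
        public int compareTo(GameAction o) { return Long.compare(tMs, o.tMs); }
    }

    private final int savedState;            // snapshot of the virtual scene
    private int currentState;
    private final List<GameAction> log = new ArrayList<>();

    public TimeWarp(int initialState) { savedState = currentState = initialState; }

    /** Normal, in-order execution of a game action. */
    public void execute(long tMs, IntUnaryOperator op) {
        log.add(new GameAction(tMs, op));
        currentState = op.applyAsInt(currentState);
    }

    /** A late action arrives: insert it at the right time and replay the log. */
    public int insertLate(long tMs, IntUnaryOperator op) {
        log.add(new GameAction(tMs, op));
        Collections.sort(log);
        currentState = savedState;
        for (GameAction a : log) currentState = a.op.applyAsInt(currentState);
        return currentState;
    }

    public int state() { return currentState; }
}
```

Because the operations are not commutative (an addition followed by a doubling differs from the reverse), simply appending the late action would produce the wrong state; the rollback and replay restore the correct order.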

TSS [70] implements rollback mechanisms like TWS by maintaining copies of older states of the virtual scene, called “trailing states”, which each have a different delay called the execution time, as well as the currently rendered state, called the “leading state”. All states have a pending list that contains future game actions ordered by timestamp. New game actions are put on each of these lists and are executed on each state according to its execution time. In this manner, trailing states are used to detect and correct inconsistencies: when an inconsistency is detected, a rollback from the leading state to the correct trailing state is performed [70]. Figure 29 demonstrates this approach.

Figure 29 - TSS Terminology

Although TSS uses rollback mechanisms like TWS, it can have better performance in certain situations [70; 17]. Furthermore, [70] found it to be especially useful in first-person shooter games like Quake[76].

Then again, both TWS and TSS are computationally expensive approaches that cannot be guaranteed to work efficiently on mobile devices with limited processing power and memory. Modern smartphones, however, will probably not suffer from processing limitations in this respect.

With this in mind, the CMA implemented in the prototype application is based on the local-lag approach. Instead of waiting a fixed amount of time, the client application waits for a server confirmation of invoked game actions that are marked as time-sensitive and time-critical. As a result, actions that would be broadcast but are not consistent with the current game state, like attacking a creature that is already dead, are discarded at the server, and rejection messages are sent back to the originating clients. This approach follows the idea of [68 p. 6], who explain: “It would not be desirable to move from one extreme, where the response time is zero but short time inconsistencies are frequent, to the opposite extreme, where almost no short-term inconsistencies occur but the response time is unacceptably high”. Some actions, like changing the current equipment, are invoked immediately on the client device without waiting for a server response.
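The confirmation-based variant described above can be sketched as follows. Names are illustrative, and the `Predicate` stands in for the round trip to the CSA server:

```java
import java.util.function.Predicate;

// Sketch of the prototype's local-lag variant: time-critical actions wait
// for the server verdict before taking effect locally, while non-critical
// actions are applied immediately on the client.
public class ConfirmedActions {
    public enum Result { APPLIED_LOCALLY, CONFIRMED, REJECTED }

    public static Result submit(String action, boolean timeCritical,
                                Predicate<String> serverValidates,
                                Runnable applyLocally) {
        if (!timeCritical) {                 // e.g. changing the equipment
            applyLocally.run();
            return Result.APPLIED_LOCALLY;
        }
        if (serverValidates.test(action)) {  // e.g. attacking a creature
            applyLocally.run();
            return Result.CONFIRMED;
        }
        return Result.REJECTED;              // e.g. creature already dead
    }
}
```

In effect, the "lag" of a time-critical action equals the current round-trip time to the server rather than a fixed constant, which is what makes the approach usable on networks with fluctuating delay.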

On the one hand, with this approach the response time always corresponds to the network delay of the individual device, and the network traffic is increased by the additional server responses. On the other hand, users with higher delays are at a disadvantage as their actions are executed more slowly, although this is not an important drawback, as the game design is based on players fighting together against computer-controlled enemies (PvM) rather than against each other (PvP).

It is therefore possible to implement algorithms that favor players in situations where it is uncertain whether the network delay would put users at a disadvantage. In some situations, actions that would have negative effects on the users are delayed by the largest delay of any user in a specific region, so that even players with a high delay can still react in time. Furthermore, some information, like the position of flying arrows, is not permanently calculated on the server and sent to the clients. Instead, it is sent to the devices once at the beginning, together with additional information like speed and target, and then calculated locally, similar to the dead-reckoning approach.

4.5    Discussion

When testing the prototype application, it seemed that neither processing power nor internet connectivity clearly limit AR applications, at least on higher-end devices like the SGS. In contrast, precise location determination of the phone using GPS alone caused more problems than initially expected and did not interplay well with the game design, which takes the position of players into account when determining the visibility and interactivity of game objects. The following sections go into detail on the findings and impressions gained while building and testing the prototype application.

In the test scenario, more than ten animated models can be displayed and interacted with without the frame rate dropping below 30 FPS. Android application performance is affected by the phone hardware as well as by the Android OS implementation. Overall, three different Android OS versions were installed on the phone (Android 2.1-update 1, Android 2.2, Android 2.3.3) in the course of this project, and after each update the application seemed to run a little more smoothly. Although no formal comparisons of application performance between OS versions were conducted in the course of this thesis, some improvements in application performance were noticeable. For example, when blending the Android keyboard over the running game prototype by focusing an input field, inserting characters was delayed for a short time in Android 2.1 and Android 2.2, which changed with Android 2.3.3. This might be caused either by improvements in the Android OS itself or by a better implementation by Samsung for the SGS. Either way, the tests showed that the prototype application performs well on the SGS, and it is quite certain that a large number of the currently available Android-based smartphones could run this prototype as well.

When it comes to creating networked applications, connectivity and low network delays play an important role for the user. This is especially the case in competitive multiplayer games, in which someone can gain an advantage when a fast reaction is needed to succeed. [71] mention that delays exceeding 250ms are not tolerated by users, while users can still get used to delays of 150 to 200ms.

To measure the approximate network delay of the prototype application, a persistent WebSocket connection was established over which a request was sent to the server every 500ms, and the measured delays were stored on the mobile device for further investigation. To establish the connection and send data, the mobile phone network of Hutchison 3G Austria GmbH[77] was used for all tests. The stored data indicated that the network delay of a running application mostly ranges from 165ms to 250ms in urban areas with good connectivity, with occasional disruptions raising the delay to 900ms and above. On average, these results show that delays in the prototype application are close to what [71] refer to as not tolerable by users, at least if they occur over an extended period of time.
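The stored ping samples can be summarized along the lines of the following sketch (illustrative names): the average delay plus the share of samples above the 250ms tolerance threshold cited from [71]:

```java
import java.util.Arrays;

// Sketch: summarizing the delay samples collected every 500ms during the
// field tests into an average and a tolerance-violation ratio.
public class DelayStats {
    public static double average(long[] delaysMs) {
        return Arrays.stream(delaysMs).average().orElse(0);
    }

    /** Fraction of samples exceeding the given tolerance, e.g. 250 ms. */
    public static double shareAbove(long[] delaysMs, long toleranceMs) {
        if (delaysMs.length == 0) return 0;
        long over = Arrays.stream(delaysMs).filter(d -> d > toleranceMs).count();
        return (double) over / delaysMs.length;
    }
}
```

Statistics like these make the distinction drawn above explicit: an acceptable average delay can still hide occasional disruption peaks that degrade the experience.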

However, in the context of this thesis, there are two reasons why the network capabilities are not considered to limit the application's usability. Firstly, people probably know that phones have certain limitations and might have lower expectations in advance. Secondly, the application is a cooperative game in which human players join to fight AIs. In this case, unlike in games that support PvP modes, it is probably acceptable to give the illusion of a fast data connection when the user triggers critical actions in the game, even though this is not the case. Consider, for example, a party in which one of the members is about to die while the healer, who could save him by casting his spells in time, has a high delay and cannot react fast enough. Here, the death of the party member is delayed by as much as the delay of the slowest party member, giving all players a slight advantage. This way, users can play the game as though there were no delay. It does not remove the delay of the visual feedback, though.

This mechanism cannot be implemented for all game actions, but at least for the most critical ones that can have a great impact on the game and that would put players with large delays at a disadvantage. Thus, games based on collaborative gameplay possibly allow for better delay handling, as long as ambiguous situations are resolved in the players' favor. In the end, users are probably more forgiving when they do not suffer a disadvantage.

In contrast to application performance and network delays, the inaccuracy of the GPS poses a particular problem. Initially, the locations of the own device and of all party members were supposed to be used both for defining the areas the users are currently in and for displaying an icon above other players on the screen, highlighting them and enabling a player to select them by tapping on that icon. Due to GPS inaccuracies of up to several meters, for both the own location and the locations of other devices, no appropriate solution could be implemented. Thus, an expandable menu that lists all party members was implemented instead of selecting party members directly by tapping on them on the screen. However, to interact effectively with party members, all of them should be displayed on screen. This works with larger screens like the four-inch display of the SGS, but it might not work on smaller devices. With this in mind, game developers should pay particular attention to small screens when developing AR games that are based on interaction with other game participants.

As opposed to the selection process, which is more of a usability issue, determining which area a user is currently in has a fundamental impact on the gameplay. In the client application, areas are used to determine which objects are displayed and can be interacted with. As a consequence, the user experience could suffer dramatically if the GPS location fluctuates when a player is near the edge of an area, possibly resulting in nearby enemies being hidden and shown at an unpredictable rate. In rare situations, this might even appear to the user as some sort of graphics flickering. This drawback can be considered fatal and would possibly force users to quit the game. It can be avoided either by improving the GPS accuracy, as discussed in chapter 4.1, or by adjusting the game design. Furthermore, it might be possible to design the virtual world in such a way that situations in which a user interacts with game objects near the edge of an area hardly ever occur.

5      Conclusion

The game prototype AR Legends demonstrates that current above-average Android smartphones are capable of running MMMGs and using AR technology. To demonstrate this, MMOG-specific features such as central account management and load balancing were implemented rudimentarily in the prototype on both the client and the server side. Although the game does not provide the same level of detail in graphics and game depth as current state-of-the-art games, it can clearly be said that MMMGs using AR can run on current smartphones.

Although Massively Multiplayer LBAR games are probably not going to be as popular and widespread as MMOGs are now on desktop machines, they can at least be considered an interesting alternative for people who enjoy Geocaching or similar games and want to play them in a more advanced way. This does not mean that LBAR games are limited to a single game genre, though.

On the one hand, a large number of drawbacks was encountered with smartphones while developing and testing the prototype application. Some of them, like the inaccuracy of the GPS and delay fluctuations in mobile networks, can possibly be fixed, while other problems, like the small screen sizes of smartphones, probably cannot. On the other hand, using AR on mobile devices could still form the basis of new and interesting game concepts. With an increasing number of Java game development frameworks that let developers write applications for desktop and Android devices with nearly the same code, it may be possible to create a desktop version of a game plus an additional Android application that uses features like AR to deliver a completely new game experience without creating an entirely new game.


1. Azuma, Ronald T. A Survey of Augmented Reality. In Presence: Teleoperators and Virtual Environments. 1997.

2. van Krevelen, D.W.F. and Poelman, R. A Survey of Augmented Reality Technologies, Applications and Limitations. The International Journal of Virtual Reality. 9, 2010, Vol. 2.

3. Harm, Robert. Webtermine. Layar.com. [Online] 2011. [Cited: July 19, 2011.] http://www.layar.com/layers/webtermineat.

4. Find your Facebook Friends with Guidepost Mobile. Junaio Blog. [Online] March 21, 2011. [Cited: December 19, 2011.] http://junaio.wordpress.com/2011/03/21/find-your-facebook-friends-with-guidepost-mobile/.

5. artoolkit. [Online] tarienna GmbH, 2011. [Cited: July 8, 2011.] http://www.artoolkit.eu/produkte/nyartoolkit.html.

6. qualcomm. qualcomm. [Online] October 4, 2010. [Cited: July 8, 2011.] http://www.qualcomm.com/news/releases/2010/10/04/qualcomm-announces-availability-augmented-reality-sdk.

7. Another Sony Smart AR Demo. YouTube. [Online] May 22, 2011. [Cited: July 19, 2011.] http://www.youtube.com/watch?v=P0dNYarLeFA.

8. History of Mobile Augmented Reality. History of Mobile Augmented Reality. [Online] [Cited: August 2, 2011.] https://www.icg.tugraz.at/~daniel/HistoryOfMobileAR/.

9. Nintendo 3DS preinstalled Software. Nintendo 3DS preinstalled Software. [Online] [Cited: August 2, 2011.] http://www.nintendo.de/NOE/de_DE/ar_games_erweiterte_realitaet_32271.html.

10. PS3: The Eye of Judgment. [Online] May 18, 2011. [Cited: August 3, 2011.] http://www.apes-land.de/blog/?tag=the-eye-of-judgment.

11. Bald Invizimals Nachfolger? PSPFreak.de. [Online] August 11, 2010. [Cited: August 2, 2011.] http://www.pspfreak.de/2010/08/11/bald-invizimals-nachfolger/.

12. Invizimals Commercial. Invizimals Commercial. [Online] [Cited: August 2, 2011.] http://www.youtube.com/watch?v=TvDWleKmhYs.

13. Ferretti, Stefano, et al. FILA in Gameland, A Holistic Approach to a Problem of Many Dimensions. [Paper] University of California, Los Angeles; Università di Bologna : s.n., 2006.

14. Total Active Subscriptions. mmodata.net/. [Online] October 2011. [Cited: December 19, 2011.] http://mmodata.net/.

15. Screendigest. Screendigest. [Online] [Cited: July 19, 2011.] http://www.screendigest.com/reports/2010822a/10_09_subscription_mmogs_mixed_fortunes_in_high_risk_game/view.html.

16. Activision. Activision. [Online] [Cited: July 19, 2011.] http://investor.activision.com/releasedetail.cfm?ReleaseID=575495.

17. Khan, Abdul Malik, Chabridon, Sophie and Beugnard, Antoine. A dynamic approach to consistency management for mobile multiplayer games. [Paper] Lyon, France : ACM, 2008. 978-1-59593-937-1/08/0003.

18. Information Solution Group. 2011 PopCap Games Mobile Phone Gaming Research. [PowerPoint] s.l. : Information Solution Group, 2011.

19. Farago, Peter. Flurry Analytics. [Online] November 9, 2011. [Cited: November 27, 2011.] http://blog.flurry.com/bid/77424/Is-it-Game-Over-for-Nintendo-DS-and-Sony-PSP.

20. Parallel Kingdom Benutzerzahlen. Parallel Kingdom. [Online] [Cited: July 18, 2011.] http://www.parallelkingdom.com/PR_061511_500KPlayers.aspx.

21. GameLoft Sales Figures. GameLoft Sales Figures. [Online] [Cited: July 19, 2011.] http://www.gameloft.de/download-spiele/news/1633-order-and-chaos-online-generiert-20-tage-nach-veroffentlichung-im-app-store-1-million-us-dollar-umsatz/.

22. unrealengine features. unrealengine.com. [Online] Epic Games, Inc., 2011. [Cited: December 15, 2011.] http://www.unrealengine.com/features.

23. Unigine Corp. Licensing. Unigine Corp. [Online] Unigine Corp., 2011. [Cited: December 15, 2011.] http://unigine.com/products/unigine/licensing/.

24. UNIGINE Engine. UNIGINE. [Online] Unigine Corp., 2011. [Cited: December 15, 2011.] http://unigine.com/products/unigine/.

25. Unity Technologies. Unity3D Shop. Unity3D. [Online] Unity Technologies, 2011. [Cited: December 15, 2011.] https://store.unity3d.com/shop/.

26. Nilsson, Tobias. Create games for Xperia™ PLAY with the Unity engine. developer.sonyericsson.com. [Online] SonyEricsson, December 13, 2011. [Cited: December 16, 2011.] http://developer.sonyericsson.com/wp/2011/12/13/create-games-for-xperia-play-with-unity-engine/?utm_source=rss&utm_medium=rss&utm_campaign=create-games-for-xperia-play-with-unity-engine.

27. SonyEricsson. SonyEricsson Xperia Play. http://developer.sonyericsson.com. [Online] Sony Ericsson Mobile Communications, 2011. [Cited: December 16, 2011.] http://developer.sonyericsson.com/wportal/devworld/technology/android/xperiaplay/overview?cc=gb&lc=en.

28. Gartner Disruptive Technologies. [Online] [Cited: July 24, 2011.] http://www.gartner.com/it/page.jsp?id=681107.

29. Gartner Android OS distribution. Gartner Android OS distribution. [Online] [Cited: July 22, 2011.] http://www.gartner.com/it/page.jsp?id=1434613.

30. Canalys. Android takes almost 50% share of worldwide smart phone market. canalys.com. [Online] August 1, 2011. [Cited: November 28, 2011.] http://www.canalys.com/newsroom/android-takes-almost-50-share-worldwide-smart-phone-market.

31. Petty, Christy. Gartner. [Online] September 22, 2011. [Cited: December 13, 2011.] http://www.gartner.com/it/page.jsp?id=1800514.

32. Chaffin, Bryan. Gartner Projects Apple to Rule Tablet Market Through 2015. The Mac Observer, Inc. [Online] September 22, 2011. [Cited: December 13, 2011.] http://www.macobserver.com/tmo/article/gartner_projects_apple_to_rule_tablet_market_through_2015/.

33. Good Technology. Good Technology. [Online] [Cited: July 28, 2011.] http://www.good.com/news/press-releases/101007.php.

34. Press Releases - Flood of Apple iOS and Google Android Devices Revolutionizing Enterprise Mobility. GOOD PRESS FEED. [Online] October 7, 2010. [Cited: July 28, 2011.] http://www.good.com/news/press-releases/101007.php.

35. TIOBE Programming Community Index for August 2011. TIOBE Software. [Online] August 2011. [Cited: August 17, 2011.] http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html.

36. Gartner Gaming. Gartner Gaming. [Online] [Cited: July 24, 2011.] http://www.gartner.com/it/page.jsp?id=1737414.

37. Onlinemarketing-Trends smartphone growth. Onlinemarketing-Trends smartphone growth. [Online] [Cited: July 24, 2011.] http://www.onlinemarketing-trends.com/2011/06/smartphones-to-grow-50-in-2011.html.

38. Gartner Internet access PC / Mobile. Gartner Internet access PC / Mobile. [Online] [Cited: July 24, 2011.] http://www.gartner.com/it/page.jsp?id=1278413.

39. Epstein, Zach. Android Market surpasses 500,000 published apps. BGR Media. [Online] BGR Media, October 21, 2011. [Cited: December 16, 2011.] http://www.bgr.com/2011/10/21/android-market-surpasses-500000-published-apps/.

40. Qualcomm Incorporated. Qualcomm: Augmented Reality. Qualcomm Developer. [Online] Qualcomm Incorporated, 2011. [Cited: December 16, 2011.] https://developer.qualcomm.com/develop/mobile-technologies/augmented-reality.

41. daily2news. daily2news. [Online] [Cited: August 3, 2011.] http://www.daily2news.info/850/why-was-the-samsung-galaxy-s-sold-in-over-5-million-units/.

42. ARM. [Online] November 28, 2011. [Cited: November 29, 2011.] http://www.arm.com/about/newsroom/arm-launches-free-toolkit-for-android-application-developer-community.php.

43. Android Developers. [Online] 2011. [Cited: November 29, 2011.] http://developer.android.com/sdk/ndk/index.html.

44. YouTube JPCT-AE Parallax Mapping Demo. YouTube. [Online] [Cited: August 8, 2011.] http://www.youtube.com/watch?v=5Zgn1OCDIus&feature=player_embedded.

45. JPCT Text Drawing. JPCT Text Drawing. [Online] [Cited: July 25, 2011.] http://www.jpct.net/forum2/index.php/topic,1074.0.html.

46. JPCT-AE AR. JPCT-AE AR. [Online] [Cited: July 25, 2011.] http://www.jpct.net/forum2/index.php/topic,1586.0.html.

47. JME Features. JME Features. [Online] [Cited: July 25, 2011.] http://jmonkeyengine.com/engine/.

48. libgdx Features. libgdx Features. [Online] [Cited: July 7, 2011.] http://libgdx.badlogicgames.com/features.php.

49. rsart. rsart. [Online] [Cited: August 4, 2011.] http://www.rsart.co.uk/2007/08/27/yes-but-how-many-polygons/.

50. badlogicgames.com. [Online] Badlogic Games, February 3, 2010. [Cited: August 11, 2011.] http://www.badlogicgames.com/wordpress/?p=58.

51. Android Developer Reference - SensorManager. Android Developer Reference. [Online] [Cited: August 9, 2011.] http://developer.android.com/reference/android/hardware/SensorManager.html#getRotationMatrix(float[], float[], float[], float[]).

52. SensorManager Class Overview. Android Developers. [Online] December 16, 2011. [Cited: December 19, 2011.] http://developer.android.com/reference/android/hardware/SensorManager.html#getRotationMatrix(float%5B%5D,%20float%5B%5D,%20float%5B%5D,%20float%5B%5D).

53. Coordinate system - jPCT's coordinate system. jpct.net. [Online] June 19, 2009. [Cited: December 12, 2011.] http://www.jpct.net/wiki/index.php/Coordinate_system.

54. differencebetween - gyroscope and accelerometer. differencebetween. [Online] [Cited: August 9, 2011.] http://www.differencebetween.net/technology/difference-between-gyroscope-and-accelerometer/.

55. diydrones blog - difference between Accelerometer and Gyroscope. diydrones blog. [Online] [Cited: August 9, 2011.] http://diydrones.com/profiles/blogs/faq-whats-the-difference.

56. YouTube - Motion sensing. YouTube. [Online] [Cited: August 9, 2011.] http://www.youtube.com/watch?v=s19W-MG-whE&feature=player_embedded.

57. searchengineland - cell phone triangulation. searchengineland. [Online] [Cited: August 9, 2011.] http://searchengineland.com/cell-phone-triangulation-accuracy-is-all-over-the-map-14790.

58. Smith, Chris Silver. Cell Phone Triangulation Accuracy Is All Over The Map. SearchEngineLand. [Online] September 22, 2008. [Cited: December 9, 2011.] http://searchengineland.com/cell-phone-triangulation-accuracy-is-all-over-the-map-14790.

59. maps gps info - GPS accuracy. maps gps info. [Online] [Cited: August 10, 2011.] http://www.maps-gps-info.com/gps-accuracy.html.

60. kowoma - gps accuracy. kowoma. [Online] [Cited: August 10, 2011.] http://www.kowoma.de/en/gps/accuracy.htm.

61. virtualworldlets - Augmented Reality. virtualworldlets. [Online] [Cited: August 10, 2011.] http://www.virtualworldlets.net/Resources/Hosted/Resource.php?Name=ARWillWork.

62. insidegnss - Mobile RTK. insidegnss. [Online] [Cited: August 10, 2011.] http://www.insidegnss.com/node/866.

63. websocket.org. [Online] Kaazing Corporation, 2011. [Cited: August 10, 2011.] http://websocket.org/index.html.

64. Kaazing Corporation. websocket.org - All about websockets. [Online] 2011. [Cited: August 11, 2011.] http://websocket.org/aboutwebsocket.html.

65. —. websocket.org - scalability for the web. [Online] 2011. [Cited: August 11, 2011.] http://websocket.org/quantum.html.

66. Schulze, Alexander. jwebsocket.org - Android. [Online] [Cited: August 11, 2011.] http://jwebsocket.org/mobile/android/android_part1.htm.

67. The Effects of Local Lag on Tightly-Coupled Interaction in Distributed Groupware. Stuckel, Dane and Gutwin, Carl. San Diego, California, USA : CSCW, 2008. 978-1-60558-007-4/08/11.

68. Mauve, Martin, et al. Local-lag and Timewarp: Providing Consistency for Replicated Continuous Applications. s.l. : IEEE Transactions on Multimedia, 2004.

69. Mauve, Martin. Consistency in Replicated Continuous Interactive Media. New York, NY, USA : ACM, 2000. 1-58113-222-0.

70. Cronin, Eric, et al. An Efficient Synchronization Mechanism for Mirrored Game Architectures. MULTIMEDIA TOOLS AND APPLICATIONS. 1, 2002, Vol. 23, 7-30.

71. Pantel, Lothar and Wolf, Lars C. On the Impact of Delay on Real-Time Multiplayer Games. New York : ACM NOSSDAV, 2002. 1-58113-512-2.

72. SonyEricsson Product Page. SonyEricsson Product Page. [Online] [Cited: July 20, 2011.] http://www.sonyericsson.com/cws/products/mobilephones/overview/xperia-play?cc=de&lc=de.

73. Beigbeder, Tom, et al. The Effects of Loss and Latency on User Performance in Unreal Tournament 2003. Worcester : s.n., 2003. 1-58113-942-X/04/0008.

74. Order and Chaos Online. GameLoft.de. [Online] [Cited: July 19, 2011.] http://www.gameloft.de/ipad-spiele/order-and-chaos/?adid=27351.


[1] http://www.autodesk.de/adsk/servlet/pc/index?id=14642267&siteID=403786

[2] http://www.blender.org/

[3] https://market.android.com/details?id=com.rrrstudio.billiards

[4] http://www.layar.com/

[5] http://www.junaio.com/

[6] http://www.mixare.org/

[7] http://www.junaio.com/develop/docs/documenation/general/glue/

[8] http://www.hitl.washington.edu/artoolkit/

[9] http://www.artoolkit.eu/produkte/nyartoolkit.html

[10] http://java.com/de/about/

[11] http://msdn.microsoft.com/de-at/library/aa287558(v=vs.71).aspx

[12] http://www.adobe.com/de/devnet/actionscript/

[13] http://goo.gl/Xj4bu

[14] http://developer.qualcomm.com/dev/augmented-reality

[15] http://www.giga.de/news/00152780-smartar-sony-praesentiert-neue-augmented-reality-technik/

[16] http://at.playstation.com/psp/

[17] http://www.eyeofjudgment.com/

[18] http://www.scei.co.jp/index_e.html

[19] http://www.wizards.com/

[20] http://www.eyepet.com/home.cfm?lang=de_AT

[21] http://en.wikipedia.org/wiki/Tamagotchi

[22] http://www.pokemon.com/at/

[23] http://atlantica.nexon.net/

[24] http://www.lotro.com/

[25] http://www.parallelkingdom.com/

[26] http://orderchaosonline.com/, http://www.gameloft.de/ipad-spiele/order-and-chaos/?adid=27351

[27] http://www.emrosswar.com/games/emross-war/index.html

[28] http://disney.go.com/disneymobile/mdisney/pirates/screenshots.html

[29] http://www.tibia.com/

[30] http://www.unrealengine.com/platforms

[31] http://www.unrealtournament.com/de/index.html

[32] http://www.masseffect.com/agegate/?url=%2F

[33] http://unigine.com/

[34] http://www.nvidia.com/object/tegra-2.html

[35] http://www.nvidia.de/object/nvidia-samsung-galaxyr-superphone-press-20110811-de.html

[36] http://unity3d.com/unity/publishing/

[37] http://www.allegorithmic.com/

[38] http://blogs.sonyericsson.com/

[39] http://www.sonyericsson.com/cws/products/mobilephones/overview/xperia-play?cc=de&lc=de


[41] http://www.sony.de/product/sony-tablet-p/tab/overview

[42] http://www.samsung.com/at/microsite/galaxynote/note/index.html?type=find

[43] http://www.khronos.org/collada/

[44] http://www.jpct.net/jpct-ae/

[45] http://www.martinreddy.net/gfx/3d/3DS.spec

[46] http://www.martinreddy.net/gfx/3d/OBJ.spec

[47] http://tfc.duke.free.fr/coding/md2-specs-en.html

[48] http://aptalkarga.com/bones/

[49] http://www.jpct.net/others/skeletal.zip

[50] http://jmonkeyengine.org/, http://jmonkeyengine.com/

[51] http://jbullet.advel.cz/

[52] http://www.ogre3d.org/docs/api/html/classOgre_1_1Mesh.html

[53] http://ardor3d.com/

[54] http://libgdx.badlogicgames.com/


[56] http://www.bildburg.de/texturen/hautundhaare/tierhaut/drachenhauttextur.html

[57] http://glest.org/en/index.php

[58] https://developer.qualcomm.com/discover/chipsets-and-modems/adreno

[59] http://www.chipestimate.com/ip.php?id=20773

[60] http://www.json.org/

[61] http://maps.google.com/

[62] http://code.google.com/intl/de-DE/apis/maps/documentation/elevation/

[63] http://tomcat.apache.org/

[64] http://jwebsocket.org/

[65] http://websocket.org/

[66] http://www.whatwg.org/specs/web-apps/current-work/multipage/

[67] http://adaptivepath.com/ideas/ajax-new-approach-web-applications

[68] http://ajaxian.com/archives/comet-a-new-approach-to-ajax-applications

[69] http://webdesign.about.com/od/ecommerce/a/aa070407.htm

[70] http://www.elektronik-kompendium.de/sites/net/0902281.htm

[71] http://www.apple.com/at/safari/features.html

[72] http://blog.chromium.org/2009/12/web-sockets-now-available-in-google.html

[73] http://www.mozilla.com/de/firefox/features/

[74] http://code.google.com/intl/de-DE/chrome/chromeframe/

[75] http://www.skype.com/intl/de/welcomeback/

[76] http://www.idsoftware.com/games/quake/quake

[77] http://www.drei.at/portal/de/privat/Privat_Home_1.html