Vi sitter här i GIS och spelar lite DotA2

With increasing frequency archaeologists are turning to inter-disciplinary methods, techniques and platforms to shed new light on old problems, and develop novel methodologies to answer new problems. Video-games are one such area. In the past decade there has been an increased drive towards utilising game engines, physics and imaging techniques for visualisation, reconstruction and analysis within archaeology. These implementations have met with varying success, however one aspect that has been largely neglected is reciprocity – in other words, what does, or could gaming gain from implementing archaeological methods, techniques or theories? This short blog post will give one reciprocal application of archaeological methods to a video-gaming problem – superficially demonstrating that there are avenues for interaction which are mutually beneficial between the disciplines. The given application is just a bit of fun (albeit an extremely useful one for my DotA2 plays) but hopefully will open the way to larger, more sustainable reciprocative projects that have considerably more depth!

Some Background:

Part of my MSc course is a module in Geographic Information Systems. During this module we were continuously asked to think outside the box, to challenge how data is implemented, how it can be manipulated and how it can be used – all very interesting and integral things in archaeological applications. During this time I was also transitioning from being a casual DotA2 player to starting to take games much more seriously – engaging with the meta and game theory to up my personal and team play. A video regarding DotA can be found below; incidentally, the title of this blog post comes from a modified section of the lyrics (which translate to something along the lines of 'we're sitting here on Ventrilo playing a little DotA', changed to suit the purposes of this blog post: 'we're sitting here on GIS doing a little DotA2').

For those unfamiliar with DotA2 (Defence of the Ancients 2), it is a Multiplayer Online Battle Arena (MOBA) style game developed by Valve as a sequel to the original DotA mod. In the game each player selects a distinct hero with a discrete set of skills, which are used to collect gold, level up and fight opposing heroes. The arena is divided between the Radiant and Dire sides, with 5 heroes per side. DotA2 is regarded as having one of the steepest learning curves of any competitive video game due to the vast combinations of skills and items, as well as the complex visualisation, XP mechanics and standardised meta. DotA2 has become Valve's most popular game, with spikes over 600,000 concurrent players a regular occurrence and in excess of 7 million distinct users every month – all despite the mountain of effort it takes to go from complete n00b to competent enough that teammates don't cry "plz report this timbersaw idiot omg rr" every two seconds.

Screenshot of DotA2, including the mini-map (left) from which the work was digitized, and a selection of in-game messages which demonstrate the multilingual and computer-literate nature of the player-base.

I never played DotA, or HoN, or any other such game. Indeed, I was a pretty adamant member of the glorious First Person Shooter (FPS) master race until getting introduced to DotA2 at the i49 LAN in 2013 – where I was sat next to a highly ranked DotA2 player and got dragged into some team matches as a "space filler". Despite not having a clue what I was doing, the experience was fun enough to get me hooked on the game, and ever since I have been slogging my way through the steep learning curve.

Fast-forward to last week. I was sat watching a replay of one of my games. In this particular game I was playing support and was responsible for warding – a process in which you place sticks in the ground which either grant vision through the fog of war, or reveal hidden units within a given area. (By default you can only see in a certain arc around your hero; outside of that everything is greyed out unless you have wards. This vision changes at night, varies per hero, and is blocked by cliffs and forests.) As I analysed the replay I found myself pausing and hovering over the wards to see what my ward positioning was revealing, and how that either assisted or degraded my laning experience (the laning stage being the early phase of the game where you play in a set lane).

Image showing wards (one circled in red) with a ward-vision circle around it (green)… note that the trees here do not show occlusion.

As I hovered over I noticed that the wards displayed the full circle of vision – which is not representative of the per-ward visible area due to topography and assets such as trees disrupting views. Which got me thinking – I have used a tool within archaeology which calculates visible areas based on elevations. A tool which can be used to code areas of the map, run cost-paths to figure out shortest or least-effort routes and calculate a range of other variables – this tool is of course a GIS program (I used ArcGIS, but there is also the open-source QGIS etc).

The Process:

So I began my process by deviously extracting a map from the in-game HUD (heads-up display) and extracting unit measurements from the hammer files (map builder). Unfortunately the map units were all in hammer units, which don't really have a direct conversion to real-world measurements (they kinda do, but it's dodgy at best and didn't work at all for this project) – so I created a small algorithm which transformed the map units into a workable set of parameters based on some assumptions regarding hero height, speeds and real-world proxies. This data was then imported into ArcGIS, where I arbitrarily referenced the map to the O2 stadium in London (the map space works out to be approximately the same size as the O2 including all its surrounding buildings and land). From here I digitized each section of the map according to its use (i.e. is it used as a shop, a lane, or a juking path), mapped in all the static objects (camps, towers, necessary wards, runes, etc.) and then coded in elevation data (extracted from the hammer files and then converted into semi-believable real-world units). This elevation data was then interpolated to create a raster DEM.
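For the curious, the unit-conversion step boils down to deriving a scale factor from an assumed real-world proxy and rescaling every coordinate. A minimal Python sketch of the idea – the hero-height figures here are illustrative assumptions, not values pulled from the hammer files:

```python
# Sketch of the hammer-unit conversion: derive a metres-per-hammer-unit
# scale from an assumed real-world proxy (hero height), then rescale
# coordinates. Both height figures below are assumptions for illustration.

HERO_HEIGHT_M = 1.8    # assumed real-world hero height, in metres
HERO_HEIGHT_HU = 24.0  # assumed hero height, in hammer units

M_PER_HU = HERO_HEIGHT_M / HERO_HEIGHT_HU  # metres per hammer unit

def hammer_to_metres(x_hu, y_hu, z_hu):
    """Convert a hammer-unit coordinate triple into (approximate) metres."""
    return (x_hu * M_PER_HU, y_hu * M_PER_HU, z_hu * M_PER_HU)
```

In practice every digitized vertex and elevation point goes through a function like this before import, so the whole map ends up in one consistent (if semi-believable) coordinate system.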

After this I used the DEM and the land-use digitization to create a map of passable + non-passable locations to more clearly show areas that your hero can, and cannot walk.
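Conceptually the passable/non-passable overlay is just two raster operations: keep cells whose land-use class is walkable, and drop cells steeper than some threshold. A NumPy sketch of that logic (the land-use codes and slope cut-off are made up for illustration; the actual layer was built with ArcGIS tools):

```python
import numpy as np

LANE, JUNGLE, CLIFF, WATER = 0, 1, 2, 3  # hypothetical land-use codes

def passable_mask(dem, landuse, cell_size=1.0, max_slope=1.0):
    """Boolean raster: True where a hero can walk.

    A cell is passable if its land-use class is walkable AND the local
    slope (rise over run, from the DEM gradient) is below the threshold.
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope = np.sqrt(dz_dx ** 2 + dz_dy ** 2)
    walkable = np.isin(landuse, [LANE, JUNGLE])
    return walkable & (slope <= max_slope)
```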

Finally I used the DEM to start producing view-sheds. The first set of view-sheds that I ran were on what are commonly referred to as "necessary wards" – those wards which watch key areas of the map such as the runes or inner jungle camps. The results of these were then ground-truthed in game and found to be extremely accurate (within 1/8 of a unit). The next set were run on the most common of the "situational wards" – wards which are used in certain situations but are still common, such as those placed to block jungle camps from spawning, or to watch key walkways so you can predict fights. Finally I compiled a data-set of every ward I placed during a game and conducted viewsheds on each of those.
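For anyone wanting to poke at viewsheds outside of a GIS package, the core of the operation is just a repeated line-of-sight test against the DEM. Here's a naive sampled sketch in Python – not the algorithm ArcGIS actually uses (which handles interpolation and edge cases far more carefully), but enough to show the principle:

```python
import numpy as np

def visible(dem, ox, oy, tx, ty, eye=0.5):
    """Line-of-sight check from observer cell to target cell on a DEM.
    Samples elevations along the ray; the target is visible if no
    intermediate sample rises above the sight-line."""
    n = max(abs(tx - ox), abs(ty - oy))
    if n == 0:
        return True
    oz = dem[oy, ox] + eye   # eye = observer height above ground (a ward on a stick)
    tz = dem[ty, tx]
    for i in range(1, n):
        t = i / n
        x = ox + (tx - ox) * t
        y = oy + (ty - oy) * t
        ground = dem[int(round(y)), int(round(x))]
        sight = oz + (tz - oz) * t   # height of the sight-line at t
        if ground > sight:
            return False
    return True

def viewshed(dem, ox, oy, radius, eye=0.5):
    """Boolean raster of cells visible from (ox, oy) within a vision radius."""
    h, w = dem.shape
    out = np.zeros((h, w), bool)
    for ty in range(h):
        for tx in range(w):
            if (tx - ox) ** 2 + (ty - oy) ** 2 <= radius ** 2:
                out[ty, tx] = visible(dem, ox, oy, tx, ty, eye)
    return out
```

The radius parameter stands in for the ward's fixed vision range; a high cell in the DEM shadows everything behind it, which is exactly the cliff-occlusion behaviour the in-game vision circle fails to show.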

The Results:

A quick overview of the map, digitized side, passable vs non-passable and viewsheds.

The first thing that came out of this was gaining a really in-depth understanding of the map. Despite the game being conducted in a top-down way, I had never really engaged with the landscape academically – never studied it or tried to identify the landscape in a meaningful way. Until doing this analysis I just played the game with the landscape as an incidental aspect. Following the analysis I discovered how imbalanced certain parts of the map are, got an in-depth perception of how far certain areas are from others, how terrain affects movement over the landscape, and all the small "juking" areas of the map which are difficult to locate during the "omg im going to die PLZ HELP TEAM OMG TIMBERSAW USE UR ABILITIES PLZ QQQQQQQ" moments. Hand on heart, my gameplay has improved significantly since looking at the game as a landscape archaeologist rather than just a gamer.

The next thing to note from the results is that they ratified why certain wards are deemed necessary versus situational – the patterning was equal parts to do with the amount of visibility produced from a single ward, and with the type of thing it was watching. The ward spots for watching runes are both located on up-slope areas, so they have fantastic views over the river, but they also watch highly contested resources – placing the wards elsewhere would severely hinder a team's ability to contest or predict contestations. Those which are situational don't so much cover the greatest area, or the greatest resources, but rather serve a particular function – in the case of blocking jungle camps they serve the function of preventing a camp from spawning, so the primary focus is not visibility, and thus placing them on a tree-line (cutting out half the view or more) is in keeping with the purpose.

Through the analysis of my own ward placements in a single game it was shown that I was missing crucial spots through lazy and misguided placing – by sitting down and experimenting with how the views changed based on placement I was able to construct a table of “ideal” viewshed and “ideal” function factors per-lane, per-phase. As the meta of the game develops there will be more opportunities to test developing strategies and write them into the regression – but for now the outcome is that there is a workable set of the ‘best’ ward spots (where best is a factor of both visibility and function) which in turn means I will be receiving less “OMG CM WTF R U DOING, STPD WARD IDIOT, PLZ UNINSTALL” messages. It also means I am now able to gently educate fellow players with words such as “OMG Y U NO PLACE 2 UNITS TO SIDE, DNT U KNOW IT’S THE OPTIMAL SPOT SCRUB, UR MISSING 4% VIZ FROM THERE, PLZ GO COTCH IN JUNGLE USELESS SPRT”.

The Future:

So this is all actually really cool and interesting stuff for game-nerds. Being able to see where the best spots for wards are is amazing. Being able to test what you would be able to see if you put a ward in a particular place is mind-blowingly useful. And with a little bit of tweaking it could easily be made to show what a hero can see – a function which would help explain the outcomes of certain encounters, and assist in planning routes for future games. But this is too much responsibility for one person to dish out on a per-analysis basis. So work has started on making all of this available in real time, online. Basically, you drag and drop a point onto the map, which updates to show the visible area from that point, and you can toggle between the fixed viewshed of a ward and the variable visibility of a hero (changing with unit type and the day/night cycle). Understandably this is more difficult than it would seem, so it might be a while before full functionality is available, but for now I almost have a working model of the primary, situational and sample ward placements (hopefully up and running properly within the month). But I do still need to finish digitizing the whole map (to include the Dire side). Future Tara's problem.


Yes. This is an extremely superficial application of a tiny bit of archaeological theory and method to gaming (and admittedly of a technique which is itself borrowed from outside the discipline). But despite its superficial nature it has produced a set of results which are intensely useful for a multitude of players, and has fundamentally changed the way in which I interact with game-landscapes. The project by no means represents an end-point, or even a real reciprocity between gaming and archaeology, but rather a small token effort towards showing that there are grounds for producing interesting results between the disciplines in a way that is not a simple one-way street.

Post-ward analysis victories. They called me the backpack because I carried so hard.

Anyway… Enough waffling! Time to go play DotA2. This time armed with knowledge. And as we all know – knowing is half the battle.

As always, if you want to know more just hit me with a message and I'll be happy to talk it up :)

Outerra – A quick experiment with unidirectional viewshed timelapses

My last post was a quick overview of some of the possible applications of Outerra for visualizing landscapes and experiencing views over time. Today I got an unexpected free afternoon so spent it having another play around – this time visiting one of my favorite places in my homeland – Aoraki, New Zealand.

I chose Aoraki for a couple of reasons:

  1. Having worked in snowy areas before, I was shocked by how ground reflection and sun intensity can impact how truly visible parts of the landscape are – outside of early morning and dusk it can be extremely difficult to differentiate elevations or surfaces from each other.
  2. Fog or mist can have a huge impact on depth perception, visibility and the feeling of the landscape, and the places I have felt this impact the most have been in extremely mountainous regions, or extremely flat ones.
  3. It's a wickedly pretty place.

So! Here’s the video:

And… Here’s some screen-caps:

Early morning, Spring, Low Fog. Looking from ground back up towards Aoraki.

Low fog, Summer. Looking from ground back towards Aoraki.

Late Morning, Spring, Low Fog. Looking from ground back towards Aoraki.


Late Morning, Winter, Low Fog. Looking from ground back towards Aoraki.


Late Afternoon, Winter, Low Fog. Looking from ground back towards Aoraki.

No fog

Late morning, Spring, No fog. Looking from top of Aoraki.

Fog in spring

Late Morning, Spring, High Fog. Looking from top of Aoraki.


Late Morning, Winter, High Fog. Looking from top of Aoraki. 

And… Here’s some thoughts:

  • Summer, no fog: The water to the top left is clearly apparent in the sun-rise and sun-sets, but has a similar reflective index to the snow caps during the midday.
  • Spring, with fog: Completely obliterates view to water. Reduced sense of directionality.
  • Spring, with fog: flattened landscape
  • Time of day had a huge impact on what is highlighted or hidden
  • Sun intensity has a strong interaction with fog in determining visibilities
  • Visibility has an impact on how a route / environment feels
  • Time of day has an impact on how a route / environment feels
  • Time of day, time of year, climate factors all have an impact on the elements that are highlighted or diminished, which subsequently shapes how you feel and interact with the environment.

So again, a couple of very cursory images that start to pull apart empirical visibility vs experienced visibility. I continue to be amazed and inspired by the potential of Outerra – give it a few more years to mature and who knows what will be possible. Tomorrow's project is to map in vectors of least-cost paths as roads and see how the predicted pathing matches up with the views and experience of the landscape. Fun stuff.

Oh! The Places You’ll Go! (In Outerra)

Today was my day, I was off to great places. The DEM was ported, and the terrain populated. But as soon as it started, work came to a halt. And pesky geo-referencing was completely at fault.

For in Unity a problem exists, that try as you might, geo data is dismissed. Of course arbitrary attributes could be assigned, but doing so would be poor archaeological design.

Nihilism set in and I was left in the lurch, 'til I found a pretty nifty piece of research. It's an open-world engine by the name of Outerra, and you can go anywhere in it: Africa, America, even the French Riviera.

I'm all out of rhymes, so it's time to move up, up and away, to how Outerra could be used for archaeo-vis in quite a big way.

Quick Context: Outerra

Outerra is a 3D engine that progressively renders world-data from space to the surface. The world is procedurally-generated and leverages elevation (arbitrary or varied resolution) and climate data (refined through parameterized fractal refinement algorithms) to render 3D visualisations of the world which are dynamically textured using predefined materials and attributes.
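Outerra's refinement pipeline isn't something I can reproduce here, but the general flavour of fractal refinement can be sketched with classic midpoint displacement – shown in one dimension for brevity, and purely illustrative rather than Outerra's actual algorithm:

```python
import random

def midpoint_displace(left, right, depth, roughness=0.5, rng=None):
    """1-D midpoint displacement: recursively refine a terrain profile
    between two endpoint heights, halving the random amplitude at each
    level. Illustrative of fractal refinement in general only."""
    rng = rng or random.Random(42)  # fixed seed for a repeatable profile

    def refine(a, b, d, amp):
        if d == 0:
            return [a]
        mid = (a + b) / 2 + rng.uniform(-amp, amp)  # displace the midpoint
        return refine(a, mid, d - 1, amp * roughness) + \
               refine(mid, b, d - 1, amp * roughness)

    return refine(left, right, depth, amp=(abs(right - left) or 1.0)) + [right]

profile = midpoint_displace(0.0, 10.0, depth=4)  # 2**4 + 1 = 17 samples
```

Each extra level of depth doubles the sample density, which is the same trick that lets an engine serve coarse terrain from orbit and progressively sharper detail as you descend.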

As a user you can jump in and modify the fractals – manually assigning where climate transitions, vegetation, land type or textures begin and end.  Additionally you can overlay bitmaps or incorporate vector data, and if you so desire, set the fractal processes to degrade defined areas, allowing you to create artificial areas and observe them returning to the ‘natural’ over time. 

If that isn't enough, the engine can use real-time atmospheric modelling – which becomes important in rendering how altitude affects perception, vision and distortion. If you are the type who enjoys playing god you can also change the time of day, the intensity of the sun (to reflect seasons) and even implement different fog types.
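The standard graphics shorthand for this kind of fog is exponential (Beer–Lambert) extinction, where the light surviving a path falls off exponentially with distance and fog density. A generic sketch of that model, rather than anything taken from Outerra's internals:

```python
import math

def fog_visibility(distance_m, density):
    """Fraction of light surviving a path through uniform fog
    (exponential extinction: v = e^(-density * distance))."""
    return math.exp(-density * distance_m)

# Denser fog, or more of it, collapses visibility quickly:
for d in (100, 500, 1000):
    print(d, fog_visibility(d, 0.003))
```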

In short. It is absolutely mind-blowing. And that’s before you take into consideration the growing library of assets, the ability to place models, the vehicle, ship and aircraft physics, the sounds, and the ability to visualise the world in the Rift.

Babby's first steps in Outerra: Visualising Rapa Nui

Some parts are stunningly beautiful.

Some parts have near vertical cliffs and more jaggedy rocks than you could possibly imagine.

The archaeological project that has had the most profound impact on me was the Rapa Nui agriculture, archaeology and ecology survey which I assisted on in 2011/12. The place is by far one of the most impressive and expressive landscapes – at times unforgiving and brutally rugged, and at others an idyllic pacific island paradise (your interpretation of which can change in an instant depending on the exceptionally variable weather). Whilst on the island I saw first-hand how time of day, weather, temperature and altitude can have a massive impact on how the landscape presented itself visibly, and subsequently on how you navigated, thought about and embodied the landscape. Conventional, top-down viewsheds and cost-surface analysis have difficulty in expressing these factors, often presenting static and arbitrarily confined outcomes, which may have significant merit in quantification, but are sorely lacking in their correlation to an embodied reality.

So, my first steps into Outerra were predicated on visiting locations that had been part of my 2011/12 Rapa Nui archaeological experience and seeing what sort of proxies, experiences and tangible outcomes were possible at a basic level.

Using GIS co-ordinates of environmental base stations I began navigating my way around parts of the island whilst playing around with the basic environment settings (time of day, light saturation, reflection index). And to be honest, the results do a far better job than my words ever could of demonstrating how nifty Outerra really is.

Comparing Outerra visualisation (left) and site overview photo (right).

Comparison of times of day: Morning (Left). Midday (Middle). Evening (Right).

The implications?

So I literally had about an hour playing around and figuring out the basics, and as the results above show, there is an ability to very quickly generate reasonably accurate visualisations, and get a feeling for how a particular landscape looks. Being able to play around with time of day, time of year and lighting intensities allows for the user to observe how different lighting can influence the view and feeling of a site. Whilst I produced static images here you experience time in Outerra, so you can actively watch the sun rise and set, follow the shadows on the ground and really get a sense of the temporal – landscape interaction.

From this very, very brief and basic interaction it is apparent that Outerra has an absolute ton of potential for archaeological applications, and it's mind-blowing to think that I didn't even scratch the surface. Already it's shown that there is the potential to:

  • Actively participate in a landscape
  • Interact with and observe temporal factors within the space
  • Correlate site data to the virtual world
  • Experience how visibility, sense and feeling can be influenced by time of day and year

So it seems my original Unity project is temporarily on hold for the next couple of days as I start to explore some of the following potential applications:

  • Importing vector data
  • Creating degradation and weathering
  • Using the Rift to create 360 visualisations
  • Playing around with models, photos and editing environmental factors
  • Continuing to have my mind blown

So… be your name Buxbaum or Bixby or Bray, or Mordecai Ali Van Allen O’Shea. You’re off to Great Places! Today is your day! Your mountain is waiting.
So…get on your way!

Click here to go to Outerra’s website. 

Archaeology. Dear Esther and the Oculus Rift.

I have recently finished playing Dear Esther for a second time. The first time I played, I fell in love with the game. The second time round, as I wiped the tears from my eyes, I professed my undying and unconditional love. This time I played on the Oculus Rift.

Context: Dear Esther.

Dear Esther has been called a first-person walker, an art-game, an embodied experience and everything in between. In a sense it blurs the boundaries between what have conventionally been separate entities: virtual experience, story-telling, art, and game. The game (if such a term is appropriate) features breath-taking scenery, meaningful musical accompaniment, exploration and intricately woven narratives, which combine to create one of the most unsettling, moving and immersive experiences in gaming – and this is without immersion technology such as the Rift.

The game was originally conceived back in 2007 as a mod of the popular Half-Life 2, and its subsequent popularity led it to be published as a major standalone title to critical acclaim in 2012. The initial force behind the project was an AHRC grant to research telepresence – the sense of being immersed in the virtual environment and storyline to the point that the technological assistance is not immediately apparent to the user. In other words, you forget you are accessing the world via a keyboard and mouse, and are simply aware of being a part of the generated world. And for me at least, there was an inescapable sense of reality and presence in my original game experience. I can clearly remember the first time I played Dear Esther – the moments of breathlessness, of complete engagement with the world, the moments where I stopped being me, and became a projection of myself, engrossed in this incredible, beautiful, emotional world.

The parallels between, and applicability of, Dear Esther to archaeological inquiry have been re-hashed to death (see: here, here, and here) so I will avoid too much detail on the topic, however it is worth noting that there are, even on an elementary level, a number of points that could be extracted and applied to cultural heritage management and museum display – namely the seamless integration of narrative, place, music and object.

Context: Oculus Rift.


The Oculus Rift

The Rift is a head-mounted stereoscopic display unit which allows the user to perceive the world in 360° 3D. Head-tracking rotates the world as the user rotates their head, allowing for unparalleled positional tracking and environmental immersion.

Bringing it together.

Dear Esther is already an incredibly immersive experience, but playing with the Rift was a world unto itself. The first thing that really struck me was the scale of the landscape, insurmountably expansive, to the point of nihilism. Loading in and looking up towards the cliffs and towers, the journey ahead took on new meaning; by bringing the scale into 3D it became personal, and in many ways I began to embody the landscape – beginning to ascribe meaning and emotion to every step I had taken and every step I was yet to take.

In a similar vein the sense of place was heightened – spaces had feeling, meaning and emotion. The houses were no longer simply places to explore, but were claustrophobic, uncomfortable spaces filled with uneasy memories. Caves became expansive, confusing and fearful places – to the point that I experienced my only true panic attack in a game. I walked to an internal cliff-face that backed onto a plunge-pool, admiring how the Rift really accentuated that feeling of height; it helped me give meaning to the distance, it made me wary. As I moved off, my model clipped over the edge and I plummeted towards the pool of water. I was aware that I was in a virtual space, but my physiological response was as though I had slipped off that cliff myself. Body braced, every muscle contracting in shock and fear, the sharp intake of breath before hitting the surface of the water. The experience really drove home how different superficially engaging with an experience, and really living it, can be – even if that living only takes two of the senses and takes place in a virtual world.

Archaeological Applications.

Within current archaeological discourse there is a fair amount of back and forth regarding the implementation of Geographic Information Systems (GIS), with commentators from post-processual and phenomenological schools voicing concerns that prevailing methodologies reduce human agency and culture to a deterministic function of the environment. Even abstract models that take a cognitive approach end up having to utilise environmental proxies, or become subject to their own rule based determinism.

So where does that leave us with regards to Dear Esther and the Rift? Well, imagine for a second that on a preliminary level you could utilise digital elevation models as the basis for your world, populate it with trees and environmental assets as necessary, and then set about experiencing the world. Such a model would allow the popular GIS methods of view-shed, site-catchment and least-cost path to be experienced and embodied.
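Of those methods, least-cost path is probably the easiest to see in code: at its core it is Dijkstra's algorithm run over a cost raster. A minimal 4-connected sketch (real GIS implementations use richer neighbourhoods and anisotropic cost functions):

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path over a cost raster (4-connected moves).
    cost[y, x] is the cost of stepping into that cell; np.inf blocks."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    # Walk back from goal to start to recover the path
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

In an archaeological setting the cost raster would typically be derived from slope, land cover or energy-expenditure proxies; here it is just any grid of step costs.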

On a totally different level the ability to create embodied narratives such as that displayed in Dear Esther has intriguing potential as educational and display tools for consumption both online, or on a museum / heritage site. I personally think there is something pretty seductive about the potential to take the objects out from the cases, put them into their original contexts and allow the user to embody the experience in a multifaceted way.

The Future.

The Rift experience has inspired me to start porting GIS data into the game engine Unity and set about reconstructing an archaeological landscape. So, let me know what you think and stay tuned for updates on my experiments into Archaeology + Oculus Rift.

Value oriented object histories in games: Or how Animal Crossing: New Leaf challenged my inner archaeologist.

Recent discussions in one of my Masters papers have sparked some inner reflection regarding the treatment of the archaeological image in public media. This reflection explicitly revolves around how public media (TV, games, movies) challenges, shapes or reinforces ideas of archaeology and archaeological principles – for better and for worse.

Overwhelmingly the tone of the in-class discussion was positive: that being in the public eye, and generating public engagement, was more important than the form which that engagement took – to an extent reflecting the adage that all publicity is good publicity. But I am a sceptic, and as such would argue that the portrayal of archaeology can, and does, have ongoing implications for the expectations we set for public engagement – as it seems difficult, if not impossible, to expect those outside the discipline to construct meaningful and ongoing interactions if there is no platform to facilitate or promote this.

I base my rather more cynical view on my personal experience of how pervasive the 'treasure hunting – profiteering' paradigm can be, as I for one am guilty of buying into it, albeit subconsciously and unwillingly. Having played Animal Crossing: New Leaf on my Nintendo 3DS for a number of months, I can hand on heart say that I never realised how I was treating archaeological material in a virtual space.


The tools of the virtual archaeologist: a shovel and mummification wraps, acquired from a 'friend's house'.

So now more on the incident:

In the game you play as the Mayor of a town, responsible for its cultural, economic and social growth. To achieve this you interact with the townspeople and sell goods you gather to generate an income which facilitates the growth of your mini-empire.

The night following the CHM discussions I was playing when it hit me how I had been complicit in buying into the treatment of heritage as a game, a value asset and a tangible commodity – essentially all the things I had argued against with reasonable vehemence during the discussions. The situation went thus:

Digging in the ground, found some fossils and objects, took them to be appraised, was informed that because they were so rare they were extremely valuable, thought to myself "that's good as I'm trying to finance my new house extension", took said valuable palaeontological and archaeological remains to the local exchange store, sold them for copious profit, laughed all the way to the bank. It wasn't until later that I realised I had essentially aided virtual treasure hunting and proliferated the black market with tangible historic assets. Moreover, the game itself had set up the premise, and executed it in a manner which I did not question. I didn't engage with whether that history had importance. I plundered it. And sold it for personal gain. And it was a rewarding and enjoyable experience. And that's when the internal quandary and slight guilt set in.


It's old. Therefore, it's worth cash money. Ol' Lyle knows. Now go, find that archaeology to acquire mad profits.

This whole saga got me thinking about the proxy to the real world – of how through media we educate, normalise and facilitate this odd interaction (treasure hunting, quantified value of objects, self-interest) with archaeology, whilst dressing up more altruistic establishments and aims under reasonably intangible guises like 'cultural growth'. The presentation of archaeology and the establishment of museums is framed as value-based and optional, whilst profiteering and treasure hunting are actively promoted by the system. Claim all you want that this is fantasy, but it's a fantasy that's easy to buy into without even knowing. And it's a fantasy that has very real ongoing implications for how we frame and let people access our discipline. In an age focused on consumerism and personal interest, it seems that relying on altruistic motives and decency to promote 'proper heritage management' may be an increasingly difficult position to maintain, especially in light of the continuous reinforcement of, and ease of access to, the opposing side.



Which begs the question: how do we take an active stance to challenge this? How should this be changed, if it should be changed at all? Or should we simply continue to participate in this childishly innocent treasure-hunting charade, laughing all the way to the virtual bank as we actively participate in neglecting, abusing and profiting off the very thing we as archaeologists place significance upon in the real world?

Personally I think it's never too late to challenge the status quo, turn the tide, or even leverage these shortcomings to make a glorious comeback... As this DotA2 clip demonstrates (for those of you not into DotA, a short synopsis: Team 1 is getting de_stroyed, Team 1 makes a game-changing play and mounts a comeback, gets denied by Team 2, returns the deny in kind, and comes back again to win against all the odds with the worst early game in history... If we were to give a real-world proxy for what "feed early-game, win late-game" means, it would go something like: screw up really badly to start with, create a monster, defeat the monster, win at everything):

It's pretty clear that I sit on the side of the fence which thinks archaeological presentation in games and the wider media needs an Earthshaker-style chaos dunk to the face to turn the tide for the better. What do you think? Is this an actual issue, or are we creating an issue where none really exists? If it is a real issue, how could we mitigate it?