
XRchiving @KCL
by Andy Corrigan

XRchiving – “Where heritage, data and immersive technology meet.”

XRchiving is an initiative from King’s College London, inspired by the advancing pace of innovation around extended, virtual, augmented, and mixed reality (XR) technologies used in digital heritage, place-shaping, and storytelling. This event, its second iteration, aimed to showcase, discuss, and encourage collaboration across the galleries, libraries, archives, and museums (GLAM) sector, universities, local communities, public authorities, and architects/designers. Here I’ve tried to summarise some of my key takeaways from the event, many of which have reinforced and invigorated some of my own thinking around my AHRC-RLUK Professional Practice Fellowship.

Setting the scene

Michael Mainelli, the Lord Mayor of the City of London, set the scene for the day’s discussions with his keynote address, ‘History refined by technology: connecting to prosper in the World’s Coffee House’. I find a certain comfort in things that start their journey by looking back to inform forward thinking; there’s something unsettling about attempting to discuss the future without thinking about how we’ve got to where we are. I was pleasantly surprised at Mainelli’s enthusiasm for embracing psychogeography and flânerie, which he elegantly linked to his mayoral theme “connect to prosper”. It was also refreshing to hear someone stress the economic benefits the GLAM sector contributes, as he seeks to celebrate the breadth of what he calls ‘knowledge miles’ – the density of skill and knowledge that exists within a two-mile radius of the City of London.

In a 2013 article for the Paris Review, ‘In Praise of the Flâneur’, Bijan Stephen asked, “as we grow inexorably busier—due in large part to the influence of technology—might flânerie be due for a revival?”. Responding to this, Mainelli suggested that the practice is far from consigned to the past and could play a prominent role in how we embrace technology in the future. XR, he said, could be the thing that turns flânerie and technology from opposing forces “into concert with one another.” Contemplating this, he stressed the importance of people being able to take quality time to experience the things that are happening, and of events like this one focusing on cultural collections and data. This is an intrinsic part of the ethics that we should consider, which is why things like the London Charter are so important.

Charting our Principles

Drew Baker (a founding author, now at the Cyprus University of Technology) stresses that The London Charter has not been reviewed since 2009. Technologically speaking, that was a long time ago – as these stark reminders illustrated:

  • Only 50% of the UK had broadband.
  • Barely anyone had an iPhone.
  • Social media was in its infancy.

Access wasn’t ubiquitous, and the tools and services weren’t so advanced, but computer graphics were becoming more important: they were becoming a tool in themselves rather than just a way of illustrating things. There were a lot of people experimenting, and a lot of low-quality outputs. So, The London Charter was a response to the need to surface, evidence and promote the academic input and value – to demonstrate the “how” and the “why”. One of the key concepts the Charter envisaged to achieve this is paradata – the data about the processes, analysis and decisions involved in digital cultural heritage.
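
To make that idea concrete, here is the sort of thing a paradata record might capture for a single 3D model – a purely hypothetical example, sketched as a Python dictionary, with field names I have invented for illustration rather than taken from the Charter itself:

    # A hypothetical paradata record for one 3D visualisation, sketched as a
    # Python dictionary. All field names and values are invented examples,
    # not drawn from the London Charter or any real catalogue.
    paradata_record = {
        "asset": "notebook-cover.glb",
        "capture_method": "photogrammetry (turntable, 120 images)",
        "processing": ["image alignment", "dense cloud", "mesh decimation to 250k faces"],
        "decisions": [
            "spine damage left unrepaired in the mesh to reflect current condition",
            "colour balanced against a physical reference chart rather than estimated",
        ],
        "uncertainties": ["interior gutter geometry interpolated, not captured"],
        "created_by": "digitisation team",
        "date": "2024-05-01",
    }

The point is less the particular format than the principle that the “how” and the “why” travel with the asset itself.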

I’m familiar with this concept, but my own experiences lead me to question whether we’ve lost sight of the importance of paradata. Its use seems far from universal, and it is something I have long felt we could be doing better through the Cambridge Digital Library platform. I’m not sure why this is; it’s possible that the creation of paradata isn’t always simple, or that its incorporation is viewed as less important than the Charter envisaged. A quick Google out of curiosity might suggest this is the case – the first search result is a site which, although related to aspects of cultural history, turns out to be solely about data on the history of the UK’s Parachute Regiment and Airborne Forces. The second result is, more promisingly, a Wikipedia page, although it focuses on paradata as information about undertaking a survey, questionnaire or interview. The page’s “talk” page reveals that perhaps I’m not the only one seeking some relevance to computer visualisations of cultural heritage, so perhaps I should revisit the page and work on updating it?

I digress, and meanwhile Baker has reminded everyone of how much things have changed and all the things we’ve gotten really good at:

  • Creating digital tools and content.
  • Using quantitative methods to analyse.
  • Supporting personal engagement.

But he also draws our attention to some of the remaining gaps as he proposes why a review of the Charter is due:

  • We have yet to master the intangible.
  • The importance humanity affords culture is fragile.
  • The world is unstable; there are problems, and safeguarding is challenging.
  • We need to remember the importance of being able to connect and understand heritage, so that the world can understand and navigate its differences.

As the discussion moves into some of the principles that might inform the next version of the Charter – FAIR and CARE, for example – Fridolin Wild (Open University) highlights that “we are living through a frontier change where it is becoming possible to generate experience”. Technology isn’t ready yet and developments are fast-paced, but the FAIR and CARE Principles allow us to start democratising participation. Ronald Haynes (University of Cambridge) comments that this will be a key aspect of the re-write; the language needs updating so that users are no longer passive, but active participants who are involved and invested. As the discussion draws to a close, Drew stresses that there is no point doing what we do without FAIR/CARE, and Fridolin cautions that there’s a lot that’s “not” happening.

During the coffee break, prompted by the discussion around paradata, I had an interesting discussion with a lady about the importance of capturing stories of process. As I was describing some of the 3D work we have been doing with Darwin’s archive, she drew a comparison between how we create content these days and the social history that is captured, for example, through the translation of Darwin’s work and ideas on evolution.

Data underpins the future

There is, quite rightly, also a lot to discuss around the digital preservation of cultural heritage. Again, there’s a worrying amount that’s “not” happening, as Jane Winters (University of London, School of Advanced Study & Digital Preservation Coalition) puts her finger firmly on the issue: “It’s not the most exciting, but it is the most important.” George Oates (Flickr pioneer) gives us a snapshot of some of the other issues at play – Flickr, for example, is such a big platform with a huge amount of visible data as well as paradata. However, the user interface is constantly changing, making it almost impossible to capture, so their focus is on that data. Cathy Williams (King’s College London) mentions another challenge, which is that everyone, across different sectors and scales of institution, is operating at different speeds with differing levels of resources. This is something I’m not sure is discussed often enough, and it is something I’ve also experienced in the parts of the pipeline that generate content as well as those preserving it. But for digital preservation the effects are perhaps compounded by the relatively recent attention it is being afforded after much neglect.

The panel go on to discuss some of the fundamental differences between digital and traditional archiving, many of which stem from incompatibility. Traditional archives are generally flat, in both form and structure, but digital archives are complex and exist in a “networked order” rather than a chronological one. This flatness has informed user interfaces, as we’ve simply sought to replicate what is traditionally offered, only online rather than in person. Another factor revisits the issue of pace – cultural history is now being poured into social media, and traditional methods and workflows, not to mention our ability to change them, can’t match that pace. The conversation has only really just started… One thing I’ve noticed recently is that, with all the jostling to get to grips with AI, attention is held by its spotlight. It is shiny and new and exciting. I wonder how much faster digital preservation would progress if it were to shine as brightly in such clamour?

Twin-it!

3D digital content of cultural heritage is getting easier, quicker, and cheaper to produce all the time, but it is still lagging far behind flat digitisation. Some of the challenges being discussed can’t be solved until we have a larger dataset of examples, and a greater understanding of it. Trying to address this, Europeana are spearheading a campaign, Twin It!, calling for European institutions to contribute more 3D content, with the goal of “twinning” physical cultural assets with digital ones – will this mark a turning point? I’m not convinced, but every little helps, and the scale of it alone is full of potential.

Creative abstraction

During the breakout workshop session, I attended a lively discussion about abstraction facilitated by Brian Schwab (Director of Creative Play Group at LEGO Group). Schwab shared his experience of using abstraction to better understand datasets, drawing an analogy with how we have grown to understand Stonehenge as an ever more complex site, with multiple points of view to unpick. One method is ‘Agent-Based Modelling’ – computer simulations used to study the interactions between people, things, places, and time. These processes often reveal and point to new information, ideas and connections. This process of abstraction struck a chord with my experience working in archaeology, as there are many similarities with how the analysis of archaeological sites is undertaken. Rather than trying to compare every piece of knowledge, every object, every time, and every place all together at once, distinct features can be compared in order to build an understanding of the site in its own context.
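
As a toy illustration of what agent-based modelling involves (and emphatically not Schwab’s actual model – the agents, places and parameters below are entirely invented), a handful of simulated ‘visitors’ can be let loose on a few places, and the interactions they generate can then be aggregated and examined:

    # A minimal, illustrative agent-based model: hypothetical "visitors" move
    # between a few named places over a series of time steps, and we then
    # count how often each place was encountered. Everything here is invented
    # for illustration; it is not the Stonehenge model discussed at the event.
    import random

    random.seed(42)  # make the toy run reproducible

    PLACES = ["stone circle", "avenue", "barrow", "river crossing"]

    class Visitor:
        def __init__(self, name):
            self.name = name
            self.visited = []

        def step(self):
            # Each time step the agent either revisits somewhere it already
            # knows (a crude stand-in for memory) or explores somewhere new.
            if self.visited and random.random() < 0.3:
                place = random.choice(self.visited)
            else:
                place = random.choice(PLACES)
            self.visited.append(place)

    agents = [Visitor(f"visitor-{i}") for i in range(5)]
    for _ in range(20):  # 20 time steps
        for agent in agents:
            agent.step()

    # Aggregate the simulated interactions: how often was each place visited?
    counts = {place: 0 for place in PLACES}
    for agent in agents:
        for place in agent.visited:
            counts[place] += 1
    print(counts)

Even at this trivial scale, the value lies in the aggregation: patterns emerge from many simple interactions that no single agent’s history would reveal.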

StoryTrails

Amanda Murphy & Helen Scarlett O’Neill (Royal Holloway) also ran a workshop. Whilst I didn’t attend, their presentation suggests the project would be well worth exploring further, particularly in the context of the ‘Walking with Constable’ project I have been working on. StoryTrails uses augmented reality to place virtual objects in public spaces to tell history where it happened. The project uses psychogeography, maps and trails to undertake a digital “imprinting” on the physical. They explored questions such as:

  • Why integrate the digital with the physical?
  • What purpose does it serve?
  • What does it mean?

GLAM in 2034

The panel attempted to tackle a huge topic with brevity, and there was a lot of food for thought and several food-related analogies! In contemplating whether collaboration will increase, Geoff Browell (Digital Archivist at King’s College London) summarises that we need more examples, more people doing things, building relationships and friendships [and that’s important!]. The more examples we have, the more sustainable it will all become – coming together around common standards, figuring it out. We shouldn’t preserve everything … we can’t!

Why are libraries so important? – Another challenging question was taken on by Helen O’Neill (StoryTrails/Royal Holloway). Libraries were one of the first places that enabled people to access the internet. They’re not just places to provide basic breadline services; they are about making sure everyone can participate in possible public futures. They remain the place people can discover things, a place where everyone should feel welcome to participate, share, and relate to things.

A collage of photos; a psychogeography of this post

End Note. [Or is it a beginning?!]

This conversation between Tam McDonald (Cradle of English & XRchiving Limited) and Dr Jamil El-Imad (Imperial College London) was an inspiring and invigorating way to top off an excellent day of discussions. I’ve tried to capture a sense of the conversation below:

Tam asked Jamil to reflect:

Do immersive technologies make us smarter, or do they at least keep us from getting stupider? What are the neurological processes behind immersive content?

A former computer programmer turned neuroscientist, Jamil is also CEO of The Brain Forum and began by explaining why he got involved:

“We still don’t know how [the brain] works; we don’t have a theory. We have a space theory, but we don’t have a brain theory. We don’t know the signalling language of the brain, we don’t know the language of thought, and this excited me. Scientists work in silos; they don’t talk to each other. So, we wanted to bring people together through a forum for anyone interested in how the brain works.”

How much are we affected by analysis, and how much by emotion?
  • The brain uses less energy than a 40W light bulb but has 83 billion processors. We cannot simulate or replicate that.
  • We are far, far away from understanding the orchestra that operates in our heads permanently, day and night.
  • One thing all scientists agree on is “Neuroplasticity” – the ability to rewire ourselves, and this is the magic of what makes us individual and human. This is why mental training leads to better results and improves resilience.
How much is neuroplasticity driven by things like cognitive behavioural therapy and talking therapies, and how much is down to the extent to which we get out into the world, work with other people and have social interactions?
  • Both are important but mindfulness and focus are key. If you know how to focus, then you can focus on positive thinking.
  • 50% of the time our mind is wandering; it’s not where we want it to be. So if you can control it more, you can improve yourself. Like the way yoga works, for example, mindfulness helps to achieve focus. But it takes a long time, so how can we speed up the training? So, we made the Dream Machine.
  • The way neuroplasticity actually works, is that experience changes your wiring and VR/XR changes your experience…

“It’s been really interesting and inspiring today, because I’ve never looked at cultural heritage and VR from that perspective.”

What stood out, in terms of the effect on the brain and human cognition?
  • What the Lord Mayor said about psychogeography this morning is so, so important to cognitive processes. Places bring emotions and connections. Going to a library brings emotions. So, digitising all these archives and leaving them in cyberspace does not give you the same connection as going to a library, looking around and connecting to people.
  • VR/XR is not a substitute; it’s a complementary way of consuming this information. We still need the other choices we have; this is important to remember. Work on XR/VR should not negate other work, it should be an add-on.
  • These experiences are a journey, not a destination. We need to learn, we need to embrace the change, but work with it carefully and not rush into things – evolve the thinking and take it one step at a time.
  • Metadata is one of the biggest problems, if you don’t have metadata, you don’t have context. That might be where AI can come in to help.
  • We cannot assume that everything on Flickr is valuable enough to archive – how do we decide? We need to be mindful about that.
  • The way we talk about these things has changed over the last 40 years:
    • In the 1970s we used the term “data”.
    • In the 1980s and 1990s, “information”.
    • In the 2000s it was all about “knowledge”.
    • Now, in the 2020s, we talk about “experience” – this is what XR brings!

This post has been funded by the AHRC-RLUK Professional Practice Fellowship Scheme for research and academic libraries.