Can imagining the worst help us succeed?

Written by Mia Ridge, February 25, 2019

You can’t work on a project of this size without worrying about all the things that could go wrong. When I came across the idea of ‘pre-mortems’ – a format designed by Gary Klein to help teams identify risks before a project starts – it seemed like a perfect way to tease out all the ways our project might go awry. As the project team came together gradually, it also seemed like a nice way to learn more about each other’s perspectives.

We’d also been planning to ask colleagues in the wider field of digital humanities or digital scholarship what lessons they’d share from their own experience with large-scale projects, so we decided to combine the two exercises and run pre-mortems after other discussions with those invited experts.

The pre-mortem format involves imagining that you’re looking back on a failed project and asking, ‘what went wrong?’. Each team member writes down possible reasons for that failure, which are then shared and discussed. Some issues are resolved on the spot, while others become action points or milestones for the project, and others go onto our risk register.

We’ve now run two pre-mortems – the first in September 2018 with Professor Melissa Terras, and the second in February 2019 with Dr. Jennifer Edmond, Director of Strategic Projects for the Faculty of Arts, Humanities and Social Sciences at Trinity College Dublin. Jennifer’s keynote at the Digital Humanities Congress in 2018 touched on so many questions of collaboration, infrastructure and usability that were relevant to our project that we knew we’d benefit from her experience with projects like DARIAH and Cendari.

Thinking about reasons our project might fail

As we’re aiming to be quite radically transparent as well as radically collaborative, we thought we’d share a transcription of our post-it notes below (with thanks to Research Project Manager André Piza for typing them out so quickly). From a personal perspective, the themes I drew from the pre-mortem discussion were:

  • collaboration is hard
  • can we meet our goals re sharing data with others?
  • are we spending time on the important things?
  • does the Labs structure serve our wider needs?
  • fears about silos, gatekeeping and trust
  • fears that we won’t each get the benefits of learning together/from others
  • concern that research outputs won’t justify the expenditure and effort

Discussing the issues raised

Further reading

Klein, Gary. 2007. ‘Performing a Project Premortem’. Harvard Business Review. 1 September 2007.

Tervooren, Tyler. 2013. ‘The Pre-Mortem: A Simple Technique To Save Any Project From Failure’. Riskology (blog). 27 June 2013.

Transcribed post-it notes on ‘how might the project fail’ from the Living with Machines team pre-mortem with Jennifer Edmond, February 19, 2019, as grouped on the day

These notes are transcribed ‘as is’, except where I’ve expanded acronyms or added explanatory notes. An interesting sense of overlapping areas of concern emerges from the repetition of some points.

Team management + management of things

  • Ensure everyone can feel valued and be rewarded meaningfully
  • Managing internal/external expectations
  • Balancing level of theory-driven query and source-led investigation
  • Differentiating personal process from epistemic gaps
  • Raise questions without raising antibodies?
  • Misfit between our emphasis on process and institutional emphasis on output
  • Overall project team retreats to comfort zones/corners
  • Interdisciplinarity pitfall no. 1: satisfy no single audience; fail to get subject credit
  • That we won’t be radically collaborative
  • That we don’t have the same idea about what radical collaboration means
  • The project isn’t as radically new as expected and doesn’t bring to light new ways of collaboration
  • Stop collaborating because it is difficult to meet/navigate the group
  • People give up making the effort to collaborate after feeling a few times that it may not pay off
  • Integration between labs (project level)
  • Labs become a force to divide not unite; leads to fracture
  • Labs becoming tribes/clans (competition >> collaboration)
  • Labs become silos
  • Ensuring complexity leads to richness, not paralysis
  • Not enough understanding of each other’s assumptions about data
  • What activities and how much time does it take for the project team to peak their collaboration effort?
  • There is no proper collaboration between researchers and data scientists/research software engineers
  • Non-London staff get ignored because it is hard to be present, leading to loss of input + direction
  • Lack of trust will be an issue
  • There will be too many people in the project to keep track of what work is happening
  • Lack of understanding about creating lawful access to use exceptions [for text and data mining]
  • Turing research engineering staff change too frequently causing disruption
  • Spaces lab becomes standalone project
  • Labs aren’t aligned with delivery plan for funders
  • That the labs will become separate islands with no shared vision
  • Sub-projects don’t connect
  • Simpler methods may not be scalable (computationally) and take forever to run; more complex methods may be too obscure to understand relations and patterns in them
  • Key people will leave before the end
  • Loss of key staff
  • Institutional cultures get in the way of the project establishing a new collaborative culture
  • Overly managed collaboration processes prevent research from investigating new questions
  • People don’t finally commit to working in one group space

Sources -> impacts/outputs

  • Overlook some crucial historiographical work that we should have known about
  • That it will be very difficult to share any “data” with other resources

Impact + outputs

  • Research findings merely confirm what historians already know. Result: ridicule
  • The project won’t generate any new insights
  • Boring research results after four years of research. Tell me something new asks the historian
  • Produce work that is politically naïve
  • Historical outputs insufficiently novel
  • Research questions are too conservative
  • It will be hard for external researchers to relate to this kind of project (scale/budget)
  • Focus on MRO [minimum research output, an experiment in adapting ‘minimum viable projects’ or MVPs from software development] at root of averting vision
  • No major benefits for the Digital Humanities (DH)/Research community (tools/methodology, etc…)
  • Research communities outside DH will not be interested in the project
  • There is not enough effort in communicating the outputs to the general public
  • The analysis, methods and results we find interesting from a more technical perspective are not relevant from a more humanistic perspective
  • Will project-splain findings + our peers will resent us
  • Negative perception from communities because we cannot be open with our source/data
  • Project can’t convey to external community what it has achieved
  • Tools produced by the project won’t be used by anyone else
  • No good venues to publish + x-disciplinary outputs
  • Exhibition is under-resourced
  • That the exhibition won’t be linked to research results
  • How “productive” to be?
  • The outputs will not be used by those they are meant for
  • Resulting tools, methods, etc… are extremely slow and a pain to use for actual historical research (because of too much data)
  • Tooling is too complex/at the wrong level of granularity
  • No one will use any of the outputs from the project


  • Ethics around data: fast moving regulations and perceptions affect findings or reputation of the project
  • Duplication hits from elsewhere rather than co-creating (esp oceanic exchanges EFI)


  • The corpora we use are biased and the research outcomes aren’t interesting
  • Unable to create useful data from resources
  • Faulty data that will invalidate years of research
  • The published outputs will be the only record that survives the project
  • Source discussions made at too fine a level leading to wasted energy/resources
  • Methodological blunders in corpus building
  • Text sources have poor OCR and new text extraction becomes a bottleneck
  • Slow arrival of census data slows progress of Time and Space Lab
  • Southern bias in focus, events (???)
  • Access to sources is revoked leading to no research
  • Will self-limit the sources we use


  • Brrrrrrexit (whatever it means)
  • What happens to unspent year 1 funding?
  • Funding collapse post Brexit
  • Running out of money after one year (being on the job market again)
  • Funder expectations not met
  • Contractual arrangements not finalised
  • Moving between Named Entity Recognition to relations and events too hard so not done
  • Rights users “infect” other intellectual property

Personal development

  • I don’t have time to learn the new skills I had hoped for
  • With all the (AHRC) reporting we will get bogged down in paperwork
  • Discipline x person finds they can’t learn discipline because of: time, being pushed away, self-doubt or lack of interest

Cite this article as: Mia Ridge, "Can imagining the worst help us succeed?," in Living with Machines, February 25, 2019,
