
Reflections from the Humanitarian Networks and Partnerships Week
Last week, Helen and I attended the Humanitarian Networks and Partnerships Week. We went because learning and the use of knowledge had a prominent place on the agenda. It’s a strange time to be in Geneva gathering with humanitarians while so much is happening in the world. Nevertheless, I believe these get-togethers are important. My mission for the week: have as many conversations as I could about how knowledge gets connected to decisions in the humanitarian sector.
The backdrop was not new: funding cuts, the humanitarian reset, pressure to do more with less. But underneath the operational urgency, there was a thread running through almost every session I attended. We have built a sector that generates enormous amounts of knowledge, and we have not built the infrastructure to hold it. And I keep asking myself, what would it look like if we were serious about using the knowledge available to us? We don’t know what we don’t know.
Juliet from ALNAP put it plainly: we are rich with experience. We know more now than we did twenty years ago about what works. The evidence exists. The problem is not a shortage of knowledge; it is what happens to it. Another point I keep coming across, inside and outside the conference, is a shifting sense of what counts as evidence. For a long time, everything has revolved around quantitative data, perhaps because qualitative data can feel intimidating. The sector is already under enormous pressure; nobody has time to read hundreds of pages and conduct rigorous qualitative analysis on top of everything else. Yet that is exactly where the knowledge sits that might unlock patterns about what works that we aren't even aware of.
One comment that stood out to me, a reminder of USAID’s shutdown and the resources that disappeared with it, was the image of a website holding years of accumulated humanitarian learning being bought by a casino after the organisation shut down. The domain changed hands, and the knowledge was just gone. It sounds almost absurd, but it really is that easy for years of humanitarian knowledge and experience about what worked and what didn’t to get wiped out. Luckily, just as with the knowledge rescue effort for USAID, a team of practitioners put their heads together to save that critical knowledge.
What struck me about this year's sessions, and this may be confirmation bias to some extent given the sessions I signed up for, was how many people named the same structural problem from different angles. A panellist on value for money talked about the difficulty of justifying investment in learning when delivery is seen as the real work. Someone pushed back on that framing, and rightly so: you cannot have good delivery without collective intelligence. That false dilemma, learning versus doing, is one the sector keeps reproducing, and it is costing us.
The UK Humanitarian Hub and Elrha held a session on tacit knowledge, one I kept thinking about afterwards. The question Elrha started with was a good one: how does tacit knowledge travel from daily decisions to strategic ones? What I heard in the room was that tacit knowledge is often more current, more grounded, and more useful than formal systems capture, but it requires very different conditions to surface. It does not come out through questionnaires. It comes out through spaces, through storytelling, through role-play, through the kind of trust that takes time. One line I wrote down was this: "Not every insight needs to become universal knowledge." There is something important in that. The drive to systematise and scale everything can strip knowledge of the context that made it useful in the first place, whereas considering different sources of knowledge and combining them would give us genuinely crowd-sourced knowledge about what works, where and how. We’re always on the lookout for that one piece that will solve it all, that one best practice that will be the red thread and universal answer. Maybe it’s not that. Maybe it’s combining different pieces of knowledge and knowing into a fuller picture of what can work where.
There was also a moment in one of the learning sessions that I keep coming back to. Someone asked what intuition actually is. We’re all guided by mental models that we take for granted. Often, due to time constraints, we simply work from what we assume we know. There may be time to quickly call up a colleague and ask what they know, but to do that you first have to know who to call, and even then you only get part of the picture from that one person’s experience. The answer offered in the session was that intuition is not something mystical: it is accumulated experience, pattern recognition built over time. The point was that we should be making sure humanitarians have access to the knowledge that lets that intuition develop properly, collectively, not just individually. Indigenous knowledge came up here, too. Someone mentioned the example of flood prediction through the movement of ants or the changing calls of monkeys. What if that kind of knowledge were actually integrated into response systems rather than treated as anecdotal?
The locally-led learning session was set up differently. Organisations working in Somalia, Tanzania, and Ukraine talked through how they had built learning into implementation from the start, not as a reporting requirement but as something that actually shaped decisions. They were asked to describe who has learned something and how that learning becomes visible. It seems simple, yet most projects do not have it baked into their ways of working, and monitoring systems do not ask for it.
I came away from the week with a few things to sit with. While I feel I got somewhat closer to answering how knowledge is actually used to inform decisions, I also felt I was moving further away from it. It’s great to see that knowledge is getting a more prominent position: we are going back to seeing it as the public good that it is, as the backbone of all humanitarian and development action. Nevertheless, what knowledge is collected, synthesised and then used is another question. Additionally, we claim to have strengthened accountability through quantitative indicators over the last decades, yet the urgent dimension of knowledge loss is not receiving as much attention as I’d argue it should. It is not just about effectiveness, about learning what works. When we lose the record of what happened, we also lose the ability to be accountable for it. Someone said that “neglect is a form of violence”. The knowledge you choose to ignore, or fail to preserve, is a choice with consequences.
The other thing I am thinking about is infrastructure. Not tools, necessarily, though that is part of it, and it is part of why we are developing Propel. But more broadly: the funding models, the time, the organisational cultures that would need to change for knowledge to actually be treated as a public good. That phrase came up, that humanitarian knowledge is a public good, and I think it is right. It just has not been resourced like one. If we hold collective intelligence about what can work to tackle climate change, strengthen resilience, fight hunger and collaboratively provide life-saving aid, we have a responsibility to put it to use.