Frank van Harmelen: Hybrid Human-Machine Intelligence for the AI Age – Episode 33

22/05/2025 29 min


Episode Synopsis


Much of the conversation around AI architectures lately is about neuro-symbolic systems that combine neural-network learning tech like LLMs and symbolic AI like knowledge graphs.

Frank van Harmelen's research has followed this path, but he puts all of his AI research in the larger context of how these technical systems can best support people.

While some in the AI world seek to replace humans with machines, Frank focuses on AI systems that collaborate effectively with people.

We talked about:

his role as a professor of AI at the Vrije Universiteit in Amsterdam
how rapid change in the AI world has affected the 10-year, €20-million Hybrid Intelligence Centre research he oversees
the focus of his research on the hybrid combination of human and machine intelligence
how the introduction of conversational interfaces has advanced AI-human collaboration
a few of the benefits of hybrid human-AI collaboration
the importance of a shared worldview in any collaborative effort
the role of the psychological concept of "theory of mind" in hybrid human-AI systems
the emergence of neuro-symbolic solutions
how he helps his students see the differences between System 1 and System 2 thinking and their relevance to AI systems
his role in establishing the foundations of the semantic web
the challenges of running a program that spans seven universities and employs dozens of faculty and PhD students
some examples of use cases for hybrid AI-human systems
his take on agentic AI, and the importance of humans in agent systems
some classic research on multi-agent computer systems
the four research challenges they are tackling in their hybrid intelligence research: collaboration, adaptation, responsibility, and explainability
his take on the different approaches to AI in Europe, the US, and China
the matrix structure he uses to allocate people and resources to three key research areas: problems, solutions, and evaluation
his belief that "AI is there to collaborate with people and not to replace us"

Frank's bio
Since 2000, Frank van Harmelen has played a leading role in the development of the Semantic Web. He is a co-designer of the Web Ontology Language OWL, which has become a worldwide standard. He co-authored the field's first academic textbook and was one of the architects of Sesame, an RDF storage and retrieval engine in wide academic and industrial use. This work received the 10-year impact award at the International Semantic Web Conference. Linked Open Data and Knowledge Graphs are important spin-offs from this work.

Since 2020, Frank has been scientific director of the Hybrid Intelligence Centre, where 50 PhD students and as many faculty members from 7 Dutch universities investigate AI systems that collaborate with people instead of replacing them.

The large scale of modern knowledge graphs, which contain hundreds of millions of entities and relationships (made possible partly by the work of Van Harmelen and his team), opened the door to combining these symbolic knowledge representations with machine learning. Since 2018, Frank has pivoted his research group from purely symbolic Knowledge Representation to Neuro-Symbolic forms of AI.
Connect with Frank online

Hybrid Intelligence Centre

Video
Here’s the video version of our conversation:

https://youtu.be/ox20_l67R7I
Podcast intro transcript
This is the Knowledge Graph Insights podcast, episode number 33. As the AI landscape has evolved over the past few years, hybrid architectures that combine LLMs, knowledge graphs, and other AI technology have become the norm. Frank van Harmelen argues that the ultimate hybrid system must also include humans. He's running a 10-year, €20 million research program in the Netherlands to explore exactly this. His Hybrid Intelligence Centre investigates AI systems that collaborate with people instead of replacing them.
Interview transcript
Larry:
Hi, everyone. Welcome to episode number 33 of the Knowledge Graph Insights podcast. I am really delighted today to welcome to the show Frank van Harmelen. Frank is a professor of AI at the Vrije Universiteit in Amsterdam, that's the Free University in Amsterdam. He's also the PI of this big program called the Hybrid Intelligence Center, which spans seven Dutch universities, multimillion euro grant over 10 years. Welcome, Frank. Tell the folks a little bit more about what you're up to these days?

Frank:
All right. This Hybrid Intelligence Center occupies me most of the time, and that's been a very exciting ride over the past five years. We're just at the midpoint and we have five more years to go.

Larry:
Nice. How is it going? Are you satisfied? Are the expectations of the grantors being met and are you happy with the progress you're making?

Frank:
Yes. It's obvious to say that the world of AI is super dynamic now. All kinds of things have happened in the past few years in AI that nobody had predicted when we started, the rise of large language models and conversational AI. That has also really affected the notion of hybrid intelligence. It's been an even more exciting ride than we had expected.

Larry:
Yeah. That's right. Yeah. I think excitement is the word of the day. Hey, one thing I have to observe, earlier today before we recorded this, I was doing a presentation with some information architects, and the subject I was talking about was hybrid AI architectures and neuro-symbolic loops and all this stuff. One of the people in the presentation asked, "Hey, what about human AI? Shouldn't that be the architecture?" Then I said, "You're going to love my next podcast guest," because that's the whole point of this hybrid intelligence idea, right?

Frank:
Yeah. The core idea of hybrid intelligence, hybrid standing for the hybrid combination of human and machine intelligence. Think of hybrid teams, where a hybrid team is made up of a bunch of people and a bunch of AIs who collaborate to get a task done. That, if you want, the tagline of the Hybrid Intelligence Center is that we're working on AI that collaborates with people instead of replacing them. If you work on AI systems that collaborate with people, then you certainly need to solve all kinds of different problems and answer all kinds of different questions than when you are thinking about AI in the replacement mode.

Larry:
Yeah. That seems to be, like in a lot of circles, there's this assumption that AI is just here to replace people, but you've been... Long before that was a meme and people talking about it, you were working on this hybrid concept. Has that heightened the urgency around your work, the current state of AI expectations?

Frank:
It has heightened the urgency, and it has also opened all kinds of doors. One of the big hurdles in AI-human collaboration, say five years ago, was really the conversational interface. It was hard to talk to AI systems, and they certainly wouldn't talk back to you in a coherent way. Well, we all know that's now a solved problem. But what happens in the middle is the real challenge. We don't think that large language models are going to solve all of the collaboration between humans and AI systems. We want our AI systems to do things that the language models are not very good at, but we're using that technique in a kind of sandwich model. Now, the language model does the conversation on the front end, it does the conversation on the back end, and we're working on the AI agents, the smarts in the middle, to create these hybrid teams.

Larry:
As you say that, I'm thinking that's just one aspect of the hybridization of this. When you think about hybrid architectures, LLMs can help build knowledge graphs, and knowledge graphs can also fill in knowledge gaps in LLM architectures. What other obvious complementary things are there? What do humans need help with and what do machines need help with?

Frank:
Right. There are some obvious things like the perfect memory that machines have and the imperfect memory that we have. Okay? That's a nice example of where members in the team can really compensate for each other's strengths and weaknesses. Humans suffer from a whole host of these cognitive biases. For example, we suffer from the recency effect. We believe information more if we've heard it recently rather than when we've heard it in the past. We believe information more when we've heard it more frequently rather than... There's no reason to believe something more if you hear it more often.

Frank:
That doesn't make it more true, but it's how our brain works. Not always such a good idea. Computers can help us to compensate for all of these cognitive limitations. Conversely, we are very much aware of the context in which we operate. We are aware why we are doing something. We are aware of the implicit norms and values that govern the task that we're doing, that we're expected to obey in a particular group to perform a particular task. Computers don't have any sense of why they are doing something, the context in which they're doing it, the social and ethical norms under which they should operate. That's something where the human component can compensate for the machine limitations. These are just a few examples of that complementarity.

Larry:
Yeah. That's one of the things I think about a lot is that what we call in my world stakeholder alignment or stakeholder discovery or working with subject matter experts to make explicit their tacit knowledge in their head and things like that. It seems like that's probably always or mostly going to be a human capability. Is that... You probably have research that backs this up, right?

Frank:
Well, and if you want to collaborate with a computer, then you better make sure that there is some alignment between you and the computer.
