Could decentralised knowledge graphs be key to AI safety?

Dexter Awoyemi
2 min read · May 6, 2023
[Image: Rare photo of my thoughts]

Knowledge graphs intrigue me because they bring “neuro-symbolic” reasoning to automated systems (including GPT).

I oversimplify this as combining the pattern-matching strengths of probabilistic AI with explicit, specific knowledge.
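
Here's a toy Python sketch of what I mean by that combination. The tiny triple store and the prompt format are made up for illustration — they're not Golden's or anyone else's actual API:

```python
# Ground a probabilistic language model in explicit facts pulled from
# a (very small) knowledge graph of (subject, predicate, object)
# triples. All data here is illustrative.
TRIPLES = [
    ("TikTok", "owned_by", "ByteDance"),
    ("TikTok", "launched_in", "2016"),
    ("ByteDance", "headquartered_in", "Beijing"),
]

def lookup(entity):
    """Return every triple that mentions the entity."""
    return [t for t in TRIPLES if entity in (t[0], t[2])]

def grounded_prompt(question, entity):
    """Prepend retrieved facts so the model reasons over known data
    rather than relying purely on learned probabilities."""
    facts = "\n".join(f"- {s} {p} {o}" for s, p, o in lookup(entity))
    return f"Use only these facts:\n{facts}\n\nQuestion: {question}"

print(grounded_prompt("Who owns TikTok?", "TikTok"))
# The resulting prompt would then go to whichever LLM you're using.
```

The facts come from the graph (symbolic); the fluent answer comes from the model (probabilistic). That's the neuro-symbolic pairing in miniature.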

I explored knowledge graphs (and decentralised knowledge graphs) in more detail a couple of months ago in my Open Data Substack post.

This week, I was surprised and delighted to come across a ChatGPT plugin for Golden, a DKG I didn’t even know existed 😅

BUT the prompting didn’t work as expected.

Instead, I got generic responses and had to disable my other plugins.

I don’t know much about TikTok, so I’d hoped the plugin would help me learn more.

A bit disappointing. The current limitations seem to be:

• the model is probabilistic by nature

• the plugins don’t use GPT-4

• it’s still a work in progress

• my prompt-fu needs work

Still, it got me thinking about the relationship between knowledge graphs and AI safety.

They could help us build more steerable AI models that are less prone to hallucination and better at generalising.

But there are risks.

Bad actors could deliberately feed misleading data to these systems.

The rest of us could unintentionally do the same thing.

The good news is that a decentralised knowledge graph has specific advantages, thanks to the incentives built into blockchain systems (a toy sketch follows the list):

• No single entity would be able to control it

• It’s distributed, immutable and secure
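
To make “immutable and secure” a bit more concrete, here’s a toy Python sketch of content-addressed, signed assertions. Real DKGs use public-key signatures and consensus rather than the HMAC shortcut below, but the tamper-detection idea is the same:

```python
import hashlib
import hmac
import json

AUTHOR_KEY = b"author-secret"  # stand-in for a real private key

def assert_fact(subject, predicate, obj):
    """Package a triple with a content address and an author signature."""
    triple = {"s": subject, "p": predicate, "o": obj}
    payload = json.dumps(triple, sort_keys=True).encode()
    return {
        "triple": triple,
        # Content address: anyone can recompute this and spot edits.
        "id": hashlib.sha256(payload).hexdigest(),
        # Provenance: ties the assertion to whoever holds the key.
        "sig": hmac.new(AUTHOR_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify(record):
    """Check both the content address and the signature."""
    payload = json.dumps(record["triple"], sort_keys=True).encode()
    expected_sig = hmac.new(AUTHOR_KEY, payload, hashlib.sha256).hexdigest()
    return (hashlib.sha256(payload).hexdigest() == record["id"]
            and hmac.compare_digest(expected_sig, record["sig"]))

record = assert_fact("TikTok", "owned_by", "ByteDance")
print(verify(record))                  # True
record["triple"]["o"] = "SomeoneElse"  # tamper with the fact...
print(verify(record))                  # ...and verification fails
```

Misleading data can still be asserted, of course — but it can’t be silently altered later, and every claim carries provenance.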

The potential for AI to augment our reality is incredible, but so are the dangers.

The potential for blockchain to monetise open systems that enable collaboration is also real.

Is anyone else getting even more obsessed with this thing we call knowledge? Or just me?
