This article highlights the importance of setting clear boundaries between the human side and the machine, or AI, side of things.
To that end, I have created a framework in the form of a simple Glossary/Taxonomy here: https://github.com/sindoc/knowyourai-framework
The official home of the framework is here: https://sindoc.github.io/website/#/page/knowyourai
I have made the glossary available for free in RDF/SKOS format, with a small extension of my own in OWL. There’s also a PDF version, as well as a CSV version, which you should be able to import easily into the Collibra platform or your Enterprise Glossary of choice. See the release notes for more information.
Automatic mapping of AI use cases to risk profiles
By mapping each AI Use Case to one of the terms in the Human-AI Relationships glossary, you will be able to automatically derive the risk profiles associated with a particular use case.
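The derivation described above can be sketched as two lookups: use case to glossary term, then term to risk profile. The term names and risk levels below are illustrative placeholders I have invented for the sketch, not the actual terms of the Human-AI Relationships glossary.

```python
# Hypothetical term-to-risk mapping; the real glossary defines its own terms.
TERM_RISK = {
    "AI as Advisor": "Low",
    "AI as Co-Creator": "Medium",
    "AI as Delegate": "High",
}

# Each AI use case in your inventory is tagged with exactly one glossary term.
USE_CASE_TERM = {
    "CV screening": "AI as Delegate",
    "Marketing copy drafting": "AI as Co-Creator",
}

def risk_profile(use_case: str) -> str:
    """Derive a use case's risk profile via its glossary term."""
    return TERM_RISK[USE_CASE_TERM[use_case]]

print(risk_profile("CV screening"))  # High
```

The point of the indirection is that risk logic lives on the glossary term, not on each use case, so reclassifying a term automatically reclassifies every use case mapped to it.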
Regulatory Compliance
I use this framework actively in my work in Data & AI Governance and I hope that you too will find it useful. Please let me know your comments below or on Hacker News.
I thought I would share my line of thinking, in case you are asking yourself the same types of questions I am: how to build a solid, future-proof roadmap in these uncertain times, which feel akin to a transition period for humanity as a whole.
Several types of emotions can arise when we try to understand the role of AI in our lives.
An individual may be completely unaware that their life is being negatively impacted by AI. What if they find out too late? And if they do find out, what resources do they have available to overcome their fears?
An individual may feel helpless in the face of AI, questioning their own intelligence due to a lack of understanding of the technology and what is really at stake.
An individual may look to harness the value of AI, while mitigating its risks. This is of course the attitude we try to nurture. We want individuals to feel empowered by AI, but also understand its risks.
The AI revolution has taken a new turn now. It has already caused a paradigm shift, the same way smartphones did, the same way telephones did, the same way Dirac and Einstein did, the same way trains did, and the list goes on, all the way back to fire, in my humble opinion.
Fire was a technology that reshaped our species by radically modifying consciousness. We still haven’t quite tamed fire, although we have come a long way. We don’t want AI to cause as much pain as fire has. Could it be as bad as fire? I would say yes, but it’s unlikely. At least, that is what I want to believe, because I believe it’s a challenge we can overcome. This too shall pass, if we all do our homework, I guess.
I’m trying to be optimistic here, even though the risks are real. I’ll start with a personal anecdote, which should also help you understand why I started writing this article in the first place.
I never gave up on philosophy and history while practicing as a technology leader in the field of Data Governance for thirteen years, and before that, as a young coder. I suppose that part of me knew this day would come. I don’t claim at all that I could have predicted the details, and I don’t think anyone could have, really. But somehow, those that knew, they knew. Take Bill Joy, for instance.
I read Bill Joy’s article when I was 18, thanks to my brother and lifetime mentor, who shared it with me because he knew I was showing interest in entering the world of Computer Science and Artificial Intelligence. I could only truly understand the content of Bill Joy’s article and its implications after taking my first Machine Learning and Artificial Intelligence courses in college. Somehow, I was still surprised when the last AI wave hit with OpenAI’s ChatGPT. I just didn’t expect it to become so widespread so fast. As technologists, we have learned the hard way not to talk about technology too much, especially if we share our lives with people who are not tech-savvy.
But now, everyone around me knows about AI. And this is exactly why I think AI has made its entrance into the real world, and no one can do anything to undo it. There’s no going back from this. It’s out there. All we can do now is try to understand it and adapt accordingly. See it as something that has the potential to disrupt your life in ways you may or may not foresee.
The best example is the job hunt. When you apply for a job, chances are your CV is initially rated and ranked by AI algorithms. If you don’t adapt to that, you might never be selected for an interview, or at least you greatly reduce your chances of getting one. There are many other examples, some scarier and some less scary, but you get the point. AI is here. Any sane individual, especially those raising children in this world, should pay close attention to what’s happening and, most importantly, to how to take AI into account in day-to-day decision-making.
One way to look at the last wave of the AI revolution in this Information Age, is to see it as a new technology that has commoditised intelligence. Every person now walks around with direct access to generalists in just about any field. This is huge.
I can imagine that people have different reactions to something like AI. But it is no longer acceptable not to understand it. I truly believe that everyone should build a good, solid knowledge of AI and its implications for their lives.
As someone who has been in this field for a long time, I now feel it’s my responsibility to share my own fears, and my strategies for overcoming them, with my fellow humans, in the hope that together we can overcome the challenges ahead in this first phase of the AI revolution.
As we embark on this journey through the rapidly evolving world of artificial intelligence, one question stands at the forefront: how do we navigate the delicate balance between the unique capabilities of humans and the growing power of machine-driven solutions? The line between the two is becoming increasingly blurred, with AI systems now tackling tasks once thought to be the exclusive domain of human intellect. As we venture deeper into this new frontier, it becomes more important than ever to clearly define what remains inherently human—and what can be entrusted to machines.
In this era of AI transition, understanding the boundaries between human intuition, creativity, and emotional intelligence, versus the efficiency, precision, and data processing power of machines, is essential. This is more than an intellectual challenge; it’s the foundation of how we shape the future. By distinguishing these domains, we ensure that human values and agency remain central to technological progress, guiding us toward a future where both humans and machines can thrive in harmony.
When creating a clear delineation between the “people” side and the “AI” side of things, it’s helpful to consider their roles and responsibilities in various contexts. Here’s a breakdown:
1. Human (People) Side:
• Creativity and Emotional Intelligence: Humans excel in complex, creative tasks that require emotional depth, empathy, and cultural sensitivity.
• Strategic Decision-Making: Humans are responsible for making high-level decisions, formulating long-term goals, and adapting to unpredictable environments.
• Ethics and Values: Human oversight ensures that AI operates within ethical boundaries, considering moral and societal impacts.
• Complex Problem Solving: People are better at abstract reasoning, innovation, and resolving ambiguity in ways machines can’t replicate.
• Personal Interaction: Humans manage social relationships, customer service, and negotiation processes that require human touch and understanding.
2. AI (Artificial Intelligence) Side:
• Data Analysis and Pattern Recognition: AI excels at processing vast amounts of data, identifying patterns and trends, and making predictions based on that analysis.
• Task Automation: AI can automate repetitive tasks efficiently, from manufacturing to data entry, allowing humans to focus on higher-order tasks.
• 24/7 Availability and Scalability: AI can operate continuously without fatigue, handling large-scale processes and tasks at speed and volume beyond human capacity.
• Precision and Consistency: AI performs tasks with high accuracy and without the variability introduced by human fatigue, error, or bias.
• Support in Decision-Making: AI provides data-driven insights, helping humans make informed decisions but not making the final judgment in complex moral or strategic decisions.
Clear Delineation:
Humans remain at the center of decision-making, creativity, and ethical oversight, while AI is a powerful tool that augments human capability in data-driven, repetitive, or scalable tasks. Both work in tandem, but the distinction lies in AI’s role as an enhancer and not a replacement for human insight, judgment, or values.
Contrary to popular belief, there’s no such thing as true AI (yet). And I hope that day never comes. There’s no point in going into the details of what that world would look like and how we could end up there. I refuse to entertain that idea in this article. If I have to, I will in a separate piece. But of course, that piece would be fiction, and this article is non-fiction.
If the end goal is to come out of this transition period stronger, then we will have to do it together.
By no means do I mean that humans are perfect. I understand that different individuals hold varying levels of trust in humanity, depending on many factors which we won’t get into here.
Nevertheless, against the threats posed by AI, we’ve only got each other to rely on, so I see no other way than to stand together. “Together we stand, divided we fall.” So said Roger Waters of Pink Floyd.
“This too, shall pass.” We are in a transition period, and at the end of this period, we will have mastered AI. I, for one, don’t want to live in a society that is ruled by machines.
You can start by mapping your AI Use Cases to the terms proposed in this simple framework. This way, your AI use cases will have a risk profile associated with them.

