The four types of artificial intelligence per Arend Hintze (2016) are: (1) Reactive Machines like Deep Blue with no memory, (2) Limited Memory AI such as today's LLMs and self-driving cars, (3) Theory of Mind AI with social awareness (still a research topic), and (4) Self-Aware AI (purely theoretical).
Where the 4-type AI taxonomy comes from
The four-type classification of artificial intelligence traces back to a 2016 article in the online science magazine The Conversation by AI researcher Arend Hintze (then at Michigan State University). Rather than grouping AI systems by application domain, Hintze classified them by cognitive depth: what a system can know, remember, and understand, and whether it can reflect on itself.
This functional taxonomy has since become the de facto standard in popular and enterprise AI literature. It complements the widely cited capability split into narrow AI, general AI, and superintelligence with a sharper gradation of what actually runs in enterprises today versus what remains science fiction.
The key insight for decision-makers: the four types are not marketing categories, they are an engineering ordering. Virtually every AI system deployed in production today lives in Type 1 or Type 2. Types 3 and 4 are research hypotheses, not products.
Type 1: Reactive Machines
Reactive machines are the most elementary form of AI. They respond to a given input with an output determined by that input alone — no memory, no context from prior interactions, no runtime learning.
Textbook example: Deep Blue, IBM's chess computer that defeated world champion Garry Kasparov in 1997. For any given board position, Deep Blue evaluated millions of possible moves and chose the one its evaluation function rated best — without remembering previous games or internalizing Kasparov's style.
In enterprise contexts, Type 1 shows up in rule-based expert systems, simple recommendation algorithms, static FAQ chatbots with no history, classical anti-spam filters, and image classifiers that score every image independently. Reactive AI is robust, explainable, and regulatorily straightforward — it is also the type most commonly sold as "AI" when it is technically rule-based automation.
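A minimal sketch helps make the "no memory, no runtime learning" property tangible. The rules, weights, and threshold below are purely illustrative assumptions; the point is that every input is scored in isolation, just as Deep Blue evaluated each board position on its own.

```python
# Minimal sketch of a Type 1 (reactive) system: a rule-based spam filter.
# Every message is scored independently: there is no stored state, no
# learning at runtime, and no memory of previous messages.
# All rule phrases, weights, and the threshold are illustrative assumptions.

SPAM_RULES = {
    "free money": 3.0,
    "click here": 2.0,
    "unsubscribe": 0.5,
}
SPAM_THRESHOLD = 2.5


def is_spam(message: str) -> bool:
    """Score one message against fixed rules; the same input always
    yields the same output, regardless of what was seen before."""
    text = message.lower()
    score = sum(weight for phrase, weight in SPAM_RULES.items() if phrase in text)
    return score >= SPAM_THRESHOLD


print(is_spam("Click here for FREE MONEY"))   # True
print(is_spam("Agenda for tomorrow's call"))  # False
```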
Type 2: Limited Memory AI
Limited-memory systems can take recent historical data into account and base their responses on it. They learn from historical training data and maintain context during operation — but that memory is technically bounded: it ends at the training-data cutoff or with the context window of an active session.
The defining example today: large language models (LLMs) such as GPT-4, Claude, and Gemini. An LLM "knows" what was in its training corpus and can hold tens of thousands of tokens of context within a single session — but forgets that context as soon as the session ends.
Other prominent examples: autonomous vehicles (interpreting the last few seconds of sensor data to make the next driving decision), bank fraud-detection systems, recommender engines (Netflix, Amazon), predictive-maintenance models in manufacturing, computer-vision systems in medical imaging, and essentially every generative AI application.
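To make "bounded memory" concrete, here is a minimal, hypothetical sketch of a session context in Python. The class name, window size, and placeholder reply are assumptions for illustration, not any vendor's API; the mechanism it shows is that older turns fall out of the window and the whole context vanishes when the session ends.

```python
# Minimal sketch of Type 2 "limited memory": a chat session that keeps only
# the most recent N turns as context. Names and sizes are illustrative.
from collections import deque

CONTEXT_WINDOW_TURNS = 8  # bounded memory: older turns are silently dropped


class ChatSession:
    def __init__(self) -> None:
        # deque(maxlen=...) discards the oldest entries automatically,
        # the technical analogue of a fixed context window.
        self.context = deque(maxlen=CONTEXT_WINDOW_TURNS)

    def send(self, user_message: str) -> str:
        self.context.append(("user", user_message))
        reply = f"[model reply based on {len(self.context)} turns of context]"
        self.context.append(("assistant", reply))
        return reply

    def end(self) -> None:
        # When the session ends, the context is gone. Nothing is learned
        # or remembered across sessions; that would require retraining or
        # an external memory layer.
        self.context.clear()
```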
When enterprises talk about "AI projects" today, they mean Type 2 in roughly 90 percent of cases. It is the only tier with genuine learning capability currently in productive commercial use — and therefore the focus of essentially every AI regulation.
Type 3: Theory of Mind AI (still research)
Theory of Mind (ToM) is a term from developmental psychology: the ability to model that other beings have their own beliefs, desires, and emotions, distinct from one's own. Children typically develop this capacity between ages three and five. A ToM AI would accordingly be a system that does more than process input — it would understand what its counterpart believes, feels, or intends, and adjust its own behavior to match.
Intellectual honesty matters here: modern LLMs often appear empathetic in conversation, seem to read mood, and adapt tone. This is not a genuine mental model of the interlocutor, though — it is statistical pattern-matching learned from training data. Recent research (e.g., Kosinski 2023 at Stanford GSB) suggests LLMs now perform comparably to young children on classic ToM tests such as false-belief tasks. Whether that represents real theory of mind or a linguistic emulation artefact remains contested in the field.
For enterprise use: Theory of Mind AI is not a product you can buy in 2026. Anyone selling "empathetic AI" almost always means an LLM with good prompt engineering — Type 2 dressed up as conversation.
Type 4: Self-Aware AI (purely theoretical)
The fourth and highest tier of the taxonomy describes systems that not only model the inner life of others but also have a model of themselves — consciousness, self-perception, subjective experience. A self-aware AI would not only be intelligent in the sense of problem-solving capability; it would be, in the philosophical sense, a subject.
No system of this tier has been realized, experimentally or theoretically, in scientifically robust form. It belongs to philosophical debate (what is consciousness? — the "hard problem" per David Chalmers), not to engineering. The majority of the AI research community considers self-aware machines a decade- or century-scale project, if reachable at all.
Type 4 is nonetheless regulatorily interesting: the discussion about rights, accountability, and the moral status of artificial subjects is already happening, including in EU ethics committees. For operational enterprise decisions, however, this is a horizon topic, not a current budget line.
Alternative taxonomy: ANI, AGI, and ASI
Alongside Hintze's four-type classification, a second frequently cited taxonomy groups AI by capability breadth:
- Artificial Narrow Intelligence (ANI, "weak AI"): systems that solve a narrowly defined task at human or super-human level — chess, image classification, language translation. Roughly covers Types 1 and 2.
- Artificial General Intelligence (AGI, "strong AI"): systems that master arbitrary intellectual tasks at human level, learn across domains, and transfer knowledge. Overlaps with Type 3.
- Artificial Super Intelligence (ASI): systems that substantially exceed human cognitive performance in every domain. Overlaps with Type 4.
The two taxonomies are not identical but complementary: Hintze classifies by cognitive depth (what the system does internally), ANI/AGI/ASI classifies by task breadth (how many domains it covers). For classifying concrete enterprise systems, Hintze's taxonomy is usually more practical.
How the 4 types of AI map to the EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) regulates AI not by cognitive depth but by the risk of the specific deployment. It distinguishes four risk classes: unacceptable risk (banned), high risk (strictly regulated), limited risk (transparency obligations), and minimal risk (no specific obligations). Hintze's types and the AI Act's risk classes overlap roughly as follows:
- Type 1 Reactive Machines typically fall into minimal or limited risk — unless deployed in high-risk domains (credit scoring, criminal justice, biometric identification).
- Type 2 Limited Memory AI spans the entire risk spectrum — from harmless recommender to high-risk credit scoring, remote biometric identification, or AI in critical infrastructure. This is where the operational focus of AI Act compliance sits.
- Type 3 Theory of Mind is not separately addressed in the AI Act because no products exist. Some researchers argue certain general-purpose AI (GPAI) models raise edge cases — and GPAI has been separately regulated since August 2025.
- Type 4 Self-Aware AI is not regulated in the AI Act because it does not exist. EU ethics guidelines address it philosophically, without legal consequences.
Where today's enterprise AI actually sits
In practice, over 90 percent of AI systems in productive enterprise use operate in Types 1 and 2. A few typical classifications:
- Credit scoring in banking: Type 2, Limited Memory (learns from historical default patterns) — high risk under AI Act.
- LLM-backed customer-service chatbot: Type 2 — limited risk (transparency obligation to disclose that it is AI).
- Spam filtering: Type 1 or Type 2 depending on implementation — minimal risk.
- Predictive maintenance in manufacturing: Type 2 — usually minimal risk.
- Robo-advisory in wealth management: Type 2 — limited to high risk.
- AI-assisted diagnostic support in medicine: Type 2 — high risk under both AI Act and MDR.
This reality check matters when AI budgets are negotiated. Investments in supposedly "AGI-ready" infrastructure, premised on imminent Type 3 or 4 systems, are not justified today. Investments in governance, data quality, and limited-memory applications pay back measurably.
What this means for your AI governance
Understanding the four types lets you make pragmatic decisions rather than react to sci-fi narratives. Three concrete governance consequences:
- Inventory your AI systems by type and risk. The overwhelming majority will be Type 2 — which sharpens compliance work: training-data quality, model monitoring, human oversight, and traceability. A minimal sketch of such an inventory record follows this list.
- Separate product marketing from product reality. If a vendor sells "cognitive" or "conscious" AI, that is still marketing in 2026. Demand technical documentation that discloses the actual system class.
- Prepare for staged regulation. The AI Act today primarily regulates Type 1 and Type 2 systems. Once GPAI capabilities trend toward Type 3, regulation will follow — not the other way around.
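The inventory mentioned in the first point can start very simply. The sketch below assumes hypothetical field names, risk assessments, and obligation lists purely for illustration; it is not legal advice and not a complete AI Act mapping.

```python
# Hypothetical sketch of an AI-system inventory: each entry records the
# Hintze type, the assessed EU AI Act risk class, and the resulting
# compliance obligations. Field names and example entries are illustrative.
from dataclasses import dataclass, field
from enum import Enum


class HintzeType(Enum):
    REACTIVE = 1
    LIMITED_MEMORY = 2


class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class AISystemRecord:
    name: str
    hintze_type: HintzeType
    risk_class: RiskClass
    obligations: list[str] = field(default_factory=list)


inventory = [
    AISystemRecord("Credit scoring model", HintzeType.LIMITED_MEMORY, RiskClass.HIGH,
                   ["training-data governance", "human oversight", "logging/traceability"]),
    AISystemRecord("Customer-service chatbot", HintzeType.LIMITED_MEMORY, RiskClass.LIMITED,
                   ["disclose AI interaction to users"]),
    AISystemRecord("Rule-based spam filter", HintzeType.REACTIVE, RiskClass.MINIMAL, []),
]

# Governance view: which systems carry high-risk obligations?
high_risk = [s.name for s in inventory if s.risk_class is RiskClass.HIGH]
print(high_risk)  # ['Credit scoring model']
```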
This intersection — technology assessment, risk classification, regulatory execution — is exactly where professional AI consulting operates: not to sell hype, but to classify systems realistically and operate them cleanly.
Frequently asked questions about the 4 types of AI
How many types of AI are there?
The most widely cited classification, per Arend Hintze (2016), distinguishes four types: Reactive Machines, Limited Memory AI, Theory of Mind AI, and Self-Aware AI. Alongside it, the capability-breadth split into ANI (narrow), AGI (general), and ASI (super) is also common. The two taxonomies are complementary rather than mutually exclusive.
What are the 4 types of AI in simple terms?
Type 1 only reacts to input (Deep Blue). Type 2 uses a bounded memory of the recent past (ChatGPT, self-driving cars). Type 3 would understand what others think and feel (no existing product). Type 4 would have a model of itself — consciousness (purely theoretical).
What type of AI is ChatGPT?
ChatGPT is a Limited Memory system — Type 2. The model was trained on a large corpus and holds tens of thousands of tokens of context within an active session, but forgets that context when the session ends. ChatGPT does not have genuine theory of mind or self-awareness, even when the conversation occasionally suggests otherwise.
What is the difference between narrow and general AI (ANI vs AGI)?
Narrow AI (ANI) solves narrowly defined tasks at high skill — image classification, translation, chess. General AI (AGI) would be a system with human-like, cross-domain intelligence capable of learning and transferring across arbitrary tasks. Only narrow AI exists today; general AI remains a research hypothesis.
Which types of AI actually exist today?
Only Type 1 (reactive machines) and Type 2 (limited memory AI) exist as productive systems. Type 3 (theory of mind) is an active research field without mature applications. Type 4 (self-aware AI) exists neither experimentally nor theoretically in a scientifically robust, engineering-ready form.
How do the 4 types of AI map to the EU AI Act?
The AI Act regulates by deployment risk, not by cognitive depth. Types 1 and 2 span the entire risk spectrum — the operational focus of AI Act compliance therefore lies almost entirely on Limited Memory systems. Types 3 and 4 are not regulated in the AI Act because no such products exist.
What is the difference between Limited Memory and Theory of Mind AI?
Limited Memory AI uses past data to make better predictions — it models patterns. Theory of Mind AI would additionally understand that other beings have their own beliefs and intentions, and align its behavior accordingly. The distinction is: pattern recognition in data versus modeling of mental states.
Will self-aware AI ever exist?
That is scientifically open. The majority of the AI research community considers self-aware machines a decade- or century-scale project, if reachable at all. The bottleneck is less compute than unresolved philosophical questions about the nature of consciousness. For enterprise strategy, Type 4 is not a planning horizon.