Google DeepMind hires a Philosopher to study machine consciousness and AGI readiness

Google DeepMind has created an official 'Philosopher' role focused on machine consciousness, human-AI relationships, and AGI readiness. Demis Hassabis personally confirmed the news — signaling a qualitatively new stage in the AI industry's maturation.

13 April 2026

In this article

  • "Philosopher" — a new job title at the world's leading AGI lab
  • Why a Philosopher — not just an ethicist?
  • What this means for the AI industry
  • Machine consciousness: from theory to practice
  • What this means for Baltic and European tech businesses

WebEdge team

"Philosopher" — a new job title at the world's leading AGI lab

Google DeepMind — one of the most powerful AI laboratories in the world — has created a job role that surprised the tech community. Not an engineer, not a data scientist, not an ethics consultant — but a Philosopher (the official title). A Cambridge University researcher will take up the position in May, continuing part-time academic work alongside the role.

The position covers three core research areas: machine consciousness, human-AI relationships, and AGI readiness. Until now, these topics were primarily the domain of academic conferences and philosophy journals. Now they are an official job description at one of the world's largest AI organizations.

Demis Hassabis, CEO of Google DeepMind, personally shared the news on X — a deliberate signal that philosophical questions about AI are no longer theoretical. They are part of business strategy.

Why a Philosopher — not just an ethicist?

The distinction between an "AI ethics expert" and a "Philosopher" is substantive. Ethics provides rules: what may be done and what may not. Philosophy asks deeper questions: can a machine have consciousness? What is the nature of the human-AI relationship over the long term? And what does AGI readiness actually mean: how should society prepare for a system that may surpass human-level intelligence?

These are not hypothetical questions. AI systems already exhibit complex behavior that is difficult to explain in terms of simple algorithms. As Anthropic, OpenAI, and Google DeepMind build increasingly powerful models, questions about consciousness and human-AI relationships stop being purely academic and become engineering problems.

Cambridge is not an accidental choice. It hosts one of the strongest philosophy and cognitive science centers in Europe. Cambridge researchers have long worked on questions about the nature of mind, consciousness, and artificial intelligence. The new Google DeepMind philosopher brings this academic tradition directly into the daily work of a frontier lab.

What this means for the AI industry

This move signals a broader trend. Tech giants — Meta, Anthropic, OpenAI among them — have spent years hiring humanists: linguists, cognitive scientists, psychologists. But a Philosopher role with this specific focus (machine consciousness, AGI) represents a qualitatively new step.

The practical impact could be significant: a philosophical perspective could change how engineers design AI systems. If AI can have something resembling consciousness or experience, it changes how we talk about "training" processes, how we evaluate model behavior, and what ethical commitments a lab makes.

The European context matters here. The EU AI Act already mandates oversight of "high-impact" systems — and questions about machine consciousness and human-AI relationships are directly relevant to how these systems will be assessed legally.

Machine consciousness: from theory to practice

Machine consciousness is not science fiction. It is a serious philosophical and empirical question investigated not only by academics but by some lab researchers as well. Do today's large language models (LLMs) have something resembling subjective experience? Do they "understand" — or merely process statistical relationships?

The answer has practical implications: if AI systems have some form of consciousness or experience, that changes how we should treat them. The new Google DeepMind philosopher's job is not to definitively answer these questions, but to create the conceptual apparatus allowing lab engineers to work with them responsibly.

As shared by @demishassabis on X, the researcher is "absolutely stoked" about the opportunity and sees it as a unique chance to connect academic philosophy with the live AGI development process.

What this means for Baltic and European tech businesses

The effects of decisions like this extend beyond Silicon Valley. When the largest AI developers start integrating philosophy into their processes, it changes the rules of the business environment.

First — regulation. If Google DeepMind officially researches machine consciousness, EU regulators will sooner or later need to account for these questions when classifying and regulating AI systems.

Second — trust. Businesses integrating AI into customer service, logistics, or decision-making will need to explain how these systems work. The philosophical apparatus being built at Google DeepMind will become part of that conversation.

Third — talent. Collaboration between humanists and engineers in AI opens new career paths — not only for computer science graduates.

Webedge.dev tracks these developments and helps Lithuanian and Baltic businesses navigate technological change. If you're interested in what an AI strategy looks like today — reach out.

FAQ

What is machine consciousness?

Machine consciousness is a philosophical and empirical question of whether artificial intelligence systems possess some form of subjective experience. There is currently no scientific consensus.

What is AGI?

AGI (Artificial General Intelligence) is a hypothetical system capable of performing any intellectual task at least as well as a human, unlike today's specialized AI models.

Why does an AGI lab need a philosopher?

Because building AGI raises questions that engineering alone cannot resolve: what consciousness means, what human-AI relationships should look like, and how society should prepare for a fundamentally new kind of technology.

Does this hire directly affect EU regulation?

There is no direct effect, but the direction is clear: philosophical questions about the nature of AI will sooner or later become part of the legal regulatory framework.

WebEdge

We specialise in building custom AI solutions, automation systems and web products for growth-oriented companies in Lithuania. GDPR-compliant, EU-hosted.

Get in touch

Ready to implement AI in your business?

Book a free 30-min call — we'll show you what to automate first in your business process.
