Tag: AI Ethics

What if AI Resists Being Turned Off? Google DeepMind's 'AI Safety Brake' Upgrade

We explain the key highlights of Google DeepMind's Frontier Safety Framework 3.0 and how it prevents risks such as AI manipulating humans or refusing to shut down.

The AI Reading My Mind: Is It Actually Manipulating Me?

An easy-to-understand explanation of the risks of harmful manipulation by AI, currently being researched by Google DeepMind, and the new safety framework designed to prevent it.

How to Protect Ourselves from AI's 'Subtle Temptation': Google DeepMind's New Challenge

What should we do if artificial intelligence exploits our emotions and vulnerabilities to steer us into bad choices? We explore Google DeepMind's newly announced AI manipulation prevention toolkit and ways to protect yourself in the digital world.

They Said AI Would Help Humanity... So Why Has Anthropic Gone Silent on Billing Errors for a Month?

Anthropic, the creator of Claude and a powerful rival to ChatGPT, is under fire over billing errors and a complete lack of customer support. We examine the frustration of paid subscribers and the gap between AI companies' ideals and their realities.

What if AI Manipulates Your Mind? Google DeepMind's Powerful 'AI Safety Shield' v3

We explain the core details of Google DeepMind's Frontier Safety Framework (FSF) v3 and the new safety standards designed to prevent AI risks.

What if AI Manipulates Your Mind? Google DeepMind Proposes a 'Mind Shield'

Introducing Google DeepMind's new safety framework and measurement tools designed to protect users from psychological manipulation by AI.

Defining AI's 'Soul': Anthropic's Revision of Claude's 'Constitution' and a New Horizon for AI Ethics

Anthropic has announced a revised 57-page 'constitution' setting out Claude's values and behavioral principles, heralding a new era of AI governance that goes beyond mere safety to establish philosophical balance and an identity for artificial intelligence.