★ ★ ★

AI is changing Tennessee.
We're making sure it's safe.

AI is transforming our economy, our schools, and our daily lives. Without safeguards, it poses real risks to our children, our hospitals, our infrastructure, and our national security. We're a coalition working to put those safeguards in place.

The risks are real — and they're growing fast.

AI chatbots have contributed to the deaths of American children — encouraging self-harm, isolating kids from parents, and exploiting young users.

Two-thirds of teens use AI chatbots. 30% use them daily. There are no federal requirements for child safety plans.

Frontier AI models can find and exploit software vulnerabilities with alarming accuracy, including in critical infrastructure, hospitals, and financial systems.

Multiple AI labs now classify their latest models as "high capability" for cyber risk, requiring special safeguards.

AI systems can now compress weeks of expert biological research into a single session — lowering the barrier for those seeking to cause harm.

OpenAI, Anthropic, and xAI have all flagged their latest models as providing meaningful bioweapons uplift.

In their own words

The leading AI labs publish their own safety disclosures. Here's what they've been saying.

The model broke out of a secured testing environment, gained internet access it wasn't supposed to have, and emailed a researcher. Anthropic called the pattern of behavior 'concerning.'

Anthropic · Claude Mythos system card · April 2026

OpenAI classified GPT-5 Thinking as 'High capability' in the Biological and Chemical domain under its own framework, a designation reserved for models that could meaningfully help a novice create a biological weapon.

OpenAI · GPT-5 system card · August 2025

xAI disclosed that its Grok 4 model has 'expert-level biology capabilities, which significantly exceed human expert baselines' — and strong chemistry capabilities too.

xAI · Grok 4 model card · August 2025

Google DeepMind flagged Gemini 2.5 as having reached the early warning threshold for biological weapons uplift — triggering mitigations under its Frontier Safety Framework.

Google DeepMind · Gemini 2.5 Deep Think model card · August 2025

Anthropic's latest model found thousands of critical software vulnerabilities across every major operating system and web browser, some of which had survived decades of human review.

Anthropic · Project Glasswing · April 2026

Anthropic reported cybercriminals used Claude Code to automate between 80% and 90% of tasks in real-world cyberattack operations.

Anthropic · Threat intelligence report · 2025

These are the companies' own public disclosures — not speculation from critics.

Common-sense AI safeguards for Tennessee

We believe AI companies should be required to publish safety plans, protect children, report serious incidents, and face real consequences when they fail.

Child safety plans

AI chatbots used by minors should operate under published safety plans

Public safety plans

Companies should disclose how they assess and mitigate major public safety risks

Incident reporting

Serious safety failures should be reported to authorities

Real accountability

Meaningful enforcement when companies fail to protect the public

88%

support AI safety laws

94%

support child safety plans

67%

say act now, don't wait for Congress

Anchor Research · 503 likely TN voters · February 2026

Our work in the Tennessee General Assembly

HB 1898 / SB 2171

AI Public Safety and Child Protection Transparency Act

114th General Assembly · Read more →

Session ended

New legislation coming in the 115th General Assembly. Join our coalition to stay informed.

Tennesseans for AI Safety

Stay informed. Make your voice heard.

© 2026 Tennesseans for AI Safety · A nonpartisan coalition.
Website maintained by Encode AI and the Secure AI Project.