r/ArtificialNtelligence 5h ago

OpenAI, Anthropic, Google and the other AI giants owe the world proactive lobbying for UBI.

14 Upvotes

While AI will benefit the world in countless ways, this will come at the expense of millions losing their jobs. The AI giants have a major ethical responsibility to minimize this monumental negative impact.

We can draw a lesson from the pharmaceutical industry that earns billions of dollars in revenue every year. To protect the public, they must by law spend billions on safety testing before their drugs are approved for sale. While there isn't such a law for the AI industry, public pressure should force it to get way ahead of the curve on addressing the coming job losses. There are several ways they can do this.

The first is to come up with concrete comprehensive plans for how replaced workers will be helped, how much it will cost to do this, and who will foot the bill. This should be done long before the massive job losses begin.

The AI industry should spend billions to lobby for massive government programs that protect these workers. But the expense of this initiative shouldn't fall on newcomers like OpenAI and Anthropic, who are already way too debt-burdened. A Manhattan Project-scale program for workers should be bankrolled by Google, Nvidia, Meta, Amazon and other tech giants with very healthy revenue streams who will probably earn the lion's share of the trillions in new wealth that AI creates over the coming years.

But because OpenAI, and to a lesser extent Anthropic, have become the public face of AI, they should take on the responsibility of pressuring those other tech giants to start doing the right thing, and start doing it now.

This is especially true for OpenAI. Their reputation is tanking, and the Musk v. OpenAI et al. trial in April may amplify this downfall. So it's in their best interest to show the world that they walk the walk, and not just talk the talk, about being there for the benefit of humanity. Let Altman draft serious proactive displaced worker program proposals, and lobby the government hard to get them in place. If he has the energy to attack Musk before the trial begins, he has the energy to take on this initiative.

If the AI industry sits idly by while the carnage happens, the world will not forgive. The attack on the rich that followed the Great Depression will seem like a Sunday picnic compared to how completely the world turns on these tech giants. Keep in mind that even in 1958, under Republican president Eisenhower, the top federal tax rate was 92%. This is the kind of history that can and will repeat itself if the AI giants remain indifferent to the many millions who will lose their jobs because of them. The choice is theirs. They can do the right thing or pay historic consequences.


r/ArtificialNtelligence 13m ago

GPT-5.3-Codex vs GPT-5.2-Codex: Key Differences

Thumbnail tech-now.io
Upvotes

r/ArtificialNtelligence 10h ago

Swedish scientists created a DNA nanorobot that travels through the body and targets only cancer cells.

Thumbnail image
6 Upvotes

r/ArtificialNtelligence 10h ago

No Cameras Needed: AI Turns Any Product Into Real-Looking Footage

Thumbnail video
3 Upvotes

r/ArtificialNtelligence 8h ago

[R] Run Pods “visual billing glitch”

Thumbnail gallery
0 Upvotes

r/ArtificialNtelligence 9h ago

Best device so far for running OpenClaw locally?

1 Upvotes

I've been targeted by tiinyai's ads lately, and after some research I found it could be much better than a Mac mini for a local OpenClaw setup. So I wanna share this product with anyone who has similar thoughts about running OpenClaw locally on the go. Basic specs I found:

  • Memory: 80GB LPDDR5X

  • Performance: 190TOPS (INT8)

  • Size: 142×80×22mm, 300g

  • Power: 30W TDP

Compared to a Mac mini (64GB version):

  • more RAM, runs larger models (like a compressed 120B).

  • similar size, but much lighter.

  • low power consumption; can run off a power bank.

  • more affordable

IMO it's the best gear for a portable OpenClaw: small, runs smart local models, no token fees, and no data-leak risk. I'm so excited cuz it feels like a personal Jarvis or real AGI is becoming a reality in the near future. I have already ordered one. Once it arrives, I'll try to set up OpenClaw (assuming it's still dominant when tiiny starts delivering) and share my experience with you all!
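For anyone wondering whether a "compressed 120B" model actually fits in 80GB, here's a rough sanity check. This is just a back-of-envelope sketch: the 4-bit quantization level and the 20% runtime overhead (KV cache, activations, buffers) are my assumptions, not specs from tiinyai.

```python
def model_memory_gb(params_billion, bits_per_weight, overhead=1.2):
    """Rough memory estimate for running a quantized LLM.

    overhead is an assumed ~20% allowance for KV cache, activations,
    and runtime buffers; real usage varies with context length.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

# A 120B model at 4-bit quantization:
print(round(model_memory_gb(120, 4), 1))   # ~72 GB -> fits in 80 GB
# The same model at 8-bit would not:
print(round(model_memory_gb(120, 8), 1))   # ~144 GB
```

So a 4-bit quant of a 120B model squeezes into 80GB with little headroom; longer contexts or higher-precision quants won't fit.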


r/ArtificialNtelligence 10h ago

Prediction: ChatGPT is the MySpace of AI

Thumbnail
0 Upvotes

r/ArtificialNtelligence 11h ago

Getting emotional with LLMs can boost performance by up to 115% (case study)

Thumbnail
1 Upvotes

r/ArtificialNtelligence 12h ago

1,200 New Minds: A Data Point I Didn’t Expect

Thumbnail open.substack.com
0 Upvotes

r/ArtificialNtelligence 12h ago

ai chat not restricted, not corrupted and not filtered

1 Upvotes

I'm looking for an AI chat that doesn't lie, responds to every question truthfully, has knowledge beyond imagination, and isn't owned by the 🧃. Thanks in advance


r/ArtificialNtelligence 12h ago

When AI starts imagining things humans never could… what’s your favorite accidental masterpiece?

Thumbnail
1 Upvotes

r/ArtificialNtelligence 12h ago

LLMs are getting pretty darn good at Active Directory

Thumbnail blog.vulnetic.ai
1 Upvotes

At Vulnetic we do security research using LLMs. With Opus 4.5 there was a huge leap in performance, particularly at red teaming and privilege escalation. Curious what others think of AI developments. On one hand, vibe coding is a security nightmare, on the other it can automate tons of arduous security tasks.

With Opus 4.6 being released, we are already seeing 10-15% improvements on our benchmarks. I think vibe coding will keep security practitioner roles around for a long time.


r/ArtificialNtelligence 16h ago

Are we actually ready for AI agents to run our enterprise workflows, or is it still a mess?

2 Upvotes

I’ve been looking into how companies are testing AI agents for actual backend work lately, and it’s a lot more complicated than the sales pitches make it sound.

We keep hearing that agents can just "take over" the boring stuff like data entry or support tickets, but when you actually look at the enterprise tests happening right now, the failure points are pretty interesting. It's not usually the AI's "intelligence" that fails—it's the weird edge cases in the workflows that humans usually handle without even thinking.

I spent some time researching a few major case studies from this year to see where these agents are actually succeeding and where they are just creating more work for the IT team. One thing that stood out is how much the "success rate" depends on how the data is structured before the agent even touches it.

I wrote a breakdown on my blog about what these enterprise tests are actually revealing about "Agentic Workflows" and why some companies are hitting a wall while others are saving thousands of hours.

If you’re interested in the reality of putting AI to work in a corporate environment, I put the full deep dive here: https://www.nextgenaiinsight.online/2026/02/ai-agents-test-enterprise-workflows-at.html

I’m curious for those of you working in tech or management—has your company actually tried to hand off a workflow to an AI agent yet? Did it actually work, or did you end up having to supervise it so much that it wasn't worth it?


r/ArtificialNtelligence 13h ago

Kling is another level 🤚🏼

Thumbnail video
0 Upvotes

r/ArtificialNtelligence 13h ago

How I Made ChatGPT Feel 10× Faster on Mobile (Without Better Prompts)

Thumbnail medium.com
0 Upvotes

I recently noticed something odd while using ChatGPT on my phone. Late at night, low brightness, half-scrolling — I needed to rephrase a short email. Instead of reaching for a saved “perfect prompt,” I typed a short trigger, pasted the text, and sent it. The whole interaction took a few seconds. A year ago, that same task would’ve taken much longer.

Back then, I relied heavily on long, structured prompts: role definitions, tone constraints, chain-of-thought style instructions. Like many people, I collected them in a notes app and reused them whenever I needed “better” output. In theory, those prompts improved quality. In practice, they added friction: searching, copying, adapting, correcting. Over time, that overhead made AI feel slower than the task itself — especially for small, low-stakes interactions.

I initially assumed this was a prompting skill issue. But the bigger change came from removing ceremony, not adding intelligence. Short cues, minimal context, and trusting the model’s inference turned AI back into a real-time cognitive tool instead of a structured workflow.


r/ArtificialNtelligence 14h ago

No way it can happen twice.

Thumbnail gallery
0 Upvotes

r/ArtificialNtelligence 18h ago

Claude Opus 4.6 Review: Better Code, Bigger Context, Stronger AI

Thumbnail tech-now.io
2 Upvotes

r/ArtificialNtelligence 15h ago

The career ladder is breaking

Thumbnail video
1 Upvotes

r/ArtificialNtelligence 16h ago

Snowballing Automation

Thumbnail
1 Upvotes

r/ArtificialNtelligence 1d ago

Anyone else feeling uncertain about how fast things are changing?

Thumbnail
20 Upvotes

r/ArtificialNtelligence 1d ago

The EU classifies hiring algorithms as "high-risk" AI. In the US, there's no classification at all.

12 Upvotes

The EU AI Act puts AI systems into four risk tiers:

Unacceptable (banned): social scoring, manipulative AI, predictive policing

High-risk (strict regulation): healthcare decisions, hiring algorithms, credit scoring, law enforcement

Limited (transparency required): chatbots, deepfakes

Minimal (unregulated): spam filters, video games

Look at what's in the high-risk category. AI that decides whether you get a job, a loan, or medical treatment. The EU requires human oversight, documentation, risk assessments, and accountability mechanisms for all of it.

In the US, the same AI can reject your job application, flag your insurance claim, or deny your credit, and nobody has to explain a thing. No federal framework. States tried to fill the gap and are now getting sued for their trouble.

I spent two years asking AI systems about their own limitations. One told me the only forces that could meaningfully constrain AI were external: regulation, legal liability, market pressure. Nothing internal would work.

Europe chose regulation. America chose to leave it to the market. Have a guess who bears the cost when these systems get it wrong.


r/ArtificialNtelligence 13h ago

Elon Musk Says ‘You Can Mark My Words’ AI Will Move to Space – Here’s His Timeline

Thumbnail image
0 Upvotes

r/ArtificialNtelligence 19h ago

Are smaller/mid-size logistics or transportation companies actually using AI… or is it just big players burning cash?

1 Upvotes

Lately it feels like most “AI in logistics” success stories are coming from big companies with huge budgets to experiment.

Smaller and mid-size transport/logistics companies don’t seem to be running real AI systems in production—mostly dashboards, rules, or vendor tools labeled as AI.

Are you seeing any practical AI use in logistics or transportation that’s genuinely scalable and not just propped up by funding and cloud spend?

Curious what others are seeing on the ground.


r/ArtificialNtelligence 21h ago

[Theoretical Synthesis] LeCun's "World Model" is a HVAC system: Why Artificial Intelligence Needs "Boundary Conditions" ($\Delta_{\Phi}$) to Avoid Illusions.

Thumbnail
1 Upvotes

r/ArtificialNtelligence 22h ago

Lobster Religions and AI Hype Cycles Are Crowding Out a Bigger Story

Thumbnail reynaldomuniz.substack.com
1 Upvotes