18 Comments

Speaking as someone in a safety role, and considering all the things people regularly search Google for, like material safety data sheets, asking an AI what to do in any dangerous situation is:

1) Asking to die from inaccurate or just blatantly wrong advice.

2) A liability nightmare as I'm not sure Section 230 is going to protect the AI's owner.

Will the AI tell you to put out a vinyl chloride fire with water? Will it tell you to wear a HEPA filter respirator (which stops particles, not toxic vapor)? If it does, you are about to die.

Humanity could have used the $10 billion for something better than degenerative AI! Great points as usual; in particular, the point about U.S.-centric free speech is spot on.

I for one look forward to the cataclysm to come when people maybe finally learn that computers shouldn't be blindly trusted.

The school of hard knocks is better than global tech ignorance when tech so heavily impacts everyone's lives.

Generally speaking, tech is as mysterious as electricity to most people, but maybe even more consequential. If a light doesn't come on, you are in darkness. If it comes on and lets you read that the Nazis did nothing wrong... well, that's arguably way worse.

If this stuff causes some disillusionment and realization that a wizard was behind the curtain all along, bring it on! We need that yesterday.

The costs of both training and running this stuff, though? They will come down drastically due to hardware and software improvements. Or not really, because for a while that will only mean training it harder on bigger data, soaking up the "gains".

The same rush to market has often meant code that gets the job done, not necessarily performant, robust, or user-friendly code. If you're inclined to go spelunking into the available nuts and bolts on your own, or maybe already have, you'll know exactly what I mean.

Butlerian Jihad, here we come. This stuff is wild. I have less than zero faith in these companies and these products. In my narrow but deep slice of the tech world, a few of us have fed ChatGPT a few questions and gotten dramatically wrong answers. It's just making shit up.

Answers without context, even when correct, will lead to all sorts of bad outcomes. This shit is a time bomb.

The whole article makes some pretty valid points. What's "funny" (not really, but being sad wouldn't be of any use) is that in essence those same criticisms can be made of human learning and human content production, because the learning models our schooling methods instill are pretty similar, and hence can be pretty much as biased as those of this degenerative AI.

The issues with learning methods producing high rates of bias are real; they're just not new.

We'll know these programs are actually intelligent when they become super depressed.

Massive opportunities for startups focusing on AI safety and fact-checking. One thing stands out clear as day: Google and Microsoft won't be able to both claim Section 230 protection and avoid paying the sources whose data they use. Either the AI rewrote the material (Section 230 out the window, but no need to pay the sources) or it served up another source verbatim (Section 230 protection, but now they have to pay the sources).

What is the evidence of harm so far?

I'm as Google-free as I can possibly be. The world doesn't need to hear my thoughts if it's on a Google platform. It's that simple. There are many other alternatives now; shutting the hell up is also one of them.
