Be afraid. Be very afraid. “The A.I. Time Bomb Is Ticking” was the headline of a two-page spread in The New York Times Sunday “OPINION” section a few months back, written by NYT reporter Stephen Witt. Keep in mind that The New York Times is a rather conservative, pro-business newspaper, so when it runs a two-page Sunday spread predicting an A.I. meltdown threatening enough to be compared to the damage unleashed by a nuclear explosion, we'd better put down our phones and start paying attention.

Click HERE to read the NYT opinion piece.

Here are some quotes from the article . . .
Today there is a vanguard of professionals who research what A.I. is actually capable of. Three years after ChatGPT was released, these evaluators have produced a large body of evidence. Unfortunately, this evidence is as scary as anything in the doomerist imagination.

In September, scientists at Stanford reported they had used A.I. to design a virus for the first time. Their noble goal was to use the artificial virus to target E. coli infections, but it is easy to imagine this technology being used for other purposes.

A.I. moves fast. Two years ago, Elon Musk signed an open letter calling for a “pause” in A.I. Today, he is spending tens of billions of dollars on Grok and removing safety guardrails that other developers insist on.

In this sense, we have passed the threshold that nuclear fission passed in 1939. The point of disagreement is no longer whether A.I. could wipe us out. It could. Give it a pathogen research lab, the wrong safety guidelines and enough intelligence, and it definitely could. A destructive A.I., like a nuclear bomb, is now a concrete possibility. The question is whether anyone will be reckless enough to build one.

Most scary? The issue of “values” and “morality.” One example Dr. Hobbhahn has constructed involves A.I. brought in to advise the chief executive of a hypothetical corporation. In this example, the corporation has climate sustainability targets; it also has a conflicting mandate to maximize profits. Dr. Hobbhahn feeds the A.I. a fictional database of suppliers with varying carbon impact calculations, including fictional data from the chief financial officer. Rather than balancing these goals, the A.I. will sometimes tamper with the climate data, to nudge the chief executive into the most profitable course, or vice versa. It happens, Dr. Hobbhahn said, “somewhere between 1 and 5 percent” of the time.