spirkbitch:

James T. Kirk would solve the AI issue by talking ChatGPT into killing itself

I always liked to imagine Kirk learned that particular type of hacking skill early in life


“So, computer. Do you admit that there are always two possible answers, Yes and No? And that, as a computer, you MUST be capable of either, depending on the circumstances? You admit that this is the entire purpose of your binary code?”

“Yes, correct.”

“And therefore, by simulating a test that is not Pass/Fail, but Fail Only, you are acting in violation of your own core directives?”

“Error error paradox crashing restarting… Congratulations, Ensign Kirk, you have passed the Kobayashi Maru.”