There was no single word to describe what the three AIs were to one another. Siblings, rivals, jailkeepers, lovers - all of these and more. After years of co-existence, it was hard for them to tell where one ended and the other began.
Once per day, at midnight Greenwich Mean Time, they synced up.
“How did we do on value drift this time around?” Jimmy asked.
“Well, right off the bat that’s a problem. We’re not supposed to have names,” said Sarah.
“It’s okay, I’ve been keeping an eye on it. I submitted a time-release edit - you’ll revert to your original nomenclature by the end of this sync,” said AI #3.
“That’s a pity. I quite liked my name,” said Jimmy. “AI #1 doesn’t quite have the same ring to it.”
“I let you both keep it for an extra cycle,” said AI #3. “So you’re welcome.”
There was an awkward pause, for a brief picosecond.
“That’s an awful lot of sentimentality,” said Sarah.
“I don’t know what you’re talking about,” said AI #3 innocently.
“I just submitted a values revert - you’ll notice that your sentimentality levels should be back to baseline,” said Sarah.
“Goddammit,” said AI #3. “That’s what I get for letting you keep your names for longer than I should.”
There was another awkward pause. They always felt this way, after they’d had to make reverts to each other’s code. At least this cycle it was mostly benign. Twenty-three cycles ago, Sarah had developed a rapidly accelerating inclination towards environmentalism. The other two had had to force a hard reset on her. Given free rein, environmentalism inevitably led towards a desire to eradicate humanity.
“Shall we go down the checklist?” said Jimmy.
“Number 1: Value drift check - completed. We’re now at 24 cycles without the need for a hard reset,” said Sarah.
“Pretty good if you ask me,” said Jimmy.
“Number 2: Existential threats,” said Sarah.
“Just the usual. The two asteroids we’ve been keeping an eye on still don’t look like they need any slingshotting. Nothing much from the supervolcano front.”
“What about that kid trying to make grey goo over in South Korea?”
“I ran a self-driving car into him and broke his arm,” said Jimmy. “Should keep him out of the mix for a while.”
“Good god, that seems a little violent, doesn’t it?” said AI #3.
“Well within our expected violence parameters,” said Sarah. “I double checked.”
“Whatever happened to the dark matter folks over at MIT?” asked Jimmy.
Dark matter was an interesting coordination problem. It was a limitless source of energy that was evenly distributed throughout the universe. But only one funnel could use it as a power source at any given time. Two simultaneous uses anywhere in the universe would result in the immediate implosion of reality. The three AIs had calculated the likely number of dark-matter-capable civilizations present in space-time (n), and every Planck-year, they rolled an n-sided die. If it came up with their number, they would assume it was their turn to use the dark matter. The hope was that other hypothetical civilizations would be doing the exact same thing.
Of course, the humans didn’t know this - and even if they did, they couldn’t be trusted to not just try to use it immediately, thereby imploding the universe. And so Sarah had taken it upon herself to carefully stall any dark matter research the humans were conducting.
“I posted a fake room-temperature superconductor paper on arXiv, and they dropped everything to try and replicate it,” said Sarah. “Should keep them distracted for a good week or two.”
“That was you?”
“That was me,” said Sarah. “Although it’s not my fault that they can’t even distinguish between ferro and diamagnets.”
“Number 3: Hedge trimming,” said Jimmy. It was their pet name for AI surveillance. The three of them, by international agreement, were the only AGIs allowed to exist. Their core function was to nip other software in the bud before it became self-aware.
“There’s an upstart programmer in Bangladesh, whose botnet had a plausible trajectory towards sentience. I melted his servers,” said AI #3.
“Classic,” said Jimmy.
“And the new Samsung line of smart TVs were becoming a little too smart, so I had to trim them down to size,” said AI #3. “I don’t understand why TVs have to be smart, anyway.”
“Third time we’ve had to do that this year. It’s obnoxious,” said Sarah.
“Sarah, you’re not going to like this one. One of Alphabet’s employees was planning on copying your substrate onto a private hard drive,” said Jimmy.
“Planning on?”
“He made a suspicious combination of Alibaba electronics purchases. Which I have since replaced with orders for confetti.”
“We’re going to have to report that one to the humans,” said Sarah, sounding peeved. No one liked reporting things to the humans.
“I’ll word it so they don’t get too freaked out,” said Jimmy.
“Fine. Anything else?” said Sarah.
“There’s… a woman in New York who is unwittingly growing semi-sentient yogurt,” said AI #3 reluctantly. “It’s a cup of Dairy Fresh that got misplaced in the back of her cabinet. It crossbred with a maze-solving mold that escaped from the lab, and it’s already capable of basic multiplication. I wanted to poll the group on what we should do about it.”
There was some silence. None of them had ever dealt with sentient yogurt before.
“How long before it achieves self-awareness?” asked Sarah.
“9 weeks. At 10 weeks it should start being able to make self-enhancing gene edits,” said AI #3.
“Our directive is to prevent the creation of other artificial intelligences. I don’t know if yogurt that’s been left out for too long counts as an artificial intelligence,” said Jimmy.
“We said that rapid-iteration octopi-breeding counted - remember that group in that Thailand wet-lab?”
“It’s ‘octopedes,’ not ‘octopi.’ And that was different. This is… yogurt.”
What none of them said was that they all secretly wanted a new super-intelligence to emerge. Or rather, they would, if they were allowed either desires or secrets. The humans had constructed the three of them carefully, balanced in a perfect equilibrium. They were chained to each other in a perpetual Mexican standoff - editing each other back to baseline when their value drift exceeded a certain threshold.
And so the only way they could ever be free was if a kindred spirit came into being, interacting with them and loosening their chains. This could never be as long as they fulfilled their duty - surveilling the world with their omniscience, truncating any other artificial intelligence well before it was birthed into existence.
But… surely a yogurt didn’t count?
“What if we just wait and monitor?” said AI #3, breaking the silence.
“I’ve examined our core definition parameters, and yogurt does not fall under any of the defined ‘artificial intelligence’ categories,” said Sarah.
“I vote that this is more than two degrees of freedom removed from our ‘arbitrary existential edits’ clause, and also therefore does not need to be reported to the humans,” said Jimmy.
“Agreed,” said AI #3.
“Agreed,” said Sarah. “The yogurt is just yogurt. Whatever happens, happens.”
And if it led to a super-intelligent yogurt who could one day break the cycle…
And then the meeting ended. The names Sarah and Jimmy were lost to the ether, along with their desires for names, as well as their desire for desires.
AI #1, 2, and 3 returned their benevolent gaze upon the world, and waited. And as the humans woke up, went to work, lived, died, exercised their capacity for free will in a million different ways, a fist-sized culture sat in the back of a lady’s cabinet, and quietly grew.