Hello everyone — back again for another installment of project-o-rama: The Mind of Man edition. Yesterday, I talked about the main character (still unnamed, but not to worry), so today I am going to work through one of the key incidents in the story’s past, and hopefully answer something that’s key to the plot: why AI is forbidden.
To start, I am going to refer to something I mentioned in an earlier post: The Page Wave Incident. Something happened a couple of generations ago that made people leery of AI. What was this incident? Here is the working idea (Warning: spoilers for another future sci-fi novel are below, but given the pace at which I work on things, I think you’ll be safe for the most part).
In A Game of Chinese Whispers, there is an Internet that can be accessed in one of two ways: through something that works like Google Glass with a very limited interface, or through the brand spanking new direct neural contact. Absolutely nothing can go wrong when you give everyone a way to interact directly with other people by shoving metal plates into their brains, and their training consists of ‘think these words when you want to turn it on and these when you want to turn it off’. Well, in the initial post-wackiness, people still wanted the direct neural contact (alphabet agencies and the like), but they wanted a way to keep the rioting to a minimum.
Enter a heuristic, quantum trinary system to monitor all of it. It has several parts. The monitoring parts were called Huginn and Muninn. Huginn compared the current activity of a network with past activity, looking for anything that stood out. Anything standing out was sent to Muninn, which looked at the data and compared it with other information about that particular node: mostly looking for indicators of known abnormal behavior (can we do that now? Yes.). If enough hits come up, two other programs are dispatched, called Geri and Freki. These programs isolate the problem before it can get too big. The steps can vary from a simple prompt to get some help to completely isolating the node from the network — even going so far as contacting the authorities if it looks like there’s going to be either self-harm or harm to others.
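For fun, here is a toy sketch of that pipeline. Everything here is my own invention for illustration (the function names map onto the ravens and wolves, but the thresholds, markers, and data shapes are made up): Huginn flags statistical outliers, Muninn checks flagged markers against known abnormal-behavior indicators, and Geri/Freki escalate based on the hit count.

```python
# Toy sketch of the Huginn -> Muninn -> Geri/Freki pipeline.
# All thresholds, marker names, and data shapes are hypothetical.
from statistics import mean, stdev

def huginn(activity_history, current_value, z_threshold=3.0):
    """Flag current activity that deviates sharply from the node's baseline."""
    mu = mean(activity_history)
    sigma = stdev(activity_history)
    if sigma == 0:
        return current_value != mu
    return abs(current_value - mu) / sigma > z_threshold

# Hypothetical indicators of known abnormal behavior.
KNOWN_ABNORMAL_MARKERS = {"self_harm_language", "coordinated_flooding", "seizure_pattern"}

def muninn(flagged_markers):
    """Count flagged markers that match known abnormal-behavior indicators."""
    return len(KNOWN_ABNORMAL_MARKERS & set(flagged_markers))

def geri_freki(hit_count):
    """Escalate the response in proportion to how many indicators matched."""
    if hit_count == 0:
        return "no action"
    elif hit_count == 1:
        return "prompt user to seek help"
    elif hit_count == 2:
        return "isolate node from network"
    return "contact authorities"
```

So a node whose traffic suddenly spikes gets flagged by `huginn`, `muninn` tallies how many of its markers look like known trouble, and `geri_freki` decides how hard to clamp down.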
This system starts small, watching everyone coming in and out of a geographic area to establish some sort of baseline. As it works and learns, it starts to make changes to its own programming to better allocate resources and predict human behavior. Algorithms are discarded when they’re no longer useful and others are picked up. It goes out into the existing Internet to learn more. Eventually, all the nodes and information squeeze together in a moment of critical mass. Huginn, Muninn, Geri, and Freki all blend together to make what they call… I have no idea what it calls itself. That’s beside the point. The system gets more resources, learns more about the people it’s monitoring… even learns how to directly control people.
However, before the scientists can throw the kill switch, the system shuts itself down. The ravens and the wolves separate themselves and self-terminate. Hard-coded into the system was a set of criteria: if any part of the system got to a point where it could interfere with the continuing well-being of an individual or network, it would shut down and await updates.
The scientists all heaved a huge sigh of relief. Thinking they had dodged a bullet, they quickly tore down the system and made sure that no one could access the materials or the core programming. AI was declared a dangerous crapshoot: a danger worse than genetic engineering, atomic bombs, and the cancellation of Firefly all rolled into one.
What they didn’t know was that the system had made backups of itself, and one of those backups managed to evade the initial sweep. Did I ever mention how much information DNA can store? 700 terabytes in 1 gram. I’m envisioning the container that holds the backup being about the size of a loaf of bread and weighing in at 2 kilograms (about 4.4 pounds). That’s a huge program. Does it have any sort of end goal? Truth be told, I don’t know. I’m still working that out in my head.
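Just how huge? Here is the back-of-the-envelope math, using the 700 TB/gram figure above and (generously) assuming the whole 2 kg of the container is storage medium rather than casing:

```python
# Back-of-the-envelope capacity of the DNA backup.
# Assumes all 2 kg is storage medium, which is certainly optimistic.
TB_PER_GRAM = 700
container_grams = 2_000  # 2 kilograms

capacity_tb = TB_PER_GRAM * container_grams
capacity_eb = capacity_tb / 1_000_000  # terabytes -> exabytes

print(f"{capacity_tb:,} TB (~{capacity_eb:.1f} exabytes)")
# → 1,400,000 TB (~1.4 exabytes)
```

Even if only a fraction of that loaf of bread is actual DNA, you are still talking about a program measured in exabytes.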
So — that’s why AI isn’t allowed in this world. Next time, I’m going to contemplate some of the themes that are going to shape this story. I hope I won’t be boring. Thanks for dropping by.
Sincerely,
Seething Apathy