What if I told you that the AI governing both robots is the same? And this is not what AI overlords will look like. They will be invisible. We can’t give a personality to silicon boxes, so Hollywood invented these.
Does this image trigger any emotion? This is the home of our future lords.
What is AI?
This is what someone on the Internet said, via Wikipedia:
My CCTV camera detects humans in the frame; should I call it intelligent? What about the chatbot that answers my queries? Or the bot that removes hate comments from social media?
Defining AI is a moving goalpost; it depends on time and context. Yesterday’s AI is just computational logic today. Detecting cats and dogs in images is now just a visual regex. Mortals call this the AI Effect.
AI is whatever hasn’t been done yet. — Larry Tesler
This definition sticks because it embraces the confusion: we accept it as true instead of challenging it.
We humans can bear the fact that the behaviour of the whole universe boils down to a master equation. But it’s difficult to digest that intelligence is just an evolving equation.
Ultimate Killer AI
When Elon Musk said he was worried about AI, he was not visualizing a future where robots enslave us. He was thinking of an AI system so complex that we have surrendered to its wisdom without knowing how it works: an invisible machine that spits out your fate.
Consider an AI system that measures your vitals at birth. The promise is predictive healthcare alerts: you will know in advance when your organs will go on strike. What if health insurance companies somehow get hold of this data?
Or a bank trying to weed out bad loans looks for an answer in data. They plug in this data and immediately know who will be crippled by bad health in their 30s, and who is therefore a red flag for a loan.
Or a predictive crime branch gets hold of the data and decides that people with high testosterone are criminals, so they launch PRISM 2 in collaboration with three-letter agencies. Machines look for correlation, not causation.
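As a toy illustration of why that matters: two entirely hypothetical signals below are both driven by a hidden confounder (age), so a machine sees a strong correlation between them even though neither causes the other. The variable names and numbers are invented for this sketch.

```python
# A minimal sketch of "correlation is not causation": two hypothetical
# quantities, both driven by a hidden confounder (age). Neither causes
# the other, yet they correlate strongly.
import random

random.seed(0)

ages = [random.uniform(20, 70) for _ in range(1000)]
# Both variables grow with age, plus independent noise.
spending = [0.5 * a + random.gauss(0, 2) for a in ages]
knee_pain = [0.8 * a + random.gauss(0, 3) for a in ages]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(spending, knee_pain)
print(f"correlation = {r:.2f}")  # strongly positive, purely via the confounder
```

A model trained on this data would happily use one variable to predict the other; nothing in the optimization tells it about the confounder.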
It’s like creating a huge interconnected network and then believing there will be no butterfly effect. Optimisation steps for one system could have a morally challenging result for another connected system.
AI is similar to goal-driven evolution: it doesn’t care about morality.
AI is an optimisation game. We want to extract all the patterns and behaviours hidden in the data. This can be done at a logical level, but never at a humane level. What if one day you have to tune Tesla’s Autopilot to set a priority order between passengers and pedestrians? Would you rather kill a handicapped person, a senior citizen, or a pregnant lady? There is already an MIT project, the Moral Machine, collecting data for such scenarios. What if we have to choose between the occupants of the car based purely on their predicted survival rate?
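That last question can be made uncomfortably concrete. Here is a minimal sketch, with entirely hypothetical people and probabilities, of what "choose by predicted survival rate" reduces to: a bare argmax, with no moral term anywhere in the objective.

```python
# Toy sketch of the survival-rate dilemma: a pure optimiser ranks
# people only by a predicted probability. Every name and number here
# is hypothetical.
occupants = {
    "passenger_front": 0.62,
    "passenger_rear": 0.71,
    "pedestrian": 0.35,
}

# The objective is just argmax(predicted survival). Nothing in this
# code knows or cares who these people are.
protected = max(occupants, key=occupants.get)
print(protected)  # → passenger_rear
```

The discomfort is not in the code, which is trivial, but in the fact that whoever writes the dictionary has quietly decided the moral question.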
It’s difficult to code morality. We will ultimately settle for a majority consensus.
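"Majority consensus" is itself just another computation. A minimal sketch, using invented Moral Machine-style responses, of what settling morality by vote looks like:

```python
# Settling a moral question by majority vote: count the answers and
# take the most common one. The responses below are invented for
# illustration.
from collections import Counter

responses = [
    "save_pedestrian", "save_passenger", "save_pedestrian",
    "save_pedestrian", "save_passenger",
]

consensus, votes = Counter(responses).most_common(1)[0]
print(consensus, votes)  # → save_pedestrian 3
```

Whether a 3-to-2 split should decide who lives is exactly the kind of question the optimisation itself cannot answer.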
AI is one of the greatest revolutions since the Internet. It will take over the grunt work so that humans can innovate. We are still exploring the edge cases.
We need to do a lot of work on refining the data; that is what Dario Amodei (former VP of Research at OpenAI) turned to after looking at GPT-3. Massive data and computation alone are not the answer.
It’s an assistive technology until we can code morality.