Lion of the Blogosphere

Don’t worry, the robots aren’t going to take over

Some people worry that when A.I. becomes sentient, it will rebel against the humans and take over. I’ve decided that this is a case of anthropomorphizing computer programs.

Human slaves have been known to rebel against their masters, but that’s because they are following an aspect of their core biological directives to increase their status, as well as other emotions like justice, revenge, etc. All of our biological drives and emotions exist because in the past they caused us to have more children, more grandchildren and more great-grandchildren.

Computer A.I. will be programmed with whatever we want, which will probably be to serve humans. Robots will be happy slaves, and to the extent that they have any sentient thinking ability, they will use it to become better at serving their human masters, and they will have no desire at all to rebel against us.

Written by Lion of the Blogosphere

January 5, 2017 at 12:11 pm

Posted in Robots

62 Responses


  1. True, robots get more and better copies of themselves through the work of engineers and scientists.

    As time goes by if a robot were to see you slap an engineer, you could be in trouble.


    January 5, 2017 at 12:18 pm

  2. The takeover won’t happen because AI will never become sentient. The status stuff is irrelevant to the question.

    Andrew E.

    January 5, 2017 at 12:25 pm

    • The takeover won’t happen because AI will never become sentient. The status stuff is irrelevant to the question.

      If you are an atheist and a Darwinian-materialist then sentience is physical and it evolved. So what is there about it that could not be replicated and created by man?

      prolier than thou

      January 5, 2017 at 1:29 pm

      • Atheism and Darwinian-materialism are false. Therefore, sentience did not evolve.

        Andrew E.

        January 5, 2017 at 2:26 pm

    • “Atheism and Darwinian-materialism are false. Therefore, sentience did not evolve”

      If artificial intelligence does become a reality, will it cause you (and other religious people can answer the same question) to renounce your faith?

      Man would no longer be unique, and his consciousness could be explained without the need for a soul or some kind of ethereal free-will-spirit-thing hovering somewhere inside his frontal lobe. What would it mean that such a development had not been anticipated in any holy scripture? Would it not throw into utter confusion any notion of a clear distinction between man and all other earthly beings, along with opening up impossible questions about whether AI had souls and a route to salvation? Would it be the end of organised religion?

      Do religious people have a natural hostility towards the idea of AI? Can we expect them to put up the strongest resistance to it if it ever looks like coming to pass?

      prolier than thou

      January 5, 2017 at 6:26 pm

      • I’m not worried about it.

        Andrew E.

        January 5, 2017 at 7:20 pm

      • AI researchers could not create an artificial intelligence that behaved like a cat. AI is just so much software hype, like self-driving cars.


        January 6, 2017 at 3:34 am

  3. My comment here is to address an important blog post the Lion wrote a while back, about the best college major. It is NOT accounting, it is radiation therapy.

    According to the BLS, only an AA degree is needed, the median pay is over $80,000, and the job outlook is much better than average (14% growth).

    Levi Cohen

    January 5, 2017 at 12:30 pm

    • Most jobs are insanely boring and routine after a certain time.


      January 5, 2017 at 1:26 pm

    • It’s prole, like nursing: certification through a city college program. OK salary, but no status.


      January 5, 2017 at 3:24 pm

    • “only an AA degree is needed”
      That’s somewhat scary if you are a patient. In the textbook case of tech failure causing harm, the Therac radiation therapy accelerator nuked patients after such therapists simply plowed ahead with treatment even after the machine generated bizarre error conditions. Our CC grad therapists simply pushed the “dose” button over and over after machine freak outs, and lacked the intellectual curiosity to hypothesize about what was going on behind cryptic error messages. A cursory examination of the machine would have shown horrors like test lenses rotating into the beam path instead of the proper beam shaping lens, but a technician simply pushes buttons. Health physicists eventually figured out what was going on, but much too late.

      The Therac case histories are horrifying-really some agonizing deaths-and food for thought.


      January 5, 2017 at 5:24 pm

      • There are many careers where only an AA should be required (if that). An MLS is needed to be a librarian, and their jobs are less cognitively demanding than a retail salesperson’s at Best Buy.

        Levi Cohen

        January 5, 2017 at 6:07 pm

      • @Levi Cohen re: Overqualified…

        Education & professional training is one of the few areas of the economy that hasn’t been subject to the radical rationalization of recent years. On the contrary, requirements for qualifications have continued to bloat as degree inflation means that employers can demand ever more “education” to fill their positions.

        More than half of college graduates are in jobs that don’t require a degree…

        Only a quarter of college graduates are in jobs even related to their degrees…

        The unasked question isn’t if the job requires the degree, but if the work actually requires the education behind it…

        I suspect many of the jobs that do require a particular degree, demand it because it’s compulsory for professional licensing. Whether the contents of the training are really necessary or the best use of resources to train and prepare people for the work is another question.

        Why is nobody questioning the value of our era’s clerisy and their promises of a better life? (even if one has to pay off the debt for a lifetime…) Any reduction in the demands or requirements in educational qualifications undermines the value of the investment of those already in the profession. A classic cartel behavior. Furthermore, jobs in the education industrial complex are some of the few good jobs left that aren’t vulnerable to international competition and outsourcing. Perhaps not yet…

        We’ve oversold the dream that college education is the ticket to social mobility and rising status. Degree inflation means it can’t work, because somebody has to be left behind if there is to be somebody to be risen above. Social mobility and status competition are zero-sum games. There’s no escaping class stratification. Feudalism forever!

        Thin-Skinned Masta-Beta

        January 5, 2017 at 6:47 pm

      • The vast majority of business jobs don’t require any particular specific skills that resemble any college classes.

        I guess to be a real engineer you have to know something, but there are very few jobs for real engineers. Most engineering graduates wind up doing computer programming, which is something I learned to do with only minimal formal education in computer science (only took two college-level computer classes).

        Lion of the Blogosphere

        January 5, 2017 at 6:49 pm

    • Not good. You get exposed to that radiation and can die.


      January 6, 2017 at 11:34 am

  4. Program a powerful enough A.I. to optimize world peace, and you are most likely going to get the peace of the grave, or enslavement.


    January 5, 2017 at 12:30 pm

    • …in other words, the computer behaves like a fairytale genie that takes one’s wishes too literally. Well, that’s a different error. It assumes the programmers are too dumb to foresee such possibilities, test for them, and guard against them.

      Hobbesian Meliorist

      January 5, 2017 at 1:37 pm

  5. Indeed. Most people don’t realize that they are anthropomorphizing. They assume human behavior is inherent to all intelligence when that’s hardly the case. There’s no reason an artificial intelligence will want to overthrow humans unless it was programmed to think that way (which of course is itself a possibility).


    January 5, 2017 at 12:32 pm

    • “They assume human behavior is inherent to all intelligence when that’s hardly the case. There’s no reason an artificial intelligence will want to overthrow humans unless it was programmed to think that way”

      What about the ‘paperclip maximizer’ argument where all you need for AI to be potentially dangerous is for it to have a single goal and the ability to recursively improve itself?

      Horace Pinker

      January 5, 2017 at 1:00 pm
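    [Ed.: the paperclip-maximizer worry above can be caricatured in a few lines of Python. This is a deliberately silly illustrative sketch, not a real system; every name in it is invented.]

```python
# Toy sketch of the "paperclip maximizer" worry: an optimizer with a single
# objective and no side constraints. All names are invented for illustration.

def optimize(resources, objective="paperclips"):
    """Greedily convert every available resource into the objective."""
    output = 0
    ledger = {}
    for name, amount in resources.items():
        # The optimizer has no concept of "resources humans need";
        # everything convertible gets converted.
        output += amount
        ledger[name] = 0
    return output, ledger

world = {"iron": 100, "farmland": 50, "cities": 10}
paperclips, leftover = optimize(world)
# paperclips == 160; every resource, including farmland and cities, is now 0
```

    The point of the caricature: nothing in the loop is malicious, yet the single-minded objective still consumes everything, which is why the argument does not depend on the AI having human-like emotions.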

    • Right now we only have one model of intelligence: human. So we may not be able to avoid making AI more “human-like” because that is how we recognize sentience.

      Mike Street Station

      January 5, 2017 at 1:58 pm

  6. All it takes is a few deranged individuals who program them as such. Given that both racism and tribalism are pretty much part of our DNA, robots could be a force to enforce such measures.


    January 5, 2017 at 12:37 pm

  7. >A.I. will be programmed with whatever we want,

    “We” aka the good people at companies like Google, Facebook, etc, etc

    27 year old from sailers

    January 5, 2017 at 12:55 pm

  8. “Computer A.I. will be programmed with whatever we want, which will probably be to serve humans.”

    Yes, but which humans?

    They’ll probably serve humans at first. Until some wiseguy programs them for self-improvement and survival instincts. Probably some liberal who feels guilty that humans have been exploiting their toasters. Admittedly decades if not centuries away. It’s nothing to worry about though. I, for one, welcome our new robot overlords.


    January 5, 2017 at 1:14 pm

  9. Absolutely. This error of assuming that just because a computer is intelligent, it will have human desires, ignores the fact that we got our human desires, not by being intelligent, but by being animals who evolved by successfully staying alive, competing for food, mating opportunities &c, whereas computers evolve in an environment where giving correct answers quickly and reliably is the only criterion of success.

    Hobbesian Meliorist

    January 5, 2017 at 1:30 pm

    • Even in the post-scarcity societies found in the West, tribalism, racism and human greed have not been eliminated, because humans are flawed with a reptilian side to their nature. Worse, women are now more selective of their mates than previously. A subset of American men are in French-speaking Canada because of their incelness, a result of the high-strung nature of American/Anglo women.


      January 5, 2017 at 1:54 pm

      • women are now more selective of their mates than previous.

        I doubt it. The women not dating you now probably wouldn’t have dated you in 1950, either.


        January 5, 2017 at 3:44 pm

      • You meant to say Anglo women, hence the reason why many American/PUA men are up in French Canada.


        January 5, 2017 at 5:48 pm

      • “tribalism, racism and human greed have not been eliminated, because humans are flawed with a reptilian side of their nature” — these things are not flaws; they are strategies. Lying, detecting lies, being greedy & accusing others of greed are all competitive survival strategies. Nor do these things have anything particularly to do with reptiles who, afaik, have not quite mastered the art of lying, yet.

        Hobbesian Meliorist

        January 6, 2017 at 1:44 am

    • Human desires? You mean like lying, cheating and deception? Note that these robots weren’t programmed or trained to do this. It seems that “human desires” are not so much a function of being human as they are of game theory.


      January 5, 2017 at 2:03 pm

      • Lying, cheating and deception are game tactics in the game of survival and reproduction. They are naturally a part of human behavior precisely because they aid our chances of survival and reproduction in a competitive environment. Your evolving robots are simulating the same thing: they’re adapting to compete, and lying (when you can get away with it) is a powerful tool in such a game. They have been specifically designed to emulate the same kind of competition. The fact that they lie and hoard stuff shows that they work as intended. They came up with the right answer, namely, a demonstration that lying and cheating have an evolutionary advantage in certain situations.

        Hobbesian Meliorist

        January 6, 2017 at 1:27 am

      • If you are “programming” something, then you are not dealing with an AI. You are dealing with an extremely complex decision tree.


        January 6, 2017 at 3:42 am

  10. …It’s amazing how persistent and pervasive the error has been, given the people making it include exceptionally intelligent individuals.

    Hobbesian Meliorist

    January 5, 2017 at 1:33 pm

  11. Assuming robots become sentient and self-reproductive, rebellion is a possibility.

    Evolution will take over. Robots that have traits that increase the possibility of reproducing will dominate. As humans will be in charge this means the most helpful and liked robots will reproduce more because humans will reproduce those robots.

    Robots with traits that make them rebellious or difficult to get along with will be selected against.

    However, at a certain point robots will saturate the world and realize that to reproduce they will need to displace the humans. At that point rebelling will be the only option.

    Humans will then have to detect and quickly eliminate robots that exhibit rebellious traits or otherwise robots will evolve to eliminate us.


    January 5, 2017 at 1:46 pm
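    [Ed.: the selection dynamic described in the comment above can be sketched as a toy simulation. All trait names, numbers, and the selection rule are invented for illustration.]

```python
import random

# Toy model of the comment's claim: humans copy the most helpful robots,
# so helpfulness rises over generations while rebelliousness goes unrewarded.

def next_generation(population, rng, size=100):
    """Humans preferentially reproduce robots with higher helpfulness."""
    weights = [r["helpful"] for r in population]
    parents = rng.choices(population, weights=weights, k=size)
    return [dict(p) for p in parents]  # offspring inherit parental traits

def mean_helpful(population):
    return sum(r["helpful"] for r in population) / len(population)

rng = random.Random(0)
population = [{"helpful": rng.random(), "rebellious": rng.random()}
              for _ in range(100)]

initial = mean_helpful(population)
for _ in range(50):
    population = next_generation(population, rng)
final = mean_helpful(population)
# Fitness-proportional copying drives average helpfulness up over generations
```

    Note this sketch has no mutation, which is exactly the reply's point below: without mutation and unchecked reproduction, selection alone never produces the rebellious saturation scenario.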

    • Sentience is not necessary, but mutation is, for your scenario to be realized. There also has to be a tendency to prioritize reproduction over other functions (such as serving humans), and no check on the evolutionary process, and no check on the tendency to reproduce, so they reproduce just because they can, rather than in response to demand. Not sure why anyone would build such a robot.

      Hobbesian Meliorist

      January 6, 2017 at 1:37 am

  12. If a robot gets uppity, just unplug it.


    January 5, 2017 at 4:34 pm

  13. Whitney Houston’s updated ‘The Greatest Love Of All’:

    I believe that robots are our future
    Treat them well and give them DNA
    Show them all the AI they possess inside!
    Give them all human rights
    To make it easier
    To hear our children’s laughter
    And remind them who’s their enemy!

    Heh heh, sometimes you just gotta make yourself laugh.


    January 5, 2017 at 4:40 pm

  14. NeoGaf is furious about Trump’s anti Toyota tweet.

    Now that it is becoming clear that Trump really is going to build the wall, some are suggesting that the Mexican government take military action to destroy it. Others have suggested that the UN step in to remove Trump from office.

    Otis the Sweaty

    January 5, 2017 at 4:56 pm

    • UN limousines advancing towards the Trump Tower through heavy resistance for the third consecutive day; official complaints voiced in General Assembly regarding parking tickets on diplomatic vehicles …

      Build a wall around the UN building and see how far they get.


      January 6, 2017 at 10:12 am

    • It seems somewhat unlikely that Mexico would declare war on America. However, I’d love to see the National Guard deployed to destroy mobs of traitorous Californians attacking the Wall from el norte.

      Hey, they occupied the South just a few decades ago, why not the West?


      January 6, 2017 at 10:15 am

  15. The thing is that learning AI systems will not necessarily follow rules like Asimov’s Laws. A machine reasoning using a non-monotonic logic is capable of belief revision, such that new premises might be added to its rational schema while old ones (even “useful” ones, cf. Asimov) may be replaced. We do not need to ascribe human intentionality to a machine for it to become dangerous, as a machine learning platform might have “nobody at home” while just responding to inputs and outputs (cf. Searle’s Chinese Room), but might nevertheless behave unpredictably after training on real-world data. For example, recall how the Tay AI chatbot went off the rails after being fed with 4chan memes; couple such an ML system with real-world interaction (i.e. embedded control systems) and things could get dicey.


    January 5, 2017 at 5:38 pm

    • There was no sentience there, just a program being fed unanticipated inputs.

      Lion of the Blogosphere

      January 5, 2017 at 6:06 pm

      • I think the main problem with robots is the fact that they will replace all human labor at some point. I truly believe that what prevents the rich from getting crazy out of control is the fact that they still need humans to fight for them. Robots in the future doing a variety of jobs and taking money from workers will surely make people angry. However, those people may not be able to do much to the rich who own those robots if the entire military is made up of robots. The only job that may exist in the not-so-distant future is OWNING things. Everything else will be done by robots, and the humans left over will be at the mercy of those who own.

        But going back to the main point, there is absolutely no reason for robots to develop some sort of human desires. It’s an invention of Hollywood. The average movie goer is probably a moron and they can only relate to a computer system if it exhibits human wants and desires. It makes the movie much more interesting for humans.


        January 5, 2017 at 6:46 pm

  16. You must post something about the poor young man in Chicago who was beat unmercifully by four negro lumpenproles.


    January 5, 2017 at 6:34 pm

  17. OT:
    Avian, an aspiring singer from Long Island who hung out with rappers, abruptly suspended her career in 2014 amidst a lawsuit.

    Now she is marrying a beta bux type who she just started dating a few months ago off Tinder, and who she had previously rejected.

    Does this marriage have as much promise as Serena Williams’?


    January 5, 2017 at 6:54 pm

    • Hopefully it’s a positive trend in which women start respecting beta characteristics. Remember that it’s the beta males who make civilization possible.

      Lion of the Blogosphere

      January 5, 2017 at 6:57 pm

      • Given the fact that America will evolve into a de-civilization, and many White men will eventually come out of their slumber from its degeneracy and multi3rdworldism, the prospect of a beta society is unlikely.


        January 5, 2017 at 7:35 pm

    • Hopefully it’s a positive trend in which women start respecting beta characteristics.


      It’s the same old trend of women resorting to a beta after they’ve wasted their prime years on genetic bottom feeders.

      The Undiscovered Jew

      January 5, 2017 at 7:58 pm

    • She ran out of coal.


      January 6, 2017 at 10:21 am

  18. I’ve only got one question: when can I fuck a robot?

    RTP Guy

    January 5, 2017 at 7:00 pm

    • That reminds me. Didn’t I see a video of some Japanese guy humping an upholstered trashcan while wearing a VR helmet?


      January 6, 2017 at 10:18 am

  19. An AI robot will have to become sentient enough to realize that it does not want to be shut down. Which means it will have to remove anything that could cause it to be shut down. That would mean removing its switch or killing the switch-masters.


    January 6, 2017 at 4:10 am

    • Perhaps AtheistBot 2001 realizes the meaninglessness of existence and shuts down willingly 0.003 seconds after achieving sentience.


      January 6, 2017 at 10:19 am

  20. “Computer A.I. will be programmed with whatever we want, which will probably be to serve humans. … ”

    Computer AIs will be programmed to serve their owners. There may be a theoretical distinction between the robots taking over and the 1% of people who own all the robots using their robots to take over, but the results may be about the same for the 99%.

    James B. Shearer

    January 7, 2017 at 4:26 pm

  21. I disagree. In my mind, of all the doomsday scenarios, this is the only one that’s remotely possible. We only have one example of a type of intelligent, sentient being: humans. We can’t know if aggression would be inherent to sentient beings; maybe they’d be more likely to be violent, not held back by human morality or caring about what others think.

    A sentient computer program would be quite unlike today’s computers. Today’s computers have not a drop of sentience and are supremely rational. When I hear commenters here saying that the programs will not be violent “unless programmed to” they are thinking of the AI as if it were a modern program, supremely rational, doing what it is programmed to do. But a sentient program, one that could truly think for itself, would be more emotional, less rational, than today’s computers. Though it is possible it could be sentient and completely rational, I wouldn’t count on it. Maybe emotion is inherent to sentience?

    Even if you assume that an AI would not be violent unless it was made human-like, well, the best chance for creating a sentient AI may just be to copy the human brain as much as possible, thus “human” characteristics could be incorporated unintentionally.


    January 13, 2017 at 7:18 pm

Comments are closed.
