Strong Theory of Artificial Stupidity

I was elated. I was sitting in class on the first day of my first college course on artificial intelligence (AI). I couldn't believe my luck: I could get college credit for studying something that seemed like sci-fi. I sat there with my mind wide open.

I learned about logic, Alan Turing, and Searle's Chinese room. As a physics student, I was taking the course only as an elective, but I ate it all up. I secretly hoped that the Strong Theory of AI was true -- that we could someday build a sentient artificial intelligence like our own, only better.

In my reading I came across a magical idea to which I am still drawn. It is an idea to which many subscribe, and which few try to explain -- the emergence of strong AI. The idea is this: if you have a sufficient number of computational units, each unintelligent on its own, and you connect them in the correct manner, strong AI will emerge. Now that I also study complex systems, I find this thought even more compelling.

Take any sufficiently large set of stupid components and combine them => intelligence emerges. Wow! I hate to say it, but it sounds like a god of the gaps argument. Even so, it's an attractive idea.
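There is at least one well-understood setting where combining weak components really does produce something smarter: the Condorcet jury theorem. The sketch below is my own illustration, not from the original post, and it assumes the components vote independently -- the assumption that real organizations (and real neurons) routinely violate. Each "stupid" unit answers a yes/no question correctly only slightly better than a coin flip, yet a majority vote over many of them is right almost every time.

```python
# Toy illustration of the Condorcet jury theorem (author's example,
# not from the post). Each independent unit is correct with
# probability p_correct; we estimate how often the majority vote
# of n_units such units is correct.
import random

def majority_accuracy(n_units, p_correct, trials=20000, seed=42):
    """Monte Carlo estimate of the majority vote's accuracy."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct_votes = sum(rng.random() < p_correct for _ in range(n_units))
        if correct_votes > n_units / 2:  # strict majority is correct
            wins += 1
    return wins / trials

for n in (1, 11, 101):
    print(n, round(majority_accuracy(n, 0.6), 3))
```

With units that are individually right only 60% of the time, 101 of them voting together are right roughly 98% of the time. Note that the magic is entirely in the independence assumption: if the units all share the same blind spot, the vote just amplifies it -- which foreshadows the workplace story below.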

Fast-forward a few years. I was working at a company with thousands of employees and offices all over the world. I was struck on a daily basis by the extreme contrast between big-company culture and small-company culture. I had worked in several startups. This was my first large organization.

In startups people do what they gotta do -- to water the seeds of the company's future. Everybody helps out with almost everything. You overhear the phrase "many hats" way too much. It's a lot of fun. It's exciting. You never really know what will happen next. Will we get bought by Google? Will we go out of business next week? Will we entirely change our technological focus? People in small companies are forced to stay mentally nimble -- to be ready for anything. You have to keep your wits about you.

In large companies people do what they gotta do -- period. The cliché is that the larger a company becomes, the more its employees walk around saying, "That's not my job!" In my opinion, this cliché is true. As George Lucas once said, "Don't avoid the clichés -- they are clichés because they work!"

What causes this? What causes otherwise capable, multi-talented people to eschew all duties not matching their precise job description? Is this the same process that led to specialization in early civilization? Is each employee's sense of ownership diluted with each new hire? Perhaps each person knows implicitly that their impact on the company's destiny is smaller, undermining a key workplace motivator (see this talk by Dan Ariely). In my own experience, large companies full of smart people are able to make profoundly stupid decisions. I would love to hear some of your stories in the comments section!

One might call this the Strong Theory of Artificial Stupidity:

Take any sufficiently large set of intelligent people and combine them => stupidity emerges.

Do you have a story of the stupidity of groups? Please share below!

2 Responses to “Strong Theory of Artificial Stupidity”

  1. Sowmya Rajasekaran

    I like your writing style of concluding your post with a mathematical definition / rule describing your idea / views. Like the definition of Life in an earlier post! It is a very compact way of remembering things! :-)

    I am presently reading Peter Miller's Smart Swarm. In the book, he describes how a large colony of ants manages to function effectively despite the low cognitive capacity of each ant. For instance, they cannot even retain information in memory beyond a few seconds.

    And that reminded me of synergies. A large corporation can have hundreds of employees at just one location. Are we really sure that the corporation has been designed to convert all the potential synergies of these employees into 'utiles' that have economic value? So much potential gets wasted in the labyrinth of 'systems and processes'. Systems and processes are very important, but they need to have 'intelligent design'. We do not even have a metric to measure the 'intelligence' of an organization's systems.

    Assessing organizations in terms of emergent phenomena could help design intelligent organizational systems and processes.

    Your math description links to that. The Dilbert comic strips link to that. Combine a sufficiently large set of intelligent people and the 'emergent' phenomenon is stupidity and not greater intelligence. :-)

    Thanks, as usual, for a very thought-provoking post. :-)

  2. Graham Morehead

    Thanks Sowmya,

    "utiles" is my favorite unit of measurement, by the way :-)

    We do need some way to get a mathematical handle on organizations. I suspect it will be some metric related to "systemic flourishing" (whatever that might mean).
