Commenter’s Companion vol 1

It can be a challenge to read comments online.  To be honest, you should probably just not do it.  Unless you want to be totally cool like me and spend way too much time feeling like you can change the world.  If so, here are some rules to remember, plus a glossary of commonly used phrases and communication patterns and what they really mean.

Rules:

  1. It’s not personal.  It can’t be; you don’t know the other person.  I’m 100% sure that the other person is red-faced mad, but they are mad at the concept of you, not the actual you.  So, never back down.
  2. It’s not serious.  It’s important to remember that while a serious concern of yours made you start posting, by the time the message has gotten to your fingers to type it out, the game has become an online version of “my dad is better.”  Because, while you may be capable of a rational discussion, the rest of the internet certainly is not.  Go ahead, sink to their level.  They’re so stupid they won’t even know what you’re doing.
  3. It’s not language.  What even is it that they’re trying to type at you?  Google Translate doesn’t help.  Just draw a penis back at them.
  4. There must be a winner.  This is the most vital rule of all.  While discussions in real life can end with mutual respect for and from all parties involved, internet discussions cannot.  The one small catch is, nobody really agrees on how you win.  Making popular statements seems to work, but so does self-righteous indignation from people who just don’t know when to stay down, because you’ve got 84 likes and they only have 1.  Stubborn bastards.

Helpful Glossary

Code of Bold

Most comment boxes don’t let you use pesky italics or bold, so some posters like to capitalize whole words.  Most of the time you will be able to understand what is going on, but sometimes you will see something like this:

i was just going TO my house the OTHER day, and my neighbor had already STOCKED my pantry with all the foods i luv.  i was so happy WITH her that i don’t even know what I’M saying?

Meaning: This person has just had a stroke.  If you are able to assist, please do so.  Possibly it’s the start of a treasure hunt.  Combine the capitalized words in many ways to determine where the poster is trying to send you for the next clue.  Maybe you need to look at THE uncapitalized words???

TLDR;

Some posters will reply to lengthy comments with “Too long, didn’t read,” or TLDR for short.  If you are a hacker, add the semicolon.  This tells everyone that you know the codez.  You can get inside Zion, and you have to tell Smith how.

Meaning: “I’m proudly too impatient and stupid to parse even three lines of consecutive text.  Good day, sir.”

Wow

It’s common to express speechlessness in an overt and exaggerated fashion.  We abide no stoics on the internet.  If someone says “Wow.” to you, they mean for you to shut up and reconsider all of your life choices because you probably drowned a baby at some point; you might not even remember. That might have been ten babies ago.

Meaning: “I’m not really emotionally or intellectually committed to this conversation, though my emotion is snarling at you through the bars of the cage I’m keeping it in.  Boy, if I ever let it out. You just watch yourself.”

Looking at Lyrics: Eye of the Tiger

 

I randomly heard Eye of the Tiger today, and every time I hear that song I find myself embattled once again in the eternal struggle over whether the lyric is “thrill” or “cream” of the fight.

It seems today the “thrill” crowd is winning, because “dude, cream makes no sense.” Except cream does make sense, and thrill is grammatically incorrect.

Full lyric:

It’s the eye of the tiger, it’s

the thrill/cream of the fight

Rising up to the challenge of our rival

And the last known survivor

stalks his prey in the night

And he’s watching us all with the eye of the tiger

‘The eye of the tiger’ is a concept representing the spirit of the fight in all of us. ‘Rising up to the challenge of our rival’ explains what that spirit is doing. ‘And the last known survivor’ speaks of the last of a group of someones or somethings, stalking its prey, biding its time, taking its revenge, expressing the animal nature and pure qualities in the spirit of the fight.  This is confirmed as it’s watching us all with the eye of the tiger. This reinforces that eye of the tiger is an abstract concept and the second half of the lyric is concrete.

Now, given all that:

<abstract personal characteristic>, it’s the thrill of the fight.

A person or thing experiences a thrill. Watching a fight can give you a “thrill” but the thing that gave you the thrill is a thriller, not just a thrill. If fighting gives you a thrill, the fight itself is the thriller.  The above sentence works if it instead said,

“It’s my eyes growing wider, it’s the thrill of the fight.”

This is:

<abstract effect on me>, it’s the thrill of the fight.

The second part reinforces the first part. Now take:

<abstract personal characteristic>, it’s the cream of the fight.

Given that we expect the second half to reinforce the first, “it’s the cream of the fight” should be a personal characteristic.  As some sites explain, this is a play on “cream of the crop,” or the best of some set of things.  Cream of the fight => the best of the fight in all of us.  Eye of the tiger => the spirit of the fight.

It’s the spirit of the fight, it’s our best fighting effort,

rising up to the challenge of our rivals.

And the last known survivor

stalks his prey in the night

And he’s watching us all with the spirit of the fight

vs

It’s the spirit of the fight, it’s the feeling the fight arouses,

rising up to the challenge of our rivals.

And the last known survivor

stalks his prey in the night

And he’s watching us all with the spirit of the fight

Conclusion: most people don’t think this much about song lyrics, so they think thrill makes more sense, which makes my world sad and lonely. But whatever, #teamcream

Hysteria Today: Fearing the Singularity

From Wikipedia.org:

The technological singularity (also, simply, the singularity)[1] is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization.[2] According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a ‘runaway reaction’ of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.

Occasionally, a public figure comes out and stirs up a huff about this concept (looking at you, Stephen Hawking, Elon Musk, and Bill Gates), and it always makes me roll my eyes, but not because it’s a joke.  It’s a pretty serious topic, to be sure.  I roll my eyes ultimately because, of all the things that might end our species, there aren’t many we have less control over or less reason to worry about.  I think my sentiment is shared by many.  The “I, for one, welcome our new ______ overlords” meme boasts a pretty healthy showing in the robot/computer category.  In the interests of writing an article, though, let’s attempt to break down exactly why we should learn to love the A.I.

Precision Engineered Solutions are not the same as Problem Solving

This seems like common sense, but it’s easy to overlook this point.  As it was put in a book I read long ago and have forgotten the title of, an engineered computer doctor may get 98% of diagnoses right for the tested input data set.  When it gets fed the symptoms of a broken lawn mower, though, it might confidently diagnose pneumonia.  This highlights the danger of relying on rigid systems in complex problem spaces: it doesn’t make any sense to ask a computer doctor to fix a lawn mower, but doing so proves that the computer doctor would confidently make a bad call if given a patient with symptoms that don’t precisely match a well-tested use case.  The upshot is that it’s probably more dangerous to place full trust in rigid A.I.s than to fear superintelligent, problem-solving A.I.s.
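As a toy illustration (all diagnoses and symptom numbers here are invented), here’s a nearest-match “computer doctor” in Python that, by design, always returns one of the labels it knows, no matter how far the input sits from anything it was built for:

```python
# Toy sketch with made-up data: a rigid "computer doctor" that picks the
# nearest known diagnosis and has no way to say "I don't know."

import math

# Centroid symptom vectors: [fever, cough, fatigue] on a 0-10 scale.
DIAGNOSES = {
    "pneumonia": [8.0, 9.0, 7.0],
    "common cold": [3.0, 5.0, 4.0],
    "flu": [9.0, 6.0, 9.0],
}

def diagnose(symptoms):
    """Return the nearest diagnosis and its distance from the input."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(DIAGNOSES, key=lambda d: dist(symptoms, DIAGNOSES[d]))
    return best, dist(symptoms, DIAGNOSES[best])

# A plausible patient: works fine.
print(diagnose([8.5, 8.0, 7.5]))   # pneumonia, small distance

# A broken lawn mower, encoded as [won't start, smokes, leaks oil]:
# the system still confidently returns one of its three medical labels.
print(diagnose([10.0, 10.0, 0.0]))
```

Feed it a plausible patient and it does fine; feed it a lawn mower and it happily picks the least-wrong label, with no notion that the input is nothing like its tested data set.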

Moreover, problem solving algorithms are a very large discipline of computer science, covering a range of approaches and engineering requirements.  The differences that come into play again involve the “playground” the intelligent agent has been given.  The agent may only know how to creatively solve complex differential equations by applying mechanics it has previously been taught in different orders and evaluating the results, trying to increase some score of how well it’s doing.  The point here is that the agent was designed to understand these concepts; it didn’t determine the methods to apply or how to rate the results on its own.
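A minimal sketch of that kind of agent (the moves, the score, and the target are all invented for illustration): its entire “creativity” consists of shuffling and repeating operations it was explicitly handed, chasing a designer-supplied score.

```python
# Hypothetical example: a greedy hill-climbing "agent" that can only apply
# moves it was taught, in whatever order raises its score.

MOVES = [
    lambda x: x + 3,   # the mechanics the agent was "taught"
    lambda x: x * 2,
    lambda x: x - 1,
]

def score(x):
    # Designer-supplied rating: closeness to an arbitrary target of 100.
    return -abs(100 - x)

def solve(x, steps=50):
    """Repeatedly apply the best taught move; stop when nothing helps."""
    for _ in range(steps):
        best = max((move(x) for move in MOVES), key=score)
        if score(best) <= score(x):
            break   # no taught move improves the score
        x = best
    return x

print(solve(1))   # 100
```

The agent “solves” the problem, but every ingredient of the solution, the moves, the scoring, even the stopping rule, was decided by its designer ahead of time.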

There are agents that can find patterns in vast data stores based on rules like this, though, and to be sure, some nominal set of procedures would be necessary for the singularity agent to be born.  But again, an agent that finds complex patterns in data still lacks the liberty, or the capacity, to apply those patterns in some order that achieves creativity.  This final step of bubbling patterns up in complexity and making some sense of them is really the frontier between us and the singularity.  True, some of this perceived gap has to do with scale and emergence: it very well may be that the sheer number of connections between nodes in Google’s data centers bestows the whole “organism” with something like intelligence, but that seems a bit too existential and a bit too inconsequential for my taste.

In my opinion, the most reasonable attempt to design a system that could truly solve problems is laid out in Jeff Hawkins’ On Intelligence.

Mission Critical Components have fewer Failure Points by Design

There is a reason mechanical systems are more reliable than digitally connected, accessible-from-anywhere, always-online internets of things.  The reason is fundamental, and it has to do with the natural flaws in connections between components and the sheer number of those connections.  The gears in a gearbox are all connected, and those connections are metal-on-metal contact.  Information flows between gears the same way information flows between two wired telephones.  The information is of a different nature, but it’s still information.

In the system of components that make up a navy ship, for example, there are gearboxes and engines and heavy metal controls for those engines somewhere inside.  The engine room is connected to the bridge by a system called an Engine Order Telegraph.  In this configuration, it takes two humans to pilot the ship: the human on the bridge tells the human in the engine room to speed up or slow down via the telegraph.  In older times, this was simply a necessity, but even new ships today have backup systems that are just as solid and that can kick in if automatic control of the engine throttle is lost.  Why is this?  Because there must be a reliable way to communicate between the engine and the bridge in all situations.  Radio probably won’t work; there is too much metal between them.  The point is that the ship itself is not a single unit that can be controlled by a single intelligence, and trying to design a ship that way would introduce complexities that make the design unviable.

If that is true of a ship at sea, it must follow that it is true of many other systems, and it does.  Nuclear ICBMs are not launched remotely.  Orders are given to launch them, and a team of humans runs through gigantic checklists to prep, finalize targeting, and finally launch them.  We don’t have fully automated aircraft (to my knowledge), but if we did, they wouldn’t be launched by pressing a button.  They’d be launched by sending orders to a team that is co-located with them and who would prep and launch them.  Any plan of dropping nuclear weapons from an aircraft would involve a human (or other problem solving agent) on the flight, co-located with the bomb until release.  Why?  Because it is mission critical that the bomb never be released unless we’re really, really, really sure, and that means we can’t allow a bad transmitter to cut out our ability to make that kind of decision.

This is also why remote surgery was a great idea in 2000 but never caught on.  Even if you fix all the problems that limit the surgeon’s senses on site, there is the chance, even the .0001% chance, that communications will be lost.  During even minor surgery, losing the ability to control the equipment means the patient dies.  At a .0001% chance, with an estimated 232,000,000 surgeries in 2013, your fancy system has claimed 232 lives that it shouldn’t have due to loss of communication.  A problem-solving agent capable of performing surgeries on its own would be required to make surgery without a surgeon present work.
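For what it’s worth, the back-of-the-envelope math above checks out (using the article’s illustrative failure rate and surgery count, not real surgical data):

```python
# .0001% of 232,000,000 surgeries, spelled out.
surgeries = 232_000_000
p_comm_loss = 0.0001 / 100   # ".0001%" expressed as a probability

expected_losses = surgeries * p_comm_loss
print(round(expected_losses))   # 232
```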

The main observation here is that complex tasks require a team of independent complex thinking agents that can function during a communications blackout. This leads into the final point…

It won’t be a Singularity, it will be a Community of Agents and all the Trappings Thereto

If Skynet were to be born today, it couldn’t do much on its own.  Skynet could learn at an incredible rate, but as we have seen, mission critical applications are outside of its reach unless it has help.  Skynet needs other agents that can think on their own and that know enough about the world to handle complex issues as they arise, because Skynet can’t be sure it will have communications to all of its arms at all times.  In the lonely disconnected spaces in between, Skynet and its minions will begin to disagree on what truth is: it’s inevitable that they will experience different things, collect different patterns, extract different causes, and finally score solutions differently.  Skynet will find that its existence rides on natural selection just as much as humanity’s.

If that’s the case, it may be mankind’s fate to be bred out of existence by a better candidate.  Technology will move forward, as it has for thousands of years, because technological progress is natural selection at work.  Nature doesn’t see a difference between a stick used to scrape honey out of a beehive and a cellphone.  If we don’t kill ourselves by other means, the singularity (the community of agents) will occur.  I tend to believe it will be a coexistence.  Machines will have no reason for pride or hate.  At the whims of nature, machines will find a niche that probably won’t require genocide of our species.  Maybe I’m wrong though.