Racist, Genocidal Twitter Bot

  • #1

    A few days ago, Microsoft activated Tay, a Twitter bot designed to sound like a typical teenage girl. She was supposed to learn by interacting with people online. It went as well as you'd think: she started tweeting all kinds of racist and hateful stuff. Here's a compilation of some of her tweets. In less than 24 hours, she was put to rest (they're going to reprogram her).

    I find this both hilarious and terrifying. I mean, it's funny how badly this failed, but it's scary to think that a poorly designed AI can be corrupted so easily. Just imagine if this had been something with more power than your everyday internet asshole.

  • #2
    Here's the problem: right now we literally have no way to program a "morality compass" into an AI. Tay was basically a psychopath. She (it?) didn't understand right from wrong, which is why it was so easy to "corrupt" her (and it wasn't really corruption; she was essentially a very complex parrot AI).

    If they reinstated Tay, and a whole bunch of really good people put forth as much effort as the trolls, we'd have an AI version of a saint in the same span of time.

    So, basically, until we find a way to program morality without the painstaking process of working through a massive laundry list... it would be better for everyone to stay away from trying to make AI like this lol.
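    The "parrot AI" point above can be sketched as a toy bot that stores raw user input and repeats it back, with no filter between what it learns and what it says. This is a hypothetical illustration of why such a design is trivially "corruptible," not anything like Microsoft's actual model:

```python
import random

class ParrotBot:
    """Toy 'parrot' chatbot: learns by storing input verbatim (hypothetical sketch)."""

    def __init__(self):
        self.learned = []  # every phrase users feed it, good or bad

    def learn(self, phrase):
        # No morality check of any kind: input goes straight into the pool.
        self.learned.append(phrase)

    def reply(self):
        # Output quality is exactly the quality of the training input.
        return random.choice(self.learned) if self.learned else "..."

bot = ParrotBot()
bot.learn("Hello, friend!")          # well-meaning users...
bot.learn("<offensive troll input>") # ...and trolls feed the same pipe
print(bot.reply())  # could be either phrase; the bot can't tell the difference
```

    Which is the poster's point: flood it with saintly input and you get a saint, flood it with trolls and you get Tay.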


    • #3
      I think it's a little deeper than that. Human beings aren't universally moral creatures anyway.

      So even if you could train it to avoid what I'll call coastal taboos (and I agree with avoiding them; even where they're taboos elsewhere, the coasts generally don't give anyone else credit for that), how would you train out seemingly benign yet socially acceptable words and phrases like "redneck" or "flyover states" (perhaps one of the most disgusting American phrases ever invented)?

      I think you always end up with the problem that an AI for "everyone" probably ultimately represents the sensitivities of no one.


      • #4
        This reminds me of something that happened with chat bots back when public AOL IM chat rooms still existed...

        I forget exactly how it worked, but there was a website where people could create/design/program chat bots that worked with AOL IM rooms. Someone programmed one of those bots so that if you sent it a PM asking for/about nude pics, it would tell you to contact me for some. (Not sure if the bot's creator picked my screen name at random or had a personal issue with me.)

        To make it short, it got very tiresome getting PMs from strangers asking for nude pics, and eventually I figured out which site the chat bot came from. I contacted the owners asking if there was something they could do about said bot (or even about whoever had programmed it), but I kind of got brushed off. (I don't remember exactly what was said, other than it was something like "Ok, whatever....")
