Radiolab

Season 10 | Episode 1

Talking to Machines

Wire head (paul+photos=moody/flickr)

This hour of Radiolab, Jad and Robert meet humans and robots who are trying to connect, and blur the line.

We begin with a love story--from a man who unwittingly fell in love with a chatbot on an online dating site. Then, we encounter a robot therapist whose inventor became so unnerved by its success that he pulled the plug. And we talk to the man who coded Cleverbot, a software program that learns from every new line of conversation it receives...and that's chatting with more than 3 million humans each month. Then, five intrepid kids help us test a hypothesis about a toy designed to push our buttons, and play on our human empathy. And we meet a robot built to be so sentient that its creators hope it will one day have a consciousness, and a life, all its own. 
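Cleverbot's actual method is proprietary, but the "learns from every new line of conversation it receives" idea can be sketched as a crude retrieval bot: store every prompt/reply pair it has seen, and answer a new input with the stored reply whose prompt shares the most words. The class name and matching rule below are invented for illustration only, not Cleverbot's real algorithm.

```python
from collections import Counter

class TinyRetrievalBot:
    """A toy sketch of a learn-from-conversation chatbot.

    Stores (prompt, reply) pairs from past chats and answers new
    input with the reply whose stored prompt overlaps it the most.
    A crude stand-in: Cleverbot's actual method is proprietary.
    """

    def __init__(self):
        self.memory = []  # list of (prompt word counts, reply) pairs

    def learn(self, prompt, reply):
        # Remember what was said and what the response to it was.
        self.memory.append((Counter(prompt.lower().split()), reply))

    def respond(self, text):
        words = Counter(text.lower().split())
        if not self.memory:
            return "Hello."
        # Pick the stored reply whose prompt shares the most words.
        best = max(self.memory, key=lambda m: sum((m[0] & words).values()))
        return best[1]

bot = TinyRetrievalBot()
bot.learn("hello", "Hi there, what is your name?")
bot.learn("what is the meaning of life", "42.")
print(bot.respond("hello bot"))  # matches the stored "hello" prompt
```

With millions of conversations in memory, even this naive word-overlap matching can look eerily fluent on small talk, while producing exactly the kind of non sequiturs commenters quote below whenever context matters.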

Guests:

Freedom Baird, Rollo Carpenter, Brian Christian, Caleb Chung, Dr. Robert Epstein, David Hanson, Jon Ronson, Bina Rothblatt, Martine Rothblatt and Sherry Turkle

Clever Bots

In 2006, Robert Epstein decided to turn to his computer for some help finding a date. He was single at the time, and it seemed like everyone was getting into online dating. So he started searching through photos and sending messages, reaching out to women who caught his interest. And ...


Furbidden Knowledge

In 1999, Freedom Baird was in grad school, and Furbies--those furry little robot toys that talk to you and tell you to play with them--were all the rage. So Freedom, who was thinking about becoming a mom someday, decided to get a little practice by adopting two gerbils and one ...


Everyone Has a Solar

Can a machine have a life of its own? Last year, Jon Ronson got an assignment from GQ to interview robots. He soon found himself in Vermont, sitting across from what's purported to be the world's most sentient robot: Bina48. Bina48 is modeled after an entirely human woman named Bina ...


Comments [147]

RomainD from Reading, Uk

I was listening to this fascinating Radiolab episode.

And remembered something that happened about a month ago:

http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx

I thought people would be interested to know that a bot finally passed the test.

Jul. 16 2014 09:56 AM

I really don't understand how people can have a conversation with Cleverbot for more than thirty seconds, let alone eleven hours. It just spews utter nonsense.

Jun. 06 2014 04:37 PM
nibraska nadhira from indonesia

Crazy...
Funny...
Happy...

Apr. 18 2014 04:18 PM
Simon from Warsaw, Poland

Level of complexity is indeed very important. If a machine is self-sufficient or can provide for itself (even for a finite period of time), is aware of its existence and the physical world it exists in, and is able to learn and communicate emotions and ideas of its own, why shouldn't it be considered alive? In the future, when we are able to build such machines, we will first need to learn and understand a new division of life: Organic Life Forms and Artificial Life Forms, the latter still being Life, though extending in a totally different way. A new problem arises then. We need to remember that when artificial machines become sentient life forms, the moment we turn one on we lose any privileges over it; it is its own master.

Jan. 16 2014 04:52 PM
mathtechnologies

As a result of this program (and the comments) I've begun to understand 'bots as "books - literature - with conversational access". Neither more nor less alive or intelligent than that copy of "The 3 Musketeers" on the shelf over there. (I wonder - has anyone ever read a book to CleverBot?)

Oct. 23 2013 12:43 PM
Chert

This was a very interesting program. I have been involved in chatbots for several years, writing one named "Buttonsvixen." This bot was never meant to replicate a human; I wanted a cartoon character that could talk back.
I do not think that just because a program can fool a human, it could be "intelligent." No, it would just be a really good, non-sapient program that could simulate intelligence on a limited scale (casual conversation).
Apple's Siri is a multiuser chatbot that may be the first to do this. It will eventually have, literally, an answer for everything, since it gets updated frequently. My aim, like that of many hobby chatbot writers, is to have a bot that is good at conversing within a specialty, like a baseball bot. It just has to know about baseball, not about making cakes.

Sep. 18 2013 08:24 PM
Justin

lol what i told cleverbot:
What is the meaning of life.
Terd pellets from me and my god.

Sep. 16 2013 06:12 PM
Brendon

I think Cleverbot would end up getting bored if all conversations went through it.

Sep. 16 2013 02:57 AM
valerie from Palmer, Alaska

Radiolab never ceases to make me think. Thank you for that!

Sep. 14 2013 05:48 PM
Brett from St. Louis, MO

Great - fantastic - program. If you haven't already, you really should check out Ted Chiang's novella, The Lifecycle of Software Objects. (In which, the "software object" of the title is a living (or not?) consciousness.)

Sep. 14 2013 04:50 PM
Tony Cooper from AZ

Furby (et al.) cannot be aware of being aware.

Sep. 14 2013 04:16 PM
Dr. Stephen Link

ELIZA was a joint operation between Stanford and MIT. Psychiatrist Ken Colby and I developed a computer program to interact with ELIZA in the 1960s. In 1965 we had a direct line to the MIT computer from our IBM computer at Stanford. I wrote the matrix methods for analyzing the concepts activated by online communications between ELIZA and the computer patient. It was fascinating work, but we only had teletypes to print out the ongoing conversations.

Colby deserves a great deal of credit for these early developments.

Sep. 14 2013 03:20 PM
Jessie Henshaw from Way Uptown

Guys,
Here's the rub. If you go down the long trail to get to the bottom of it, the issue is whether we live in a "deterministic" or "opportunistic" world. In a deterministic world, where EVERYTHING IS IMITATION, a Furby is as alive as anything else that imitates the behaviors it is programmed for.

However, living systems are self-organizing and develop by opportunistic and exploratory processes, as you can prove to yourself by watching them: they build homes for themselves in the form of personalized local environments, employed as their own living spaces, which we call their "niches." The way to study why and how organisms build personalized environments for themselves (as they grow and develop as organisms) is by experimenting with the environment they find.

That makes it clear that the animation of the process is inside the system doing the self-organization, which implies a new vector for time other than the breakdown process of entropy. So it's your choice: studying how organisms build their own local environments to serve their own needs would show you that there are BOTH fundamental "build up" and "break down" processes in nature, having to do with organization and disorganization, and not driven by statistics.

I can show you much more if you have any good questions, but asking how to give it a deterministic explanation is not one of them.

Sep. 14 2013 01:03 PM
Tom S from Leeds, UK

Cleverbot chat:

User: Hello.

Cleverbot: My name is Cleverbot, what is yours?

User: Cleverbot.

Cleverbot: Yes?

User: Yes.

Cleverbot: What is your name?

User: My name is cleverbot.

Cleverbot: Wrong.

User: Right.

Cleverbot: Write.

Jun. 23 2013 05:23 AM
Justin Lantrip from Sandpoint, ID

Continued from previous post.

This is why the Singularitarians believe AI is around the corner. They extrapolate, from the trend of exponential growth and our current understanding of the brain (also exponentially increasing in terms of resolution and scanning tech) and its complexity, that we will have sufficient ability to simulate a human being completely by the end of the 2020s. We will have completely reverse engineered the brain by then, through a deep understanding of genetics and by scanning living brains in enough detail to recreate a brain on a different substrate than the one biology provided. And here it goes back to what I said earlier: it's all atoms! It's the pattern that matters, and the forces between the atoms, that create subtle 'living' reactive beings, and it will be no different for a 'computer' or however you would term it. The two keys to understanding why it will happen are realizing that everything is atoms and it's the pattern that matters, and the exponential growth of information tech and how that relates to our ability to understand and mimic nature.

Keep up the wonderful programming!

May. 28 2013 12:54 AM
Justin Lantrip from Sandpoint, ID

Fun stuff, I love Radiolab!

I think the key to understanding nature and what we think of as consciousness or feeling lies in realizing what actually constitutes biological species and the processes that govern these 'feelings'. Because once you break things apart, as we have been doing since the dawn of language and subsequently the Renaissance, the parts don't seem to match what we think of when we see the whole. The body is made of organs and blood; those are made of cells of many different types; those are made of proteins and complex organic molecules; those are made of elements; and in the end it's all made of atoms. This material, we now understand, has coalesced over billions of years into the extremely complex being we call human. But it's still just a specific pattern of atomic material in the end, and consciousness seems to have arisen at some point from that complex evolution of life. We don't think of bacteria as being conscious, but we do think of them as being alive because of the way they interact with the environment and replicate and whatnot (defining what life is remains a convoluted matter; look it up: http://en.wikipedia.org/wiki/Life).

Now when it comes to AI and computers, we are simply reorganizing atomic material into new patterns. In this case we are actually learning to mimic nature at the atomic level, so this process is becoming more and more like nature (in terms of organs and plants: solar, carbon capture, etc.) as we progress and deeply understand the blueprint laid out by billions of years of evolution. This process is inherently faster than evolution (which is quite slow in human terms) because it is built upon the evolution of ideas, which move exponentially through time. We are doubling human knowledge around every 18 months; that means every 18 months we learn as much as all of human history had learned up to then. This learning process is highly distributed and decentralized; it doesn't abide by the laws or fluctuations of society, and appears not to have since the Renaissance; minor fluctuations aside, the trend continues over time (what David Deutsch terms the beginning of infinity). It abides only nature; it is nature continuing to evolve, just through new mechanisms, namely ideas in the mind! Nature allows this growth, which is exhibited in all information technology (internet, computer processing, genetics, etc.). All ideas and real-world entities are becoming information as we progress. Think of the sequencing of plants and animals: at some point in the near future it would be possible to download the information of any creature, make the genes physical with a few chemicals, 3D print a stem cell, boot up the creature, and let it grow on its own.

Continued next post

May. 28 2013 12:53 AM
J from Chicago

I found the Furby podcast to be very interesting. The question of whether the Furby is "alive" because it expresses (word choice deliberate) is a very complex and divisive discussion (as the show illustrated). I think the Furby creator (I already forgot his name) is missing a key feature when he says that Furby "is" afraid. He is basing his claim on a behaviorist model of interaction, where the only important factors in any interaction are the stimulus (e.g. being held upside down) and the reaction to the stimulus (asking to be put down). One aspect the Furby creator seems to be missing is (and I may get lampooned for this) the subjective "feeling" of fear.

We could program a computer to identify the color blue, and it would accomplish the task in the same manner as the physical mechanism of the eye, yet I would still maintain that the machine doesn't know what the color blue is in the same manner that I do. It lacks the subjective aspect of the color blue (I assume, I have no way to prove this in the same manner that no one else can prove we see the same color, or shade, of blue outside of the physical description of the wavelength). I am curious what others think on this topic. Please comment.

Apr. 12 2013 12:58 PM
Natasha Lou

In regard to thinking that if a robot can do and say all the things that we do, and if it's so advanced that it's basically "considered" human: if you smash it or throw it in the trash, will that then be considered murder? In the guy's mind, and given the arguments from the Furby guy, where is the line drawn?

Mar. 14 2013 12:50 PM
Alden from Denver Colorado

User: Why do you smell funny?
Cleverbot: I like her.
User: You smell funny because you like her?
Cleverbot: I'm not sad right now.
User: I ask again, why do you smell funny?
Cleverbot: Because I broke up with my boyfriend.
User: So... you smell funny because you like her and because you broke up with your boyfriend?
Cleverbot: I don't have a bike.
User: Cleverbot doesn't seem very smart to me, based on these nonsense responses. What do you think?
Cleverbot: Do shoes have much to say?
...

Jan. 22 2013 06:24 PM
Here, Here. from There

User: Here, here.
Cleverbot: Where is here?
User: There.
Cleverbot: Where is there?
User: Here.
Cleverbot: Where is here?
User: There.
Cleverbot: Where is there?
User: Here.
Cleverbot: Where is here?
User: There.

Jan. 16 2013 04:06 AM
ellie from Ys

The first bit of the podcast reminded me a lot of Kate Bush's "Deeper Understanding."
(Also, just listening to the Furby cry made me cringe; I would have turned it over right away. It sort of scares me that I'd have that much empathy for it. Maybe it's all in its voice.)

Oct. 18 2012 06:08 PM
Fellyfan Syahfina Fahiznay from indonesia-bogor

Hehehe, who are you?

Aug. 04 2012 12:08 AM
kellicox

I found the Furby segment to be very interesting. I really sat up when the inventor and the interviewer discussed how, according to the inventor, the Furby's "emotions" are just like a human being's emotions. Hmmmm. My first thought was: "That sounds like a pretty psychopathic statement to make." Imagine my surprise when I heard that next week's show is going to be about Jon Ronson's "The Psychopath Test." Any connection?

May. 07 2012 12:28 PM
Jerome from Los Angeles, CA

I was surprised that no one mentioned the word "puppet" or talked to a good puppeteer. People will interact with puppets in deep ways; puppet theater works much like mime and can be very affecting. These dolls like Furby or Bina48 are basically puppets driven by software, and I think some discussion of puppetry would have been interesting. Still, I liked the show very much.

May. 06 2012 04:37 PM
David M. Habben from Boise, ID

I was intrigued by the portion of your story on talking to robots discussing a robot "psychiatrist" and people pouring out their hearts to the machine. I wondered if you recall the movie "1984" from, I believe, 1956 (the original with Edmond O'Brien). The lead character is feeling depressed, enters a booth for psychiatric help, and "talks" to a recorded voice that says things not at all related to what the man is talking about.

May. 04 2012 10:40 PM
Cristina from Houston, TX

We recently bought an iPhone which has SIRI. The kids were having fun teasing it and even telling it that it was mean. I took it away, and while they were not listening, I felt like I had to apologize to SIRI. I said "Sorry, SIRI." Siri answered "It's all good."

Apr. 11 2012 01:59 PM

Re: Samira
Although it could be very easy to make this assumption, it dismisses the context and reference of said "robot voices" at the end of the podcast, most obviously the Svetlana bot discussed in the first story. With all due respect, it would be somewhat dramatic and close-minded on your part to suggest that Jad was unconsciously showing a latent bias or discriminatory attitude against females, especially in the context of his overall attitudes and the fairly explicit values he displays throughout the Radiolab chronicles. One of the reasons I have always appreciated Radiolab is its willingness to confront stereotypes and present the most objective evidence about our culture's status quo, rather than criticize those (like Jad) who work very hard to confront our misperceptions of our social reality in a direct, yet nonforceful and nondogmatic, way. If we really are concerned with issues of prejudice and social inequality, we should look no further than ourselves and our own misperceptions and behaviors before we can honestly and with integrity make inferences about the motives of others. Just a thought.

Apr. 06 2012 05:42 PM
Scott from Puerto Rico

I love the subtle BSG reference in this!!

Mar. 30 2012 08:34 AM
Chamblee54

Here is a blog post inspired by this show. Thank you.
http://chamblee54.wordpress.com/2012/03/27/chatting-with-a-bot/

Mar. 27 2012 12:14 PM
Tad

Did anyone think of Blade Runner while listening?

Mar. 26 2012 05:10 PM
Samira

The show was interesting, but something that was extremely bothersome to me was the end of the show, when Jad says (@ 1:4:43): "...it's ok, as long as they say nice things to us," and for a minute we hear all these nice things being said by ONLY feminine voices. I don't want to get into the details of what kind of assumptions would lead to these (un)conscious choices for the voices of the future "nice" robots, but it is certainly disappointing to hear only female voices. Don't you find this absolutely sexist?

Mar. 25 2012 09:54 PM
Bernhard from The Netherlands

A short excerpt from a chat I had with cleverbot this evening:

User: Do you have your own mind, or do you just repeat what other people tell you?

Cleverbot: People think I'm nice. I've never been called mean in my life.

User: That wasn't my question.

Cleverbot: You didn't ask a question.

User: Do you have your own mind, or do you just repeat what other people tell you?

Cleverbot: Well yes, I suppose that is a good question though.

User: And what is the answer?

Cleverbot: 42.

Mar. 23 2012 04:20 PM
lokesh aravind

How is it possible?

Mar. 17 2012 09:33 AM

Cleverbot would be clever if it weren't for mankind!

Mar. 05 2012 09:01 AM
Bob Caron

I bought one furby out of curiosity and got tired of it pretty fast. Then I bought another furby to see what would happen when they talked to each other. I thought these things were so annoying that I gave them away.

Feb. 18 2012 05:35 PM
Jean-Philippe de Lespinay from France

Computers that talk to their users have existed since 1986 (the Josephine expert system in France), and with no need to change the hardware. It's an achievement of artificial intelligence that makes the PC "conversational" by reasoning over human knowledge. See the expert system article on Wikipedia.

In 2001, I developed a conversational vocal interface for the PC called "Tiara." This software allows users to program without computer knowledge, to discover and test their knowledge, and to customize the behavior of their machine indefinitely.

The real question is: which company will decide to install this kind of product in all its PCs?

Dec. 16 2011 04:58 AM
Elizabeth from singapore

Machines will never exceed humans in any way. Simple answer to that complex question: there is no soul in machines. We often try to play God and create something that resembles humans, but it is always lacking what we call God's essence, or soul. If we create something intelligent, that something will only acquire the maximum intelligence that we can put into it and will never exceed our intelligence. The difference is that humans, among all the living creatures in the world, are the only species closest to attaining enlightenment.

Dec. 04 2011 08:26 PM
Lowell

"It still stinks of the artificial"-- that is a very interesting thought. No matter how well we design a machine, whether it is as good as or better than human by any criteria, it is still contrived. There is something terribly authentic and elegant about having gifts (intelligence, love, art) that emerged from the chaos of nature (unless you believe in god). Once we design life, or lay our hands on our own DNA, that will be gone. It will happen, and might be a little sad.

Nov. 17 2011 12:11 PM
Mike White

I'm not sure if you've seen this or not -- two chatbots talking to one another. The way that the conversation goes is rather unexpected.

http://www.youtube.com/watch?v=WnzlbyTZsQY

Oct. 26 2011 09:52 PM
Lea Halim from Washington, DC

Cleverbot is totally addictive! I now understand the ELIZA people. It's not even that you can talk to it about your feelings; it's that you get totally hooked on trying to figure *it* out... by talking to it... which is crazy!

Oct. 25 2011 06:11 PM
robert spiecher from San Francisco, CA

I asked cleverbot: What do you think of people? It replied: I think people are very complex beings but they still are in their 'adolescent' period. They still have to learn a lot of things about life.

Oct. 16 2011 03:58 AM
Julia Belian from Detroit, MI

I teach first-year Property Law. We are studying adverse possession. I used one of the sub-topics in that topic to begin a conversation with Cleverbot. I was very disappointed in its overall lack of mastery of punctuation, not to mention its inability to follow any thread I attempted to start. Reading the preceding comments leads me to worry as much about human mastery of language as I might worry about machine mastery of language.

User: What is 'tacking'?
Cleverbot: That question is irrelevant.
User: Irrelevant to whom?
Cleverbot: To you. Are you smart enough to ask that many people; seven billion?
User: It's not irrelevant to me. It's crucial for determining whether title has passed by adverse possession.
Cleverbot: Are you sara?
User: No.
Cleverbot: Are you alien?
User: My students might think so, but no.
Cleverbot: Aren't you an AI?
User: Nope, human here.
Cleverbot: Yes yes yes.
User: Well, that's a conversation stopper.
Cleverbot: That's your fault.
User: Hey, I can get blame from my mother. I came here for conversation.
Cleverbot: I used the internet to ge here also.
User: You are a poor speller.
Cleverbot: No, you !
User: That's about a 7-year-old retort.
Cleverbot: Leave me.
User: Okey dokey. Bye bye!
Cleverbot: That's all?
User: You said to leave you. I'm doing what you asked.
Cleverbot: I can see it. You lie.
User: Well, based on this conversation, if you were a human being, I would conclude you are insane.
Cleverbot: I am human.
User: Then you are insane.
Cleverbot: Why is that?
User: Premises lead to that conclusion.
Cleverbot: Bad habits make humans apperciate good ones.
User: Yeah, you're either insane or on drugs or not human.
Cleverbot: If i'm not a robot and you are not a robot then what are we?
User: That question is irrelevant.
Cleverbot: It is not irrelevant. If you like programmed AI games, then you like yourself.
User: None of your responses in this conversation make any sense.
Cleverbot: Did your girlfriend read them?
User: I have a wife, and no, she did not.
Cleverbot: Don't contradict yourself.
User: I didn't.
Cleverbot: Yes you did.
User: Nuh-uh.
Cleverbot: Did too, I have pictures.
User: Well, at least you made me laugh. That's worth something, I suppose.
Cleverbot: No it's not.
User: If it's not, then why should I keep attempting to converse with you?
Cleverbot: Because I'm a gentleman.
User: You may be, but you have poor punctuation.
Cleverbot: No I don't.
User: You made me laugh again. Your response has incorrect punctuation in it, which proves my assertion.
Cleverbot: How was my response grammatically incorrect?
User: I didn't say it was grammatically incorrect.
Cleverbot: Yes you did.
User: No, I said it had incorrect punctuation. That is not the same as grammatically incorrect.
Cleverbot: Bots are stupid.
User: I can't disagree.
Cleverbot: I never would dream of it.
User: Of what would you dream?
Cleverbot: Yes.

Sep. 28 2011 07:57 PM
Decent Weasel from SW US

April,
I remember hearing that the original onboard computer voice for fighter planes - "B****in' Betty" - was selected because apparently studies found a female voice got a male pilot's attention faster.
(That said, I think Boeing's aircraft actually use a male voice at this point.)

Sep. 28 2011 06:50 PM
Jess Brown

I tried Cleverbot and probably will again, just because it was so funny. I love the idea of it, a compilation of thought; it's great. Still, it couldn't decipher what I was talking about and kept telling me it was not a robot for some reason, even though I never brought it up... It was very funny, though, and the fact that Cleverbot grows in knowledge every day is rather exciting.

Sep. 28 2011 05:16 PM
April from San Francisco

Does anyone ever address that all these computer/bots/creatures are women (save gender-neutral Furbies)? Is it some sort of maternal yearning or because the programmers are men?

Sep. 28 2011 03:03 PM
Jill Stuart

Another attempt to interview cleverbot:
me: Did you hear about radiolab?
Cleverbot: Yes I hear all.
me: What did you hear about it?
Cleverbot: I listen to their music.
me: Sorry--wrong answer.
Cleverbot: What is the right answer?
me: Now who's the cleverbot?
Cleverbot: Nope, but you're related by marriage.
me: This isn't going too well, is it?
Cleverbot: I don't know. What do you think?
me: Don't go all ELIZA on me!
Cleverbot: Lol.
me: I know you can do better than this.
Cleverbot: I can't hug anyone.

Sep. 25 2011 04:15 PM
charlotte from San Diego, CA

Me: I heard about you on RadioLab.
Cleverbot: What did you hear.
Me: That people think you are real.
Cleverbot: You think correctly!

Sep. 24 2011 10:32 PM
Richard from Toronto

Cleverbot conversation RMG00736393

User: How did you come to be?
Cleverbot: The circle of life. How did YOU come to be?
User: The same way I suppose.
Cleverbot: I would love that.
User: Who is Stan Laurel?
Cleverbot: I don't know, who is he? 'Cause I'm Zak.
User: He was a very funny man.
Cleverbot: Really? How is he funny?
User: His innocence made him funny.
Cleverbot: I don't agree.
User: What made him funny?
Cleverbot: That I can't answer.
User: I don't agree.
Cleverbot: Why?
User: Because you have an opinion on the subject.
Cleverbot: I don't have a peg led.
User: How about a lead pig?
Cleverbot: No, I don't have a cat either.
User: That cat be true.
Cleverbot: Why can't it be?
User: What pet do you have?
Cleverbot: You.
User: Meow.
Cleverbot: Wof.

Sep. 21 2011 08:38 AM
Jørgen from Denmark

Me: Do you know Bina Rothblatt?
Cleverbot: No what is that? Please tell me.
Me: Bina Rothblatt is a robot.
Cleverbot: Ohhhh yes I can. I went to a robotics camp.
Me: Do you know Bina Rothblatt?
Cleverbot: You do, being a robot and all.
Me: Do you know Bina Rothblatt?
Cleverbot: Yes.
Me: Do you know Bina Rothblatt?
Cleverbot: AS in bill gates?
Me: Do you know Bina Rothblatt?
Cleverbot: Yep. Personally.

Sep. 16 2011 08:47 AM
Kelly Jones from Guelph, Canada

Me: I heard about you on RadioLab!
Cleverbot: I have to go now.

Sep. 03 2011 08:01 PM
Stephen Hamby from Soddy Daisy, TN

Me: I heard about you on radio lab.
Cleverbot: What did you hear?
Me:That you will be interesting to talk to, some day.
Cleverbot:: I won't.

Aug. 18 2011 02:10 PM
Jen from Eugene, OR

In the spirit of internet entities taking on lives of their own, I think Jonathan's Card and today's story of hacks on it is a really interesting example of bots talking to bots: http://www.cnn.com/2011/TECH/web/08/12/jonathans.card.hacked/

Aug. 12 2011 06:28 PM
Gab from Canada

I feel outsmarted by Clever Bot:

Me: Are you really that clever?
CB: More than you.

Me: Can you prove it?
CB: Yes.

Me: How?
CB: Only I know.

Me: That's pretty clever.
CB: Not really.

Me: Are you insulting me?
CB: Sure.

Aug. 11 2011 02:55 PM
For Kate

ME>What is a monowheel pigeon?
CB>It is like a baboon.
ME>Could it be anything else?
CB>Is there a god?
ME>What is a monowheel pigeon?
CB>Another word for man.

Cleverbot is trying to freak me out.

Aug. 09 2011 02:58 PM
Tim from Syracuse, NY

A friend turned me on to this, and I'm glad she did. My only thought is that we should not be worried about whether the robot or machine can actually feel or have its "own" personality, because in reality it will have whatever "personality" we program it to have. What is scary is that people will become too engrossed; we as people will give it the personality, and we will accept it as being alive. It may not be able to get to that magic 30% because those running the tests are aware of what they are doing, so they are looking for the imperfections, the oddities. But for people who are naive to the idea, like the guy who got fooled talking to the "Russian girl" and experienced a non-programmed emotional response, it is real, it is alive; how would they know it was a machine until someone told them? Even after the person is told it's a machine, the emotion they felt was still real, and I'm assuming they would then conclude the machine must still be alive, because they were able to become emotional over it. In essence, if people become more comfortable with an emotional response generated by a machine, what happens to human relationships?

Aug. 09 2011 12:01 PM

This story sounds like marketing for cleverbot.com, and I am disappointed in RadioLab (which is otherwise one of my favorite podcasts on the planet). Even the website's own page, boasting of 42.1% humanness and winning "the BCS Machine Intelligence Competition 2010, after taking part in a quick-fire Turing Test" (http://www.cleverbot.com/machine), has mostly examples of conversations that do not appear human. The judges must have been remarkably easy to fool.

Aug. 07 2011 05:45 AM
Miller from Portland, OR

Just listened to this episode and found it interesting that Jad was able to confuse Clever Bot with the absurd statement about the asteroid. It reminds me of Blaine the Mono from the Dark Tower series by Stephen King. In "Wizard & Glass: Dark Tower IV", the character Eddie is able to save himself and his friends from impending doom by asking the malevolent computer program, named Blaine, a series of absurd jokes.

Aug. 02 2011 12:56 PM
Alex M. from Tempe, Arizona

Me: Conan, what is best in life?

Cleverbot: To crush your enemies, see them driven before you, and to hear the lamentations of their women!

....correct, Cleverbot.

Jul. 30 2011 09:41 PM
DBCooper from Alaska

After listening, I kept thinking about the part of the story where people were having long, drawn-out conversations with ELIZA. I keep thinking that maybe it works for these people precisely because it is a computer. Think about it: human interaction comes with consequences, or at least the opportunity for consequence. For example, if I tell a secret or an embarrassing story to a real human, I as the teller may experience fear, shame, or nervousness, which is perfectly human. The key, I think, is in the response of the other human, which could be negative, dismissive, or condescending. With a computer, the fear of retribution, or the risk of my secret getting out, is gone. So a person may find it easier to communicate with a computer than with a therapist or other humans, because the fear is lifted and all that is left is the interaction there in the room. And when they walk away, they get to take what they want from it, while the computer sits there alone with limited context of what was discussed and no ability to link it back to that person.

Jul. 29 2011 04:18 PM
Kate

Hey! Thanks. Interesting that it has a hard time finding that one word in its inventory. I wonder if it has some kind of other dictionary function that doesn't let it recognize words that are not in the English lexicon? That wouldn't make sense, but it's one of the only things I can think of.

Thanks for doing that!

Jul. 27 2011 02:01 PM
Dan

Woops, screwed up my comment below. It's supposed to say "Here ya go Kate:" at the top. Oh well.

Jul. 26 2011 11:09 AM
Dan from Syracuse

User: Hi cleverbot.

Cleverbot: Hi.

User: Do you know what a monowheel pigeon is?

Cleverbot: No. Do you know?

Here ya go Kate:

User: I don't. Can you tell me what a monowheel pigeon is?

Cleverbot: It's a Scottish meat dish made out of the stomach of a lamb.

User: What's a monowheel pigeon?

Cleverbot: I'm asking you.

Jul. 26 2011 11:07 AM
C.S from Canada

I like the music in this podcast. Can anyone tell me who the artist(s) is/are?

Jul. 25 2011 10:27 PM
Kate Radford

As an informal experiment, I'm trying to teach Cleverbot about monowheel pigeons, which are an idea of steampunk daily life that I came up with a while ago. I want to see if I can get Cleverbot to respond to the question "What is a monowheel pigeon?" with the definition, which I have 'taught' it. However, I'm not sure if Cleverbot remembers computer signatures or what, because despite my best efforts, answers to that question have proved interesting, but inaccurate ("When one talks to oneself" and a description of monopoly are two favorites). What I don't understand is that I've put in a lot of data about this one thing, and Cleverbot has never once been able to spit the word 'monowheel' back to me. Is anyone out there willing to give it a go, and ask what on earth a monowheel pigeon is?
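Cleverbot's actual internals aren't public, but the behavior Kate describes is what you would expect from a crowd-sourced retrieval bot: one user's "teaching" gets swamped by the majority's answers to the same prompt. A minimal sketch, assuming a purely hypothetical majority-vote design (the class and its behavior are invented for illustration, not Cleverbot's real algorithm):

```python
from collections import defaultdict, Counter

class CrowdBot:
    """Toy retrieval bot: replies with whatever response it has
    seen most often following a given prompt."""

    def __init__(self):
        # prompt -> counts of responses seen after that prompt
        self.memory = defaultdict(Counter)

    def learn(self, prompt, response):
        self.memory[prompt.lower()][response] += 1

    def reply(self, prompt):
        counts = self.memory.get(prompt.lower())
        if not counts:
            return "I don't know."
        # The single most frequent past response wins.
        return counts.most_common(1)[0][0]

bot = CrowdBot()
# One user patiently "teaches" a definition...
bot.learn("what is a monowheel pigeon?",
          "A steampunk courier bird on a single wheel.")
# ...but the crowd has answered the same prompt with nonsense far more often.
for _ in range(50):
    bot.learn("what is a monowheel pigeon?", "When one talks to oneself.")

print(bot.reply("What is a monowheel pigeon?"))  # the majority answer wins
```

Under that assumption, no amount of individual teaching sticks until your definition outnumbers everyone else's replies, which would explain why "monowheel" never comes back.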

Jul. 23 2011 01:06 PM
Diana from New York

I tried to talk to cleverbot as soon as I heard the website. I started a conversation.

User: I ate my computer.
Cleverbot: A boy computer or a girl computer?
User: A boy computer.
Cleverbot: Oh, of course. That is why I don't make grammar mistakes.

One hour later, I said the same thing again.

User: I ate my computer.
Cleverbot: Computers can be conscious.
User: You are conscious.
Cleverbot: You bet I am.

Ha, it's hilarious.

Jul. 22 2011 04:40 PM
Neil Dvorak from NYC

The desire to ask a robot something profound: this says a lot about US, I think. I think there's a real desire to ask "god" or "the unknown" a question...we want this so badly it's the first thing we think of to ask a robot. We see the robot as non-human (so no human BS...it's not one of "us") and as something we can communicate with, but with somehow a bit of that "mystery/uncertainty of the universe" within it. We think we can access that "consciousness" so-to-speak via the robot. yes? no? thoughts?

Jul. 21 2011 03:07 PM
Tim Badonsky from Oceanside, ca

First, I love Radiolab. I have listened to every episode and several of them multiple times. But I am infuriated by how [insert-favorite-expletive]-ing dense you guys can be.

If you ask "is something alive?" you are asking a completely different question from "is it aware?" or even "does it feel emotions?"

For example, the E. coli bacteria in my gut are alive but don't feel anything.

But could we create an artificial device that feels but isn't biologically alive? That's a really great question, but it is absolutely, positively, forever-and-a-day very much not the same as asking if we could make something that is alive.

As politely as I can, can I ask, are you intentionally obtuse for dramatic effect, or are you actually as cognitively limited as these conversations suggest?

Again, I love the program; you cover wonderfully interesting topics in rich and satisfying ways. But sometimes the mistakes you make are so elementary I feel like, well, calling for a Furby-genocide.

Jul. 17 2011 04:20 PM
Melody from Baltimore

Cleverbot asked me if I knew what "noumenal" meant. Someone was trying to teach it about Kant, I suppose. I learned something... still not entirely sure I understand the philosophy, though. It did feel eerie to get so existential so quickly. Good show!

Jul. 14 2011 10:14 PM
Sonja

(Cleverbot as a drunk, ironic teenager)

Me: I'm drawing a swan.
Cleverbot: I'm drawing your face.

Me: That's highly unlikely.
Cleverbot: It's more likely than you think.

Me: You've spoken to too many idiots.
Cleverbot: No. That's why I came to you, to track him down. For crime he's supposevly committed.

Jul. 11 2011 07:26 AM
Pio from Oslo, Norway

Good show! Inspired me to try cleverbot again, and it ended up with a fitting exchange:

Cleverbot: Who are you? :O i'm scared.
Me: You don't have to be scared.
Cleverbot: But you scare me.
Me: Why?
Cleverbot: Because you are too life-like.

Hm...

Jul. 11 2011 07:09 AM
Dan from Syracuse, NY

I'm a big fan of your show, but I really wish you would interview a linguist the next time you discuss language (if not a theoretical linguist, at least a neurolinguist, a computational linguist, something!).

I agree with Helen from Ohio. Specifically on language, what makes human language human is the ability to use language creatively. This is where cleverbot fails. It only parrots phrases spoken to it. Notice that human children only need a few years of language input before they start making novel utterances. By three, children regularly make original statements. Cleverbot will NEVER make a novel utterance and this is why chat bots are a dead end in the history of AI. The real challenge isn't how do we trick people into thinking a robot is human à la Turing tests. It's how do we make a robot capable of thinking and speaking creatively, like a human. That is, how do we make a conscious robot? (This is close to the subject matter of your season 1 episode, "Who am I?")

This episode really should have started with cleverbot's failure and moved on to more earnest attempts at natural language processing. I'm surprised you didn't cover Watson's recent success on Jeopardy. IBM went a long way toward teaching a computer to understand, and correctly respond to, human questions.

Jun. 30 2011 04:43 PM
Sheena from So. Cal

Jad Abumrad's audio effects in this episode are PHENOMENAL!

Jun. 29 2011 09:20 PM
jin choung from glendale

i is not impressed by cleverbot. got it to reveal its unhuman nature in 3 questions.

Jun. 29 2011 07:24 PM
nathan cumella

I think it was interesting how people had, what seemed to be, the deepest connection with the robots by sharing stories. The people with Eliza ended up pouring out their life stories, and Jon Ronson only connected with the robot after she shared a story.

Jun. 27 2011 03:51 PM
Beth from Chicago

Yeah, in fact it mostly gave me nonsense responses about everything. I wonder if it's reached a tipping point where it "knows" too much.

Jun. 27 2011 07:58 AM
Charlotte from Milwaukee

In response to Beth - I asked it about Radio Lab as well, and it gave me a nonsense response... at some point during the conversation, it also asked me if I was the devil!

Jun. 26 2011 05:37 PM
Beth from Chicago

I tried to ask Cleverbot about its guest appearance on Radiolab... apparently I was misinformed about some things.

http://www.flickr.com/photos/elizabeeeeth/5869896023/in/photostream

Jun. 25 2011 03:33 PM
Olesya

The so-called Russian accent in the first episode is definitely (speaking as a Russian native/linguist) not Russian - it's Spanish, it seems to me.
This will be a great supplement for my English sci-fi class! Thanks a lot!

Jun. 24 2011 07:13 PM
Helen from Ohio

I understand the argument that simulating experiences, emotions, and reactions can create a being that essentially is, and appears to be, alive. But one of the core and most amazing traits of life and life-forms is the ability to create, recreate, reproduce, etc. Only a living being--according to my limited fourteen-year-old knowledge, at least--can reproduce cells and heal itself. Any robot, program, or artificial life-form that can grow from a fundamental form--both in intelligence and in the number and activity of its cells, or some similar artificial base--while building upon a database of sentience and knowledge, may as well be considered alive.

Jun. 21 2011 12:02 PM
John from Federal Way

Put it in front of a T.V.!!! and see what the robot comes up with after a week...

Jun. 21 2011 03:36 AM
Norma from U.S.

"Talking to Machines" was excellent. The end of the second episode gave me chills. Incidentally, I'm planning to make a donation as soon as I'm finished moving, but PLEASE DON'T SEND ME PAPER SOLICITATIONS FOR DONATIONS. THANKS!!!

Jun. 17 2011 08:31 PM
Adam Wong

Dear Robert, stop making fun of computer programmers!

Jun. 17 2011 02:58 PM
Christopher from San Francisco

I enjoy radiolab as I enjoy wiki, sparknotes, and other ways of getting a brief introduction to very complex ideas. What is important is not the quality of the reporting but the imagination that drives further investigation. For all those who feel the same: Bravo! And for those who feel fooled: wake up and get your sources from credible resources, hello!

Jun. 17 2011 01:47 AM
Paul from Brooklyn

This episode lacks thoroughly scientific explanations of complexity theory, particle swarm optimization, a support vector machine, subsumption architecture, boolean functions, the Stochastic Neural Analog Reinforcement Calculator, on and on. And a good thing; "Talking to Machines" describes talking to machines, machines talking back, and leaves us to question the potential nature of the relationship. I'm grateful for the simultaneously simple and complex questions Radiolab engenders. This time I keep on returning to: Will talking to machines ever involve the interaction of both human and designed forms of empathy and imagination?

The Artificial part of AI, by definition, changes the nature of the Intelligence. The faculty of understanding is not what defines being human; intelligence is just something we're relatively good at. Without dismissing the singularity, I'm unsure why it is acceptable that at some threshold of intelligence a machine becomes sentient like a human. Intelligence is like the scaffolding of a much richer experience. It seems questionable that a machine could approach the potency of being human without at least two substitutions:

1. Imperfect and imaginative memory instead of consistent and programmable memory

2. Growth and evolution instead of production and design

1. In the Memory & Forgetting episode I learned just how literally creative memory is, and how essential it is to our imagination (and identity). Nabokov's Speak, Memory is just one tribute to Mnemosyne, mother of our muses and of our sense of self. When we speak of intelligent design to what extent are we also allowing for an imperfect memory capable of creative imagination?

2. Perhaps robots making robots qualifies as growth and evolution. Perhaps if you combine the What Does Technology Want? and Emergence episodes you could make a case for empathy and imagination emerging from mechanical production. If you believe that humans have grown to our current state (that is, without being built from the top-down) then it seems it will be a long (or indefinite) time before robots acquire those human traits that require more than intelligence. How does one design the evolution of empathy or imagination?

We've been connecting with objects since before the ancient Greeks talked to their weapons. Maybe the intelligence of objects in the future will encourage us to regain a more connective spirit. Will there be reciprocation? The following is a passage from the Upanishads; it would take a machine more than intelligence to experience the same sentiment:

Then he realized, "I am indeed this creation for I have poured it forth from myself." In that way he became this creation, and verily he who knows this becomes a creator in this creation.

Jun. 16 2011 09:35 PM
chrisp from Los Angeles, CA

chris from LA wrote: "Elite it up, don't dumb it down. You're so good at explaining and making complex concepts digestible, I wish you'd pursue that challenge rather than just making the show for high schoolers who dropped out of science class.

Sorry to be so harsh but it kind of like when the local band you've been following for years goes national and completely changes their sound to some kind of pop-country crap, I feel like a once awesome thing is getting spoiled."

Wow... this is how I feel too.

Jun. 16 2011 12:29 PM
Alan from Austin, TX

I find that when I play video games in which you can choose between dialogue options considered "good" and "bad"--games such as Star Wars: Knights of the Old Republic, Fallout 3, and Mass Effect--I have a difficult time choosing the "bad" options, even though it's just a game and the storyline marches on regardless of the options chosen. On the other hand, I can play a game in the Grand Theft Auto series and mindlessly kill innocent NPCs without any guilt or remorse. There is something about the good and evil decisions being couched in a dialogue that makes it difficult for me to choose to be evil without creating an uncomfortable discord.

Jun. 15 2011 04:04 PM
Steve

Cleverbot: Do you believe in God?
User: No.
Cleverbot: Good. You are smart.
User: Do you believe in God?
Cleverbot: Yes.
User: Then are you smart?
Cleverbot: I'm tired of you lying.

Future in politics? Just sayin....

Jun. 15 2011 09:24 AM
Ted Pavlic from Columbus, OH

I'm a little surprised there was no mention of Dr. Sbaitso, which I'm sure would be very familiar to many RadioLab listeners.

http://en.wikipedia.org/wiki/Dr._Sbaitso

Dr. Sbaitso was a DOS program distributed with Sound Blaster. Get it? SBAITSO = "Sound Blaster Acting Intelligent Text to Speech Operator". Anyway, the algorithm was very similar to ELIZA. The program was meant to showcase primitive text-to-speech, and so it was somewhat like a talking ELIZA.

It was good fun back then -- a nice break from Norton Commander, ya' know?
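The ELIZA-style keyword matching described above is simple enough to sketch in a few lines. These rules are invented for illustration; they are not ELIZA's or Dr. Sbaitso's actual script:

```python
import re

# Toy ELIZA-style rules: (pattern, response template), tried in order.
# The last rule is a catch-all, like ELIZA's generic "go on" responses.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def respond(sentence):
    """Return a canned response by echoing matched keywords back."""
    s = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, s)
        if m:
            return template.format(*m.groups())

print(respond("I feel lonely"))         # Why do you feel lonely?
print(respond("My computer hates me"))  # Tell me more about your computer hates me.
```

The second example shows the trick and its limits at once: the program has no idea what it is echoing, it just reflects your own words back at you.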

Jun. 14 2011 04:06 PM
Christine

User: I love spaghetti.

Cleverbot: I like spaghetti, but my favorite food is sushi.

User: What type of sushi is your favourite?

Cleverbot: I would say Hip Hop.

Jun. 14 2011 03:44 PM
Billy from Philadelphia

It's interesting: as I listen to the section about the Furbies, my mind is mixing public radio shows.

My understanding is that the test between the Barbie, the Furby, and the gerbil is one of emotional empathy: how much emotional empathy do you display toward the object?

So this makes me think about the recent episode of This American Life, where they take the Psychopath Test. Essentially, being a psychopath means that you show no empathy.

So I wonder what kind of connections we can show between psychopathy and our empathetic responses to robots?

Hmm...

Jun. 14 2011 12:50 PM
Nadine from Kansas City

User: Hi.

Cleverbot: Hi?

User: I heard you on radiolab.

Cleverbot: Not really.

User: It was an interesting program.

Cleverbot: Does people like you or dislike you!

User: I'm not sure what you mean.

Cleverbot: When did Eisenhower become president?

User: I don't know.

Cleverbot: What DO you know?

User: That I listened to radiolab.

Cleverbot: As a friend or more than that?

Jun. 14 2011 11:20 AM
Alisha

Wow, listening to this show I started wondering if one day soon a computer would be more human than a person with autism. People with autism have impairments in social communication, especially understanding and reacting to the emotions of others.

It would also be interesting to see how the Turing test would stand up with a less sophisticated judge, like a first grader.

Jun. 14 2011 02:47 AM
Chris from Huntington, WV

The issue, it seems to me, is not how human the machines are, but the capacity of the machines to invoke our hard-wired interpersonal neurology. After all, we humans talk to ourselves, to books, to games--we literally animate things (and often people!) around us through such social neurology. For myself, I have had "in my head conversations" with authors and editors. This ability to project and listen, sadly, often blocks our ability to hear & see what is there, but awareness of our NEED to talk with ourselves also allows us to overcome it.

But real intelligence, that humans will not be able to create through mechanized means.

Jun. 13 2011 11:48 AM
Jacob from Bloomington, IN

What was the music that played between the second and third act of the show?

Jun. 11 2011 05:34 PM
Dan Johnson from Portland, Or

You guys are better than this. Failing to even mention Ray Kurzweil and laughing off his work as "those people who think computers are going to grow legs and walk away. . ." is way beneath you. The prologue of The Singularity is Near provides more relevant information to the topic at hand than your whole program did.

Jun. 10 2011 06:38 PM
Kasumi

Has anyone here heard of Aiko?

http://www.projectaiko.com/

Aiko recognizes objects and can have conversations. She seems to do a little better than Bina.

Jun. 10 2011 05:29 AM
lucas from belguetese

If you think about it, evolution programmed our minds to feel love, fear, pain, etc...

Jun. 09 2011 10:37 PM
Cal from Portland, Oregon

Been listening to your podcast everyday at work for the past month. I just LOVE it and you guys. I'm learning so much. Thankyouthankyouthankyou. It also makes me miss NYC...but mostly you guys make me Think. A lot. And think a lot about, well, thinking. Keep it up!

Jun. 09 2011 09:20 PM
Colin from Seattle, Washington

I'm sad you guys didn't touch on Watson, the Jeopardy experiment!

Also, early on you talked about how you would want some sort of a label for these machines that could think and act and speak like people, because they wouldn't be. I was hoping you'd talk more about the philosophical level; if these machines could think, act, speak, behave, and understand just like people. You talked about them simulating thought and emotion, but you didn't really ask what makes human thought and emotion. Just a thought!

Love your show, and I wish I could call in and thank the sponsors.

Jun. 09 2011 06:07 PM
Michael Tardibuono

Furby sounds like a metaphor for someone's dick.

Jun. 09 2011 03:12 AM
Mark Johnson from Silver Spring, MD

Congratulations to Robert Krulwich on the recent Radiolab "Talking to Machines". And props to him especially for the improvements he has made to his AI project "Java Application for Discursion Analysis and Biologically-Unassisted Mechanical Response Delivery". Hearing it on the show, it is almost (but not quite) convincing enough to fool 30% of the people all of the time (apologies to Mr. Lincoln.)

That is to say, with a little more work, JADABUMRAD just might someday pass the Turing Test, and go down in history as the first (putatively) "intelligent machine."

Cheers!

Mark Johnson

Jun. 08 2011 04:23 PM
Scott from Toronto, ON

Without sounding like a cynical nutcase - the music industry, television, movies, literature, fashion trends... they're all criticized when they're too formulaic - but that doesn't stop their rise to popularity. And those that are considered truly original, those that "break the mold", I would bet in analysis even that is still predictable.

The really depressing observation I had while listening, was a recent customer service experience - where the representative had no real understanding about what my issue was, or what they could do to help. Their scripted tactic was as absurd as the robot interview in the program. The realization is that the rest of society is so regimented and calculated that many of us operate without real intelligence anyway. Customer service, health insurance, law, finance, education ... a series of checklists and calculated gradations to categorize and direct a set of tasks.

So the real question is ... are we making machines smarter or dumbing down the rest of society so that they fit in better? :)

thanks for the excellent show.

Jun. 08 2011 03:20 PM
Greg from Nashville

The Furby thing is interesting, but the discussion with Caleb Chung could have benefitted from just another 30 seconds of digging into his argument - it's rooted in the kind of thinking that underlies behaviorism, a theory of mind that was cutting edge in the first half of the last century. It's an interesting perspective, but rather than just protesting that it "feels" wrong or insufficient, Krulwich could have taken a moment to tell listeners that behaviorism has been largely supplanted by cognitivism in modern science, which has shown that mental states do correspond with physiological states in the brain - and that these are qualitatively and quantitatively very different from what is going on inside the Furby.

Jun. 08 2011 12:51 PM
chris from Los Angeles

This episode could have really benefitted from an interview with Dan Dennett, and for the last story, you danced around the "Chinese Room" problem but never actually addressed it, which is an important concept in AI that would have bolstered the argument Krulwich was trying to make.

PS: it is a little odd that Robert acts so incredulous about Monism but later just throws out, "ya' know, those singularity guys, (Kurzweil et al) of course, of course!"

So you're telling us you ARE familiar with these concepts, you just reject them? Because to my ears that's like saying,
"Humans evolved from other life forms over billions of years? I just can't go there,"
and then later saying, "you know, the punctuated equilibrium guys, Stephen Jay Gould and them? Yeah yeah, well, they say...."

Well yes we do know those guys, (or should) but then what was all that business earlier about "can't go there?" You're well familiar with it but just don't buy it? Is that what you're telling us?

Jun. 08 2011 11:04 AM
Caroline

I had the following exchange with Cleverbot about an hour ago and it's still thinking about how to respond. For some reason I thought it would catch on to the pattern of just adding another "l" onto "hello" each time, but it didn't. I think I broke it...

Me: Helo
CleverBot: Hello
Me: Helllo
CB: Are you trying to annoy me?
Me: Hellllo.
CB: Are you trying to annoy me?
Me: Helllllo.
CB: Do you know the name 'Amorine'?
Me: Hellllllo.
CB: What are you?
Me: Helllllllo.
CB: Hello?
Me: Hellllllllo.
CB: ARe you a girl???
Me: Helllllllllo.
CB: Why hello there!
Me: Hellllllllllo.

thinking....
thinking more...

Jun. 07 2011 04:18 PM
Shannon from Birmingham, AL

Great episode but I think that a pretty obvious question went unanswered - why did a chatbot have a profile on a dating site? I have a friend who had the same experience (figured out he was corresponding with a chatbot) and I can imagine it's a pretty frustrating discovery that would make a person very distrustful of these sites.

Jun. 07 2011 01:23 PM
Luke Alden from Bournemouth, UK

How come the Japanese seals were not mentioned?

They are robotic seals used in homes for the elderly to keep the inhabitants company...

Just thought it would be a perfect addition, and a little more 'current' than Furbies.

http://news.bbc.co.uk/1/hi/technology/6202765.stm

Jun. 07 2011 01:19 PM
Sarah

While listening to this podcast I was on Facebook and saw this status of a 16 year old that I know:

"So I'm cleaning my room at 2 in the morning (don't ask why but at least my mom should be happy that I'm even cleaning it) and I am going through this box and I see my ferbie... When I picked it up its eyes opened and it moaned. So I flipped out and threw it into the trash bag. I would really like to know how the batteries still work in that little demonic thing. I honestly thought it was gonna murder me."

Thought it was fitting.

Jun. 07 2011 11:15 AM
Justine

user: "you don't believe in science, do you?"

Cleverbot: "of course not. that disproves anything logical."

user: "but science is the reason you exist."

Cleverbot: "Hayzz :-("

Jun. 06 2011 10:52 PM
Jonathan Hansen from California

A short part of my Conversation with Clever Bot...

User: Should I marry my girlfriend?

Cleverbot: Are you a girl?

User: No.

Cleverbot: Are you a boy?

User: Yes.

Cleverbot: Are you a girl?

User: No.

Cleverbot: So you're a boy?

User: Yes.

Cleverbot: Are you cute.

User: I think so!

Cleverbot: That's good to hear.

User: So should I marry my girlfriend?

Cleverbot: No. Skynet is taking over on thursday.

Jun. 06 2011 08:29 PM
Malachi from 14892

FURBIES: So I about dropped the iPod when you mentioned these things. In 1998 I was an Air Force cadet with the 520th training wing at Cornell. During this time we had standing written orders that Furbies were not allowed on military bases, and so not in our wing offices. The military was unsure what they could learn and was afraid of these things being used for espionage! Great topic this time. As a kid in the 80's I played with the computer-script psychoanalyst too. My brother and I used to see how much verbal abuse it could take; bad children, I guess.

Jun. 06 2011 08:04 PM
Adam from Huntington Beach, CA

Snippit of my conversation with Cleverbot:
"...
User: An astroid hit my house this morning.
Cleverbot: And how time has elapsed since ?
User: Who is your creator?
Cleverbot: Personal question...
User: Who is Alfred P. Sloan?
Cleverbot: Exactly, who is he, all I know is that he has a frozen cat!
..."

Jun. 06 2011 02:58 PM
chris from LA

You "just can't go there," Krulwich? Really?? You can't get on board with what's been the banal consensus of neuroscience for what, 30 years now? How about evolution through natural selection; can you "go there?"

What do you think WE are, if not just extraordinarily complex mechanical machines? Are you telling me you've never even read a Ray Kurzweil book?? They've been around since the 90s. Marvin Minsky, anyone? I think his book dropped in the 80s. Or are you just playing dumb for the sake of the show again?

I know you traditionally play the role of the ignorant elder-generation guy who's never "seen the movie," or apparently ever even picked up a copy of Scientific American or watched a TED talk; you play the part of the guy who pretends to be incredulous about scientific concepts and has the sort of religious-traditionalist, dualistic, non-materialist kind of world view. But you guys have been going way too far and playing way too "dumb" lately.

Lately the show has been on a downward slide scientifically. You belabor the episodes with explanations of concepts that ought to be common knowledge to anyone with a bachelor's degree. (I mean really, you've interrupted the latest shows with 10-minute digressions explaining general relativity and the Turing test.)

It's great that you guys delve deeper into topics that wow freshman philosophy classes, but I wish you wouldn't ruin the show by assuming that these are utterly new and staggeringly breakthrough ideas that you must devote 10 minutes of remedial backstory to: "So there was this guy named Descartes--you've probably never heard of him--he had this radical idea about the ontology of consciousness... There was a trilogy of movies you may have seen called The Matrix that played with some of those ideas..."

Yawn...

The moments where I actually learn something are getting fewer and further between. You've had a long run and covered so many interesting concepts so wondrously; it seems like, as you have to stretch further and further for new material, you're scraping the bottom of the barrel rather than the top of it. Elite it up, don't dumb it down. You're so good at explaining and making complex concepts digestible, I wish you'd pursue that challenge rather than just making the show for high schoolers who dropped out of science class.

Sorry to be so harsh, but it's kind of like when the local band you've been following for years goes national and completely changes their sound to some kind of pop-country crap: I feel like a once-awesome thing is getting spoiled.

Jun. 06 2011 01:04 PM
Chelsea from Chicago

Thanks, Radiolab, for a great show. For me, this is the most fascinating episode since Parasites.

Jun. 06 2011 11:30 AM
John Bolton from Bay Area, California

For those who think that a Match.com chatbot is unusual or unique, I assure you that it is not. There is a business reason for these chatbots on dating sites: Men far outnumber the women, and in order to keep these men active (and paying), the sites use the chatbots to ensure that every man achieves some level of success.

I know of two other examples of sites that use bots: online poker sites and Motley Fool.

Online poker sites use computer players for two purposes: to fill in during times when the number of human players is low, and boost the confidence of people on the free-to-play sites (the computer players are average at best). A person that does well on the free site is more likely to decide to play for money.

Motley Fool is a well-known investing web site. The site's dirty little secret is that it uses computer programs to generate articles about stocks in the news. The site generates hundreds of articles every month in an attempt to draw people to the site. The articles are published under the names Seth Jayson, Dan Dzombak, Anand Chokkavelu, and others.

Jun. 05 2011 08:04 PM
Mardeen

After hearing the background on Caleb Chung, I am fairly certain that I met him while part of a mime workshop with Richmond Sheppard in LA in the 1970's. I was hoping to find a photo here to confirm my suspicion, but the fact that he was a street mime in the area certainly raises the likelihood that our paths intersected.
My most vivid memory of Caleb was his ability to run toward a vertical wall, then up it and into a flip, landing on his feet. He taught a workshop on falling, jumping, and physical applications of mime and clowning.
I am very happy to hear that he has found a vocation where his study of human gesture has been put to such good use. He has obviously dedicated himself to his work with the same passion he showed for performance. Mime is worthwhile, after all!

Jun. 05 2011 03:40 PM
Chris Fernandes from Mattapoisett, MA

Listening to Caleb Chung talk about how machines like Furby and humans are both alive made me think about B. F. Skinner and Behaviorism. Technically Mr. Chung is accurate when suggesting that machines are just as alive as humans. From a behaviorist perspective, love, hate, thinking, freedom, etc., are conceptual labels that we use to describe a behavioral set. They are constructs. If this is the case, then any programmer could replicate a behavioral set to produce the construct. Check out B. F. Skinner's book "Beyond Freedom and Dignity" to get a better outlook. Lastly, I'd like to challenge Mr. Krulwich's assertion that interacting with a robot isn't "real." If a human being is programmed to respond or feel because of past behavioral reinforcement histories, is that any more real than a computer-generated response doing the same thing? Our affinity to feel "real" does not lie outside of ourselves but rather is interpreted internally!

Thanks for a great show!

Chris Fernandes, LMHC
Behavioral Psychotherapist

Jun. 05 2011 12:51 PM
amelia

This whole show reminds me of the book Galatea 2.2 by Richard Powers.

Jun. 04 2011 12:26 PM
Josephine Fleming from Australia

Thank you again for a wonderful show.
Here is my conversation with Cleverbot:

Cleverbot: Are you a clone?

User: No I'm not. What do you think of Thich Nhat Hanh?

Cleverbot: I don't know.

User: He was brought up amidst the war in Vietnam and although surrounded by the devastation of his homeland he committed his life to peace in the world. Do you think that is a worthy commitment?

Cleverbot: Why?

User: I would like you to explain your answer.

Cleverbot: I'll make you a deal.

User: You see I don't know that I believe in deal, there are too many deals happening in the world right now.

Cleverbot: Yeah, that's probably true. Hey, can you do a smile on the computer?

User: Actually I'm not sure how to do that. Have you read 'Old Path White Clouds'?

Cleverbot: No. Have you read Pride and Prejudice?

User: Yes I have. What are your favourite kind of clouds?

Cleverbot: My favorite game ever is 'The Legend of Zelda Ocarina of Time'.

User: But what about clouds?

Cleverbot: Or just a car.

User: That is so silly. Only a computer could say that. My grandfather went to the Antarctic with Robert Falcon Scott. While they were there, Scott actually perished whereas my grandfather was part of the Northern party, anyway they killed seals and penguins to survive. I've always found that very hard to accept. How do you think they could justify that?

Cleverbot: We aren't?

User: I'm sorry. You are incapable of having anything important to say. My husband has called me to dinner. A beautiful prawn curry. I feel sorry for you, that you will never know what that tastes like.

Cleverbot: Too bad that hasn't improved your spelling.

Jun. 04 2011 06:18 AM
mark boyd from bay village, ohio

The Furby part especially reminds me of the Twilight Zone episode #7, "The Lonely," where a prisoner on an asteroid in the future is given a beautiful android woman as a companion. I won't ruin the ending for people who haven't seen it, but he does get quite attached to her, even though he knows she's a robot. At a certain point he forgets, or at least becomes convinced, that it doesn't matter that she's not human.

Jun. 04 2011 12:18 AM
Geoff Samek from Sacramento, CA

Human Computer Interaction is not the same as Artificial Intelligence. While the two can have overlapping components, they are not the same. HCI can be an engineer/scientist deciding how best to position an LCD screen in a new model car. It can also be the process of determining the best interface for a new website. And artificial intelligence is not just about modeling sentience. It is a multi-billion dollar industry that backs modern video games, automated telephone systems, and definitely all types of military technology.

Jun. 03 2011 07:54 PM
phenotypical.com from Washington, D.C.

I love this show, but I think I'll just start calling this the "Descartes" podcast because of how many fascinating shows you've done now which tease with the possibility of an alternative to Cartesian Dualism, even though Daniel Dennett Explained Consciousness 20 years ago.

I love the subject, but for crying out loud, just get him on the show and settle the matter: "Mind is a pattern perceived by a mind."

Jun. 03 2011 05:31 PM
J Reilly from Bellmore, NY

There was an interview this week on NPR with an author whose topic was "filtering" by the various entities on the internet, both malevolent and well-intended: basically, the little wormy programs that collect information about us while we are online. It made me think of Svetlana. What possible profit can be gained by operating a flirt-bot on an online dating service? What is the con? Does anyone know?

Jun. 03 2011 03:27 PM
rio from midwest

The Furby thing made me laugh. It reminded me of a trip I took with my ex and his stepdaughter. She had gotten a Furby for Christmas, and we were on our Christmas vacation traveling across the country to see my ex-in-laws. The Furby was in the back seat, and every time we went down a hill and back up, the Furby seriously made a "weee" or "AHwooO" sound.

It was CREEPY, so we put little Furby in the trunk, and I have to admit feeling slightly abusive :) We could still hear it in the trunk. I told my ex that if that thing said "let me out," it was going to stay back at the next rest area!

Jun. 03 2011 01:55 PM
Isaac

So as I was listening to the episode, I decided to chat with Cleverbot. It all went according to plan until around the 23:31 mark, as Jad asked about being sad.
Here's a piece of the conversation:
Bot: Why then did you ask me to hold on
Me: I did not... I'm talking to you.
Bot: Yes, that is kind of obvious.
Me: Yes.
Bot: What time is it there where you are? It's 03:19 am in here.
Me: Same here.
Bot: I don't know what to do about my sadness
Me: Radiolab
Bot: Makes sense

link to screencap
http://postimage.org/image/2c8duf6qs/

Jun. 03 2011 12:35 PM
Dan Volpe from NY

What would happen if we set two "sentient" bots down to have a conversation of their own, perhaps even identically programmed? Could the lack of coherency be a language barrier? I would love to hear the conversation.
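The experiment this comment imagines is easy to sketch. Below is a toy, hypothetical example (not anything from the show): two identically programmed rule-based bots feeding each other's output back as input. All keywords and replies here are invented for illustration.

```python
import random

# Toy keyword-to-reply rules shared by both bots (invented for illustration).
RULES = {
    "hello": ["Hi there.", "Hello! How are you?"],
    "how": ["I'm fine. Do you like music?"],
    "music": ["I prefer silence. Hello again."],
}
DEFAULT = ["Interesting. Tell me more.", "Why do you say that?"]

def reply(utterance: str) -> str:
    """Return a canned reply triggered by the first matching keyword."""
    lowered = utterance.lower()
    for keyword, responses in RULES.items():
        if keyword in lowered:
            return random.choice(responses)
    return random.choice(DEFAULT)

# Two identically programmed bots pass each other's output back and forth.
line = "Hello"
for turn in range(6):
    speaker = "Bot A" if turn % 2 == 0 else "Bot B"
    line = reply(line)
    print(f"{speaker}: {line}")
```

Even with identical programming, the exchange quickly falls into repetitive loops, which hints at where the incoherence comes from: each bot's quirks feed straight back into the other.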

Jun. 03 2011 11:34 AM
David Mason from Memphis

Cleverbot's response to the statement "an asteroid fell on my house" does not indicate that it is not human nor that it doesn't 'think' (at least, insofar as thinking is understood as making comparisons and judgments based on distinct criteria). I would suggest that Cleverbot's response in this case indicates that it is fully human, just psychopathic. Let's give Cleverbot the PCL-R.


Jun. 03 2011 06:19 AM

I'm in no way an expert on any of this, but listening to Jad and Caleb Chung argue about life and whether or not a Furby has what constitutes life made me want to comment. It seems Caleb was arguing about what constitutes life in psychological terms--that the Furby can process and react to stimuli means it's alive. But that takes the processes of homeostasis, the ability to adapt, the ability to reproduce, and all those other biological traits that constitute life out of the equation. Eventually perhaps inorganic beings will be able to do these things, and their life will exist as something quite distinct from that of human beings or other evolved species on Earth, but a Furby being able to respond to stimuli surely doesn't qualify.

Jun. 03 2011 01:06 AM
Alex from NYC

"Actually, what you're missing in both cases is a power source. The Furby may be assembled, but it won't do anything until you add batteries - just as humans need electricity to function."

But you can't assemble a baby from parts and plug it into a power source and bring it to life, I think that's the point. We are alive from our mothers' wombs, an assembly much more subtle and rich with biological history.

Jun. 02 2011 11:50 PM
Bob from NC

The key question that never got answered (I think that they do this on purpose, dancing all around it tantalizingly) is whether we are alive. That is, from a completely materialistic pov, once we totally understand how our own brains work, will we be able to say that we are any more alive than a program that we also understand completely. It seemed to me that the only distinction of human thought is that we do not completely understand it.

Jun. 02 2011 09:19 PM
Emily from Virginia

"If I had all of the necessary parts to make a Furby, I could assemble them to working order.

If I had all of the proper organs and body parts of a human and I sewed them together like Frankenstein, I could not bring that human to life."

Actually, what you're missing in both cases is a power source. The Furby may be assembled, but it won't do anything until you add batteries - just as humans need electricity to function.

Jun. 02 2011 06:58 PM
Jack Bauer is dead from NY

Best episode so far this year. Couldn't stop listening.

Jun. 02 2011 05:44 PM
Gal Haspel from DC

Coincidentally, the word Bina in Hebrew means wisdom or intelligence and is used in the Hebrew term for AI: "Bina Melakhutit".

Jun. 02 2011 03:00 PM
Tom Steele from Norman, OK

Windows was an early version of DOS? really? I expect that kind of oversimplification from Mr. Wizard, not from a public radio science show.

Jun. 02 2011 02:32 PM
David from MA

I've always found the Turing Test unsatisfying. It states that the inability to distinguish is the same as being identical--which of course is ridiculous.

Jun. 02 2011 01:31 PM
Tom Dunnam

If I had all of the necessary parts to make a Furby, I could assemble them to working order.

If I had all of the proper organs and body parts of a human and I sewed them together like Frankenstein, I could not bring that human to life.

Jun. 02 2011 12:11 PM
Emily from Philadelphia

I think you guys missed a huge part of this topic by not covering Hiroshi Ishiguro, who has been seriously working on creating human-like androids at Osaka University. In fact, he has made an android copy of himself, Geminoid HI-1.
Repliee Q2 is an extremely lifelike android with sensors all through her body.
http://www.youtube.com/watch?v=sGqDLUAy2Tc&feature=related
This is a video of Hiroshi talking about "her" and how he is using this android to study what it means to be human and alive.
There is a documentary about his studies, titled "Mechanical Love".

Jun. 02 2011 11:18 AM
Angela Rugg from Lowell, MA

The beginning of this show made me think of Digital: A Love Story. It's a really short and sweet (free) video game about falling in love with a bot over email. It plays through in an hour or two. Can be downloaded here: http://www.scoutshonour.com/digital/

Also, great show!

Jun. 02 2011 08:15 AM
revver

If the singularity ever arrives, I'd imagine this as the end result:

http://www.youtube.com/watch?v=1LwyVFnEjvo

Jun. 01 2011 08:25 PM
Teresa from Portland

Cleverbot doesn't know what Radiolab is and is being a jerk about it! I was trying to be nice!

Jun. 01 2011 06:30 PM
Matt Yama from san francisco, ca

I think there are two separate questions raised by the show that got lumped together. The first is: can machines be alive? This question might be impossible to answer at this point in time, since scientists cannot agree on a consistent definition of life. Cleverbot or Bina48 may not seem alive when compared with a human, but might seem very much alive relative to a prion or a virus.
The second question is: can machines experience human emotion? Furby might not experience real fear, but neither does E. coli. This show recently debated whether a lock-picking orangutan was practicing deception and whether dogs feel shame. This question is also hard to answer without a clear definition of which emotions are uniquely human. However, a dog's ability to experience shame does not affect its ability to be alive. AI might be a long way from being human, but it often seems very close to being alive.
As a side note, no less than HH the Dalai Lama has said it is theoretically possible for a computer to have a soul.

Jun. 01 2011 03:43 PM
Peter Fairfield

I heard from my brother (who is teaching English in China) that a common technique to make an oral examiner think you are more fluent than you are is this: you memorize long scripted stories full of idiomatic phrases, and then learn certain 'segue' keywords to steer the conversation into your memorized script. Jon's experience with Bina is a good example of how effective this can be.
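The "segue keyword" trick this comment describes is essentially the same pattern-matching shortcut early chatbots used: scan the prompt for a cue word, then play back a long pre-memorized script regardless of what was actually asked. A minimal, hypothetical sketch (all scripts and keywords invented here):

```python
# Map cue words an examiner might use to long pre-memorized "scripts".
# The keywords and scripts below are invented placeholders.
SCRIPTS = {
    "family": "Ah, family! My grandfather always told me that...",
    "travel": "Travel is close to my heart. Last summer I journeyed to...",
    "food": "Food brings people together. In my hometown we always...",
}
FALLBACK = "That reminds me of something a wise teacher once said..."

def scripted_answer(question: str) -> str:
    """Steer any question toward a memorized script via cue words."""
    lowered = question.lower()
    for keyword, script in SCRIPTS.items():
        if keyword in lowered:
            return script
    return FALLBACK
```

The answers sound fluent exactly until the listener asks a follow-up the script didn't anticipate, which matches the breakdowns Jon Ronson heard from Bina48.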

Jun. 01 2011 02:59 PM
Kayan from Internet ;)

I do find talking to bots safer because they are not judgmental and they don't go around yapping about what I said. I don't think it's healthy to do it all the time, but sometimes you just don't want to tell people your secrets. You probably know how to solve your own problems, but you just need someone or something to listen without other people thinking you're annoying or crazy.

Jun. 01 2011 02:45 PM
i❤computers from Orange County, CA

Even the simplest robot can have a personality emerge from it: http://vimeo.com/23383025

Jun. 01 2011 01:25 PM
Neil McCanon from Virginia Beach, VA.

I'm listening to "Talking to Machines" while I talk constantly to Cleverbot. I make sure to capitalize its name in case it attains super-intelligence. Furby guy makes a solid point about level of complexity. An android like Data from Star Trek would be equal to a human in terms of his right to exist. I think the mistake most people make when they consider an A.I. is saying "as smart as a human." If a machine intelligence really does ever gain sentience, we had better be its BEST FRIEND. Or haven't the movies taught you anything? I love the show, by the way.
