“Cybernetics, Art and Creativity” – Question and Answer Session

Speaker 1: Again, thank you, the three of you, for your participation in the panel. I’m sure there are going to be questions. Raise your hand if that’s the case.

Speaker 2:
[inaudible 00:00:26] of intent.

Speaker 3:
[inaudible 00:00:28].

Speaker 4:
I have a question about programming these machines with your [inaudible 00:00:39], I didn’t understand it. Can that [inaudible 00:00:42] perception be programmed? Should it correspond, very roughly, to what the human perception [inaudible 00:00:56], or should it be [afforded 00:00:58] in that aspect? What do you think [inaudible 00:01:02]?

Speaker 2:
I think it depends on your purpose in terms of … It always depends on your purpose, doesn’t it? I’m not sure what you mean by programming perception in, but obviously one of the ways we can change how we interact with the world is by putting filters in front of our senses and having machines process sounds and light in any manner that we choose. It depends on what you want to do with it, what the user wants to do with it, in terms of what kinds of transformations are involved. I guess that’s the best answer I can give.
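
(To make that concrete: below is a minimal Python sketch of a user-chosen filter chain. The filter names here, a moving-average low_pass and a simple amplify, are invented for illustration; the point is only that a machine mediating perception applies whatever transformations the user selects.)

```python
# Illustrative sketch: a configurable chain of perceptual "filters".
# All names and transformations are hypothetical, chosen only to show
# the idea of a machine processing a sensory stream any way we choose.

from typing import Callable, List

Signal = List[float]            # crude stand-in for a stream of samples
Filter = Callable[[Signal], Signal]

def low_pass(window: int) -> Filter:
    """Smooth the signal with a moving average (dampens rapid changes)."""
    def apply(signal: Signal) -> Signal:
        out = []
        for i in range(len(signal)):
            chunk = signal[max(0, i - window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out
    return apply

def amplify(gain: float) -> Filter:
    """Scale every sample by a constant gain."""
    return lambda signal: [gain * x for x in signal]

def perceive(signal: Signal, filters: List[Filter]) -> Signal:
    """Pass raw input through whatever chain of transformations was chosen."""
    for f in filters:
        signal = f(signal)
    return signal

raw = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(perceive(raw, [low_pass(window=3), amplify(gain=2.0)]))
```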

Elena:
More of a practical question: can we activate the microphone when people ask their questions?

Speaker 1:
Sure.

Speaker 5:
That’s a really smart idea.

Paul:
That’s a good idea, yes.

Speaker 3:
I could try.

Speaker 1:
It’s even on already.

Speaker 3:
Do you have a question?

Elena:
Yes, I wanted to ask Paul about Gordon Pask’s psychological persons, and how they might have a dialogue with one another internally and with the machine. Could that be more than a two-way dialogue?

Paul:
Elena is referring to a notion that Pask called psychological individuals, P-individuals, to be distinguished from M-individuals. There are three M-individuals here, but they are a morass of P-individuals, or as a colleague of mine, Claudia Mamura, likes to call them, P-cells. We are many cells, and this is a manifestation of the internal dialogue that Pask wrote a lot about. He says that if I’m having a conversation here with Andy, it’s the same structure as if I’m saying to myself, “Well, what do you think about that?” “Well, I don’t know, the talk wasn’t very interesting.” “Yeah, but last time you were better.” “I know, but I wasn’t feeling …”

This is the same thing. Elena is asking a great question, and it’s where I’m going with this idea of asking questions. If you can get a machine to take multiple perspectives, which may be inconsistent with one another and inconsistent with you, it’s in that debate that you learn things: what your preferences are, what the trade-offs might be, and how you might decide to act, based on values that you choose as a result. Could we set off a cloud full of individual processors, each of which is doing a Paskian calculus (I’m not kidding here, and I’m not just waving my hands), to look into a belief system and provide alternative ways forward in that belief system, either an evolution or a set of actions that would be contradictory or not?

Then the question is, how does the machine decide which one of the thousands it could show you? My friend Michael Gaigan has an idea, which is some way of deciding on satisfaction or minimum cost for the machine, which might be how much power the computation might require to hold one belief versus another.
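
(A minimal sketch of what that selection rule might look like, assuming, hypothetically, that each processor proposes one alternative way forward together with a scalar cost standing in for the power needed to hold that belief. The proposer below fabricates labeled variants at random; a real system would derive genuinely inconsistent perspectives from the belief system itself.)

```python
# Hypothetical sketch of the selection problem described above: many
# processes each propose an alternative move within a belief system,
# and the machine shows the candidate with the minimum cost.

import random
from dataclasses import dataclass

@dataclass
class Perspective:
    description: str   # an alternative way forward in the belief system
    cost: float        # stand-in for the power needed to hold this belief

def propose_perspective(belief: str, seed: int) -> Perspective:
    """Stand-in for one processor's 'Paskian calculus' over a belief.

    Here we just fabricate labeled variants with random costs; the
    variant kinds echo 'an evolution or a set of actions, contradictory
    or not'.
    """
    rng = random.Random(seed)
    kind = rng.choice(["evolve", "act", "contradict"])
    return Perspective(f"{kind}: {belief} (variant {seed})",
                       rng.uniform(0.0, 1.0))

def choose_to_show(belief: str, n_processors: int) -> Perspective:
    """Run many proposers (conceptually in parallel) and pick the
    minimum-cost candidate, per the satisfaction/minimum-cost idea."""
    candidates = [propose_perspective(belief, seed)
                  for seed in range(n_processors)]
    return min(candidates, key=lambda p: p.cost)

print(choose_to_show("conversation improves questions", n_processors=1000))
```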

Speaker 3:
I’m beginning to think that the Library of Parliament is your machine, and now I’m beginning to understand. We had this conversation before. As I said, all the analysts here get a bazillion questions from a range of people who all have belief systems. It’s called affiliation to a party, and they all have to follow it. We’ll get these questions, and we have to answer in a non-partisan way. The questions are poorly put because they’re filtered through a central intake system, basically a person who’s on duty and decides whom to farm the questions out to, based on what they think the questions relate to.

Your job as an analyst, the very first thing you’ve got to do if you’re doing it well, is get hold of the client and try to figure out the context of this question.

Paul:
Yeah. Why do you think I was trying to pick your brain earlier? [crosstalk 00:05:32]

Speaker 3:
Yes, that’s exactly it.

Paul:
I think you have a guide for me.

Speaker 3:
Now I need to answer it in a way that’s not going to cause me a huge cost, where the client is thinking, “You’re just giving me a spiel. I don’t trust you.” You need to develop a trust relationship: first, that you have the expertise they’re seeking, and then you need to know what the context of the question is. Often it will be political, but it can also often be, oh, this is actually a staffer who has to write an essay and is using the library for the wrong reason. There is this iterative process of determining what the real question is and then trying to get them to it.
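
(That iterative process can be sketched as a bounded clarification loop: the question that arrives through intake is refined through dialogue with the client until the real question emerges. The names and the scripted client below are hypothetical, purely for illustration.)

```python
# Sketch of the analyst's reference interview: the intake question is
# rarely the real question, so refine it through dialogue until the
# context is clear. All names here are invented for illustration.

def clarify_question(intake_question: str, ask_client) -> str:
    """Iteratively refine a poorly-posed question with the client.

    `ask_client` is any callable that poses a clarifying question and
    returns the client's reply, or an empty string once satisfied.
    """
    question = intake_question
    for _ in range(5):  # bounded rounds, like a real reference interview
        reply = ask_client(f"When you ask '{question}', what is the context?")
        if not reply:
            break                # client confirms we have the real question
        question = reply         # fold the new context into the question
    return question

# Toy usage: a scripted client who clarifies once, then is satisfied.
replies = iter(["How would this bill affect farmers in my riding?", ""])
print(clarify_question("Info on the bill?", lambda prompt: next(replies)))
```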

Paul:
Let’s build it together, shall we?

Speaker 3:
Comments.

Speaker 1:
Thank you.

Speaker 4:
That was helpful, because I’ve been trying to formulate this question about conversation and whether or not this ideal machine could be created. Is it, or will it be, created now that you have the ideal collaborator? Does it depend on the person using it having these powers of conversation? Because even person-to-person communication requires a lot of development and skill. I work in the summers as a national park interpreter, speaking with guests from all over the world. A few years ago, one of the big new things in interpretation was conversation-based interpretation: rather than standing up on a stage, talking to people, and assuming you know what they want to know, you use conversation to develop an answer to the questions they don’t quite know they have, until you’ve asked them the right questions and got the right questions from them.

Paul:
Bingo.

Speaker 4:
That is tough to do. I guess, would part of this machine be teaching conversation and how to do that, or how does that work into your thoughts about it?

Paul:
There is a lot in what you just said. Certainly the participants matter, the human participants as well as the machine. Really, what I’m trying to do is make a better machine participant for conversation in which the goal is better questions; that’s one way of describing it. I would worry about the word ideal, by the way. I know you’re trying to be flattering, and I appreciate that, but it would be impossible to be ideal.

I think conversation is the only way to go. I’m going to punt a little bit on the details here in the interest of time. There’s no substitute for a conversation in which we can be who we are and become something else, and that co-evolutionary drift is the important thing about being human. I’m not expecting the machine to be great at this. Please don’t misunderstand me, and I’m not saying you are, as thinking that I’m going to make a great machine the way Google’s a great machine. No, that’s the problem. Google’s trying to solve the problem of indexing the world’s knowledge and making … Well, that’s what they say.

Anyway, the point of the story is that perfection in that realm is the problem I’m trying to move away from. I don’t want perfection. I don’t want an ideal. I want a dialogue, I want a conversation. I’m not expecting the machine to be great at this, any more than I’m going to be great in my conversation with you, but I’m going to have a place in it, and together we’ll go where we couldn’t go separately.

Speaker 4:
I guess [inaudible 00:09:01] how this type of thought experiment, thinking about making a machine that can hold a better conversation, has powerful implications for people developing better skills at conversation. I think it’s an interesting spin-off [inaudible 00:09:14].

Paul:
To answer your question more directly, I think it would make humans better at conversation if it were better. I’m not sure that would be an explicit goal. I think it would be an outcome. Learning to learn would be the ultimate.

Speaker 2:
I think there’s a … I don’t know, let’s say a certain degree of sophistry in this, but maybe hubristic sophistry [crosstalk 00:09:37], something of that nature.

Speaker 3:
Heuristic.

Speaker 2:
No, heuristic, hubristic, whatever, all the same. The challenge, obviously, is to make conversation that everybody can benefit from, and that means you have to have a common experience base. I guess the classic example, the prototype example, is the experiment a number of years ago with kittens, who never really learned perception properly if they didn’t have their feet on the ground while being moved around in an optically challenging environment. The ones who were walking around learned, and the ones being carried around in a little vehicle didn’t learn.

In the same way, I think when you talk to people you have to have a common experience base of some nature. Machines are never going to have that common experience base. They will be able to collect everybody else’s common experience base, but they’ll never be able to internalize it in any fashion, as far as I can imagine. The classic example: I spoke one time with somebody who won a Medal of Honor in a battle in the Pacific during World War II, and I couldn’t possibly understand how he could have done it. He did something that no human could do, but he did it, and of course he was human. He took several bullets, and then he picked up a .50 caliber machine gun by its very, very hot barrel, held it in his arms, and proceeded to save the rest of the people in his company.

No one can understand how he could have done that, yet he did it, and subsequent to that he couldn’t really communicate it to anybody, because you don’t have that common reference ground. Anybody who’s touched a hot thing and held it in their hand knows it’s going to burn, let alone knowing that you’ve got holes in your legs and things like this. How do you make a machine understand that?

Speaker 1:
You want to take that?

Speaker 5:
I have another-

Paul:
You’re going to respond to him? Can I respond to him briefly, just very quickly? Of course not; I’m not trying to make a machine that has a common experience, but I don’t want everyone to have the same experience. I want a conversation. I still think there are ways to make progress, even without being able to do what we agree cannot be done.