
So, can we start with you sharing your name and your affiliation?

My name is Manuela Veloso. And I am currently on leave from Carnegie Mellon as the Herbert
A. Simon University Professor of the School of Computer Science. And I recently joined JP
Morgan as Head of AI Research.

Excellent. So, Manuela can you tell me a little bit about how your work in AI began?

So, it’s actually an interesting story. I’m an electrical engineer by training, very math and engineering oriented, but I got to do a Master’s thesis in Lisbon, in Portugal, on the automation of assembly factories. We had to put into the computer all sorts of information about orders, and then a list of parts was generated automatically. That Master’s thesis, with these beginnings of computing and databases, sparked my interest: my God, we can automate anything. And, therefore, I started thinking these computers can be so powerful. So, it was not that I cared about science fiction, nothing. It was this moment in which I realized how computers could actually help humans. Another thing I realized in those days was that a lot of the things that computers were doing and humans were doing were very similar, and so I also developed this interest in analogies, in doing intelligent planning by analogy. That, one way or another, led me to a PhD in Computer Science at Carnegie Mellon University, advised by Jaime Carbonell, with whom I worked on derivational analogy. And so it was that path that led me to Carnegie Mellon, where I did my PhD back in the early nineties.

Excellent. So, can you speak a little bit about your present work?

Since I started this PhD in AI, I developed an understanding that AI is a complex discipline. And, through the inspiration of Allen Newell and Herb Simon and Raj Reddy and Takeo Kanade and my young advisor and Tom Mitchell, I became completely fascinated by the integration of perception, cognition and action as a demonstration of what intelligence could be. And so, from very early on, for all these years, I have been trying to develop autonomous mobile robots. Autonomous executors of plans. Autonomous perceivers. Autonomous creatures that can merge this capability of processing data in the form of sensed data, of thinking about their objectives, and eventually actually executing them. So, my work has always been in these autonomous mobile robots, and very early on I was also exposed to robot soccer, through one of my students, Peter Stone. The problem of doing AI as teams in adversarial environments led me to introduce this concept of robot soccer. And Carnegie Mellon was very receptive to this new area, while robotics was traditionally much more about large robots, or autonomous robots on the road. Now came robot soccer, with these little robots playing in a team against other teams, other robots, other universities. It has been an amazing journey. Then, much later, I became very interested in the problem of these autonomous robots in human environments. So not the aspect of going up to Mars or going down volcanoes, but: what if robots are not in the playing field of robot soccer, which is a limited kind of field with a very specific task, but in our environments? We had at Carnegie Mellon a long tradition, through Reid Simmons, and also the museum tour guides with Illah Nourbakhsh, so this concept of having robots in human environments was not new. The only thing I envisioned was that they would move in the environments without any humans following them, with complete autonomy. And it took a very big step to really not have students follow the robots through the nine floors of a building. Then one day I realized that AI, and these autonomous robots in particular, would never be perfect. So I introduced this concept of symbiotic autonomy, in which these AI systems, in particular our CoBot robots, would ask for help: ask for help from people, ask for help from other robots, and ask for help by going to the Web to understand things. It was a fascinating adventure to have these robots move around. We let them go, and we just know that if by any chance they encounter a closed door, an elevator they have to call, a blocked hallway, they can really tell a human: press the elevator button, open this door, put something in my basket, get out of the way so I can go, thank you very much. So it became about empowering the limited AI agent, the AI mobile robot, with its surroundings, which included humans. And in particular humans. That actually led me to understand something. I think I am well known now for my optimism with respect to AI in the future, AI in society, because I do believe that the challenge is being smart about understanding this human-AI interaction. It’s not really about AI taking over. It’s not really about humans not having any help. It’s really about AI and humans coexisting. And that has been what I pursue even as of today, and as we do this human-AI interaction, my focus is always on the AI side.
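The symbiotic-autonomy idea, an agent that plans around its own limitations by asking humans for help, can be sketched very roughly as follows. All names here are illustrative; this is not the actual CoBot software.

```python
# Toy sketch of symbiotic autonomy: a robot executes a plan and, when it
# reaches an action outside its own skills, asks a human for help.
# Purely illustrative; not the actual CoBot code.

ROBOT_SKILLS = {"drive_to", "announce"}  # actions the robot can do alone

def execute(plan):
    """Run each (action, target) step; delegate steps the robot cannot do."""
    log = []
    for action, target in plan:
        if action in ROBOT_SKILLS:
            log.append(f"robot: {action} {target}")
        else:
            # Limitation detected: request help from a nearby human
            log.append(f"robot: please help me {action} {target}")
    return log

plan = [("drive_to", "elevator"), ("press_button", "floor 7"),
        ("drive_to", "lab"), ("open", "door")]
for line in execute(plan):
    print(line)
```

The point of the sketch is only the control structure: the robot does not fail when it hits a closed door or an elevator button; the request for help is itself a planned action.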

Sure.

To make sure that the AI side embraces the opportunity of having humans around, and also serves humans. And areas like being more transparent became part of the question: how do you include values, transparency, explanations in the algorithmic nature of what runs on these AI systems? So it’s not just the mobile robots; it’s AI systems in general. And I can tell you one thing I feel very strongly about. We in computer science, in AI, have been using algorithms to solve a problem. What’s the shortest path? And we generate the shortest path. What’s the next value? And we make a prediction. Through machine learning, through anything, we have been problem solvers, but our code doesn’t explain why this is the solution. We don’t write algorithms to say: yes, this is the best move in chess, because I did this alpha-beta search and I looked at this many alternatives. We have never been in the "because" business. We have been in the problem-solving business and the solution-generation business, which has been a phenomenal kind of challenge, but this human-AI interaction now puts us, from an AI point of view, from an algorithm point of view, in a "hold on a second" situation. Someone might ask: why are you telling me I cannot have a loan? Why are you telling me I have to take this medicine? Why are you telling me I have to pay this insurance? Why are you telling me to stay in this hotel? Any decision that AI systems can now help humans with is a potential for a question from a human: why, or what if, or why not? And our algorithms, and this is where my human-AI interaction is always in some sense focused on the AI side, our algorithms are not ready, and we are trying to extract, from our cryptic algorithmic levels upon levels of backprop on some kind of neural net, really why this thing learned what it learned.

Yeah.

Or even some search algorithm: why did you find this? Was it finding a needle in a haystack, or were all the solutions about the same and you are choosing this one as a random choice between equally good solutions? So, there is so much that the computer, the machine, the algorithm, the AI system knows which is not pulled out by our current programming technology. This is a very deep area of interest for me, and I think it will make AI amazing if we change this paradigm to be more open, because the information is there in the algorithm; it’s not like it’s not there. More open to sharing what’s going on inside of these algorithms.
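
The point that an algorithm knows more than it reports can be made concrete with a toy solver that returns not just its answer but a minimal "because": how many alternatives it weighed and by what margin it won. This is a hypothetical sketch, not code from any particular system.

```python
# Toy illustration: a solver that returns the best option plus a small
# explanation of why it was picked. Purely illustrative.

def argmin_with_explanation(options, cost):
    """Pick the lowest-cost option and report the evidence behind the choice."""
    scored = sorted((cost(o), o) for o in options)
    best_cost, best = scored[0]
    runner_up_cost = scored[1][0] if len(scored) > 1 else None
    explanation = {
        "chosen": best,
        "alternatives_considered": len(options),
        "margin_over_runner_up": (None if runner_up_cost is None
                                  else runner_up_cost - best_cost),
    }
    return best, explanation

best, why = argmin_with_explanation(["A", "B", "C"],
                                    {"A": 3, "B": 1, "C": 5}.get)
print(best, why)
```

A margin near zero would signal exactly the "equally good solutions, random choice" case described above; the solver already computes this information, it is simply never surfaced by conventional interfaces.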

I’m curious about the translation of the work you were doing in robot soccer to this. One of the things I’ve always found striking, being familiar with students participating as undergraduate researchers in your lab, is the way they would discuss the parameters of the game. Soccer has very specific rules, analogous to other games like chess, but there is also an element of decision-making that, when people play, is often ascribed to creativity, because you don’t have a coach calling plays in a human soccer game the way you would in American football or in elements of basketball. Your current work opens up further transparency in the decision-making processes within automated systems, and it sounds to me like there was a movement from the seemingly static, rule-based work in the soccer arena, a relatively closed system but still open to various decisions (do I chip it, do I not chip it?) and to collaboration between the robots in a team environment. I’m wondering if you can speak a little bit about how some of those findings led you to this current work on the transparency of decision-making.

A lot of the challenge of such autonomous systems is the uncertainty about what the adversary is going to do.

Yes.

And it’s something we have not been, how do you say, necessarily successful at modeling, because these robots all do whatever; it’s not humans playing, who might have a strategy, might have a coach, might have past events you can learn from. So there is a lot of uncertainty, and we have always played these games, or these types of strategies, in a very beautiful way. We introduced a playbook, which was very, very large, and basically these robots stochastically choose between different plays as a function of the outcomes of past choices. So there is that kind of adaptation, stochastic adaptation. The thing that I want to say about robot soccer, which I find very fascinating now, is this: we spent a lot of time thinking that, after all, robot soccer is only a game and the real world is different, and AlphaGo is only a game, or Go is only a game. Always this kind of separation: oh, the real world is much more complicated; these are all games with some rules and so forth. However, I think that the future of AI will one day include humans practicing with AI systems. Everybody plays with chess programs now. One day people will actually play with teams of soccer robots to practice. Or, for example, I love playing squash. Unless someone is ready to play at the same time as I am, I’m not going to have a chance to play squash, unless I just want to practice my rails, but that’s not very exciting. If there were robots available to play squash with you, this whole paradigm would change. Or tennis, or any other sport. If I were starting my career now, maybe I would have developed these sports-playing robots. Because even when you go to practice now, there are just these tennis balls going maybe left, maybe right, but without an intelligence; what about playing with an intelligent tennis-player robot? So one day maybe some industry will find that this is really an interesting domain for AI, for robots. Not just the thinking kind of AI systems, like chess or eventually Go or checkers or backgammon, but the physical aspect. And I think it would be very nice one day to have robots that play soccer, not necessarily to beat the human player, but to practice with them. The same, but with intelligence: personalization, adaptation, not just machines that challenge you without thinking. So I do think that’s an area. We humans play sports, and we also work, and we also cook, and we also take vacations. I think humans know very well the difference between how they think as they play sports and what they do in their work. I think that will exist forever. And this human-AI interaction I’m talking about in some sense covers more of the life of people, the decisions they make, not only the sports they play, which is a separate kind of area. I want to emphasize one more thing with respect to AI which I think is important to understand. Unfortunately, people sometimes think AI is machine learning, and machine learning and AI seem to be used interchangeably. But that is something it would be good to understand. From my point of view, machine learning is one of the components of AI. AI is a science of putting together components: the vision, the planning, the searching, the machine learning, the interaction with humans, and trying to close the whole loop so that an AI system can perform intelligently. That’s one thing I feel very strongly about. The other thing is that AI is not a one-shot business. AI is a science in which, you know, our CoBot robots learn by interacting with people, or by going to the Web, and they know much more now than they used to know two or three years ago. Unfortunately, in our technology, and I say this many times in my talks, we are used to buying technology that simply works. We buy a refrigerator and it works. We buy a car and it works. We buy a vacuum cleaner and it works. Everything works. It’s an unfortunate situation from an AI point of view, because we don’t see anything getting better by interacting with us. The refrigerator doesn’t get better; things will always freeze. There is a little control we can manually change, but it is not such that I can see the thing grow as I put more vegetables here, more eggs there. Nothing gets better in the refrigerating system. A car, the same thing: maybe the seat adjusts to your height, but that’s it; nothing gets better. AI is going to be a technology that people will be able to use, and they have to get used to the fact that maybe the AI system today, based on the current data, makes the wrong decision, gives the wrong advice. But if we have a real AI system, we can give feedback, and maybe tomorrow it will do better. We can give instruction. We can interact with that thing in a very incremental way. And expecting that an AI system that makes decisions today, about loans, no loans, whatever decisions, makes the right decision on day one is a mistake. It’s just a mistake.
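The outcome-weighted play selection described above can be sketched as a simple weighted random choice whose weights adapt to success and failure. This is an illustrative toy, not the actual robot-soccer playbook algorithms, and all names are invented.

```python
import random

# Toy sketch of stochastic play selection from a playbook, with weights
# adapted by past outcomes. Illustrative only; not the CMU playbook code.

def choose_play(playbook, rng=random):
    """Pick a play with probability proportional to its success weight."""
    plays = list(playbook)
    weights = [playbook[p] for p in plays]
    return rng.choices(plays, weights=weights, k=1)[0]

def update(playbook, play, succeeded, step=1.0, floor=0.1):
    """Raise the weight of a play that worked; decay one that failed."""
    playbook[play] = max(floor, playbook[play] + (step if succeeded else -step))

playbook = {"attack_left": 1.0, "attack_right": 1.0, "long_pass": 1.0}
update(playbook, "attack_left", succeeded=True)
update(playbook, "long_pass", succeeded=False)
print(playbook)  # attack_left is now the most likely choice
```

Keeping every weight above a small floor preserves the stochastic element: even a play that has failed is occasionally retried, which is what lets the team keep adapting to an unpredictable adversary.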

The subtlety of that notion of an iterative system.

Yes.

That is likely.

Yes.

More likely, rather than less likely, to fail initially.

Yes.

But has the capacity.

Yes.

To build out its capacities

Yes.

Further, oftentimes in exponential fashion, is a very different relationship for humans to tools.

That’s right.

Or humans to technological iterations that we’ve had.

Exactly.

In past history.

We have never experienced a human technological discovery, whatever it has been in the past, in which, when we actually interact with it, there is a process of your own input, your own interaction with the thing, so that you have a say in where the technology goes from interacting with you. So, I call this aspect, which I find very important, learning from experience. And it’s probably not just a reinforcement learning system, but something that you can instruct, that you can give feedback to, that you can make forget things, that you can bias toward some side. You might not want it to just use the data: I want to give you a principle, I want to add something. We are in the infancy of understanding how our algorithms are going to be robust enough, and verified enough, to be in this interaction. So, in summary, the challenge here is this human-AI interaction, and bringing into society these artificial creatures, these AI systems, which are not dogs, not cats, not people; they are these AI things, however, they are intelligent. And they are part of what we actually want to use to improve humanity. And just one final thought. People ask: do we really need AI? Should we stop AI, should we grow AI? Two things. First of all, we cannot control AI like that, because AI is available to anyone that has a computer, one way or another, which is pervasive. It’s not a special device; it’s computing. This is a natural path of computing going forward. So it’s not really easy to just impose rules; what we should invest in is educating people to use these tools well. It’s not something that is very easy to just control. But the other aspect of this AI revolution, in some sense, is that it is incremental. It’s not that one day we’re going to wake up and AI is there. No. Every day there is something else. Every day we type an email and there is completion. Every day we go to some website and there’s one more way to be smart about the way you make some reservation. It’s fascinating to witness, at the end of 2018, how many more apps we have on our phones, how much more data. And the third thing is this: think about humanity. What did we become experts at? We became experts at producing data. Everything we do now is digitized. Our Fitbits. Our credit cards. Our conversations. Our pictures. What is not available in digital form? As soon as we make all of this available in digital form, there are two things we should realize. First of all, there is no way humans are going to be able to process all that information themselves. What, are we going to put humans looking at everything that we create? We know ourselves that when you take three thousand pictures on your iPhone, you struggle to find the one you want.

Yeah.

And you basically thank goodness they are linearly ordered in time, and you note this was 2016, and you go down and you reduce your search. But imagine images of some city, or images of some scene, or images of some hurricane. There are humongous amounts of data. So, AI exists because we actually need AI: we are all data producers, our devices produce data, and we cannot process that data without the development of these AI systems, which can potentially assist humans and tremendously benefit the support for human decisions, the support for human interactions, the support for human beings to actually make advances in our societies.

Yeah, so I’m really struck by this recognition of the incremental development of AI.

Yes.

And I’m wondering what you see as your role as a communicator to a broad public but also as a
faculty member who has been working with undergraduate students.

Yeah.

As well as PhD students.

So, it’s a very good question. I have to tell you that I’m extremely happy, and we all are at Carnegie Mellon, with this new undergrad degree in AI. We acknowledge that of course this is part of the computer science discipline, but it has an identity of its own. Because now it’s not just about having computers do beautiful programming, having computers have excellent algorithms, great hardware, great operating systems. It’s this concept of how we make all these things intelligent. And this AI degree requires students to learn not only the algorithms but also the ethics of having these systems be part of society. We are starting to create generations that we want to understand the power of computers to be intelligent, not only the power of computers to process data very fast. And I actually think that one of the most exciting things happening at Carnegie Mellon has to do with this AI major, and with us, the faculty in AI, realizing: my God, this is our chance to educate these generations, all the way from the technical, mathematical, statistical aspects of computing to the societal aspects. So, our major includes ethics as a required course, which we may one day revise and refine into ethics in AI, as well as ethics in computing, ethics across multiple cultures. We are going to grow that area, because that’s what AI needs to face: it has to go out into the world. At the same time, something else very good about the AI major is that it’s very much project-based. AI is not something so general; you need to think about projects. You need to define problems and try to solve them in an efficient, intelligent, novel way. So we have project-based courses, we have capstones if needed, we have ethics, we have all the math, all the science, all the engineering, all the statistics, all the tools needed, but we have a different view of what the students are going to come out with. At the Masters and PhD level we already had a lot of AI research, but at the undergraduate level, to embrace AI as really a discipline, a new area, is really fascinating, and I think one of the greatest contributions of the faculty at Carnegie Mellon is to establish AI as an education area within our societies.

Yeah, so in regards to the project-based work in the AI undergraduate major, but also your more recent work, you really described the manner in which these developing autonomous systems are in a symbiotic relationship.

Yes.

With the human context. What do you think is important in terms of the next generation of technologists? Or how do you lead as a model for the next generation of technologists in communicating to a public about these systems that are developing or operating in a human context, whether that’s an undergraduate project or a project that you’re leading?

Unfortunately, or fortunately, what we don’t have yet is many robots around. We don’t have physical robots. I think people got used to their cellphones. They got used to their Google search. They got used to their email. They got used to all these digital tools, and they are happy to see them becoming smarter, from Alexas all the way to completions of emails. But you see, there are no robots moving around anywhere.

Yeah, very little embodiment.

There are, there are. At Carnegie Mellon there are the CoBots, somewhere. But I came all the way from the airport, I’m in New York, and you don’t see any robot anywhere. You are expecting to see the robots, but where are they? So, one day there will be some discussions about this aspect. I foresee that they need to be around; it’s not only the cyber world that becomes intelligent. I think our handling of the physical space, be it for disaster rescue, be it for construction, be it for all sorts of other needs that we have in the physical space, will come one day. I just think that people are not as ready, not as prepared, but I am, in the sense that I’ve been trying to let CoBot move around alone, so that people see the thing moving. And I tell you, every time I give a talk to the general public, I usually give them some homework: everybody, buy a Roomba. I always say this because the CoBots are not available for sale; the Roombas are. And it’s not because I have a special interest, but I love the Roombas. The reason is, I also love Alexas, but the Roombas move by themselves. I think it’s fundamental for people to start seeing something that moves by itself, and you know what happens: it turns, it disappears from your sight. And it’s gone.

So, you see that it’s.

It’s gone.

Capability, but also its limitations.

But no, you see.

In your home.

That it’s on its own. You see, through this, an intelligence that you cannot just carry in your pocket like a cellphone. It just goes. And that’s the experience, for example, with CoBot. I really tell you, it was a major feeling when that thing went all the way down my corridor, on the way to my lab, and disappeared. Turned right. And I felt like following it, going all the way and saying, where is it? Where is it? And then I could still hear the noise of the motors, "brrrr," and then nothing. No seeing, no hearing. And I immediately go and grab the phone: Joydeep (a graduate student), did the robot arrive at the lab? And we were panicking: where is this thing that moves by itself, on its own, without anybody behind it? Later you could go to the web and check where it was, but in those days we didn’t have it connected to the web yet, so we literally did not know where the thing was. But the fact is it went, and interestingly, you needed to adapt to this experience of seeing things move by themselves. And I really think that AI will get to that point. Inevitably, people will have to acknowledge, whether it’s autonomous cars, whether it’s autonomous robots in our human environments, whether it’s drones flying in the sky, that they are moving by themselves, making decisions, even if humans gave them their destinations. It’s still that motion, the physical space being navigated by autonomous robots. I have a big dream that this will happen one day: that you enter an airport, or a shopping center, and there are these things that are not people, not dogs, not cats, but mobile robots. And I tell you another thing, and I’m sure many people in robotics feel this way: there is not a single day that I enter a Trader Joe’s or Whole Foods or any place and I don’t think, why don’t these shopping carts move by themselves? Why do I have to push these things? They have wheels. I have a cellphone that could connect with them. I could log in, and that thing would follow me wherever I go in the store. Why do I have to push that thing? I don’t know. Why do I have to push my suitcase? I don’t know. Everything that has wheels could in some sense be traversing the space by itself. There is no reason for me not to send my suitcase ahead: go to the gate, I’ll be joining you in a second. So, I am at that level because of all my mobile robot work; I care about the web, I care about all the apps, but I cannot wait for something at that level to eventually, one day, also be part of our AI conversations.
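The follow-me cart idea can be illustrated with a toy control loop: the cart repeatedly senses the phone’s position and takes a step toward it, stopping inside a comfort radius. This is a purely hypothetical sketch, not any real product’s logic.

```python
import math

# Toy follow-the-beacon loop: a cart steps toward the phone's position
# until it is within a comfort radius. Purely hypothetical sketch.

def step_toward(cart, phone, speed=1.0, comfort_radius=2.0):
    """Return the cart's next (x, y) position, moving toward the phone."""
    dx, dy = phone[0] - cart[0], phone[1] - cart[1]
    dist = math.hypot(dx, dy)
    if dist <= comfort_radius:
        return cart  # close enough; don't crowd the shopper
    step = min(speed, dist - comfort_radius)  # never overshoot the radius
    return (cart[0] + dx * step / dist, cart[1] + dy * step / dist)

cart, phone = (0.0, 0.0), (10.0, 0.0)
for _ in range(20):
    cart = step_toward(cart, phone)
print(cart)  # settles about two units away from the phone
```

A real cart would of course need localization, obstacle avoidance, and people-aware planning; the sketch only shows that the basic "follow me" behavior is a very small control loop.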

I find it really interesting that rather than being influenced by science fiction, which many
technologists have described.

Yeah.

Its influence. That you often pointed instead to your work in engineering and this Master’s thesis, and also this idea of the cart that follows you, or the suitcase that leads you toward your gate at the airport. By and large we don’t use porters to move our luggage anymore; we have a self-driven economy. But I’m wondering if you can share a little of your observations on the ways in which you see AI systems having changed the way that people work, up until the present moment?

We have been questioned about AI taking jobs for, you know...

Sure.

For several months.

Yeah.

Or several years.

Yeah.

And I finally came to a resolution in my head about what’s going to happen, especially with my experience at JP Morgan and understanding the real world. You know what, I think it’s going to be humans who require these tasks to be done by AI. It’s not AI; it’s going to be humans who say, I don’t want to be writing all these numbers in this Excel sheet by myself. I don’t want to be answering phones to tell people their balance. Why not have computers do it for me? It’s going to be the same as how you would feel if I told you: rent this place, there are no washers and dryers in the whole building, you have to wash everything by hand. You are going to say, come on, give me a break. Why am I supposed to do something that I know machines can do? Why am I supposed to go to a place that doesn’t have any dishwashers? Or why am I supposed to go to a place in which, to connect phone calls, I have to sit at a switchboard? You start realizing, and humans are amazing at realizing, the things that AI is going to be able to do for you. It’s like when I have to do a bibliography search on some topic, and I resent the fact that there is nothing to which I can say, tell me everything that has been said about this particular topic, and have an AI assistant that goes and mines all this data, versus me going and trying to find all the papers myself, and then I’ll read some, for sure. So, look at the doctor, look at the lawyer, look at the professional. When are we not going to want an AI system that looks at all the cases of a disease in the world, compares them with this problem, and advises me on what to do? So, that’s the point. It will not be AI taking over jobs. It’s people requesting that AI do the tasks that they find can actually be done by machines.

So, if I understand correctly, you really see it as this symbiotic relationship...

Yes.

That empowers people.

Empowers people.

To identify the tools perhaps a...

Yes.

Retrieval system.

Yeah, and they might dream of things that we as technologists cannot do yet. They might be dreaming, I wish this thing existed, and it might not be possible. But they might also dream of things that we actually can do, and that we will advance, because of the request of humankind to close this loop, to be able to process this tremendous amount of data that they know exists but that they also know humans cannot process. And then let me just add one thing, because people keep mentioning: what about jobs like cleaning bathrooms, making the...

Service jobs.

Service jobs. You know what, why don’t we invent a self-cleaning bathroom? Why do we have to keep those jobs alive? Maybe people are going to say: you technologists, I don’t want to do these jobs anymore.

Do you anticipate, by virtue of that need being met...

Yes.

Let’s say the self-cleaning bathroom.

Yes.

Do you imagine that that will lead to emergence of...

Other jobs.

Other industries.

Other industries.

Like the repair person for the self-cleaning bathrooms.

Yes, other industries. Or the designer or the constructor.

Okay.

Or, you know, just think about the transition from the telephone operators to the electronic switch. All those people became something else. So what I’m trying to say is this: what about humanity looking at AI as the chance it has to be freed from the tasks people don’t want to do? Because it’s not that everybody has a good job, in the sense of loving what they do. There are jobs that are really hard, and probably AI can even help with those. AI, robotics, who knows. And maybe we will converge, as a society, on an economy of talents, in which people in fact shine at the things that they want to do. And they might even jump around and do different things and reinvent themselves, because we also get tired of doing the same thing over and over. Even if we love what we do. You know, when I was teaching here, I wouldn’t have minded a robot on a Tuesday morning at ten just going to teach instead of me: I don’t feel good today, I didn’t sleep last night, I was too busy; do it. Fantastic, I can teach all the other times. Or, for example, gardening, or shoveling snow. It’s fine to do it once, twice; done every day, it becomes a little less appealing. So we may want to have this concept of, oh, I wish there were some robot that would help here. And it’s not about taking away what we like to do for fun; we still can do it. It’s like Herb Simon used to say when Kasparov lost to Deep Blue: it’s not that all the people stopped playing chess. People still like to do it. So, I don’t know. Someone told me one day, Manuela, you’re very optimistic, which is true. There is this concept of being a realistic optimist, and maybe I fail a little bit there. But my realism is that, in some sense, I believe there is a transition. The telephone operators were jobless one day, and AI can put people out of jobs, but I see it more as a transition, and I think that somehow governments and companies have to be financially responsible for re-educating these people. In the same way that we in AI are actually providing tools, with online learning, with all sorts of social networks, with all the abilities to search online for jobs; these are all tools created by the technology itself, too. So, I’m realistic, but I think that we are on the right path. That’s why I’m optimistic: because we are trying to help society along many dimensions.

Yeah. Within the context of the symbiotic relationships that you shared and also your own work
in developing these semi-autonomous or autonomous systems, in this boundary time, where the
way that you, you have configured these relationships to the systems is one that seems to
empower people and really empower human decision-making. But I’m wondering if there are
examples where the autonomous system may compromise a human’s ability to make a
decision?

It’s a good question. I’m not as sure about these possible negative interactions with humans, but
I’ll tell you one thing though. How many of us now look at the sky to know what the weather is
like? We have weather.com, and we became weather.com users. And we became, I mean, you
know, I’m from Portugal.

Yes.

I remember my summers as a child with these fishermen, and we would actually go to
the beach to ask, what’s the weather going to be like tomorrow?

Yes.

And they would look at the horizon, at how pink it was, at whatever the fog was, and they
were excellent predictors. That skill is gone. It’s gone. Nobody asked them if it was going to be
gone, but it’s gone. I have a little bit of it because I still learned it. But my kids? They don’t know.
They never interacted with those fishermen, who really predicted the weather beautifully. So,
you see a lot of these things where, unfortunately or fortunately or however it goes, our
technology has removed skills from people. For example, we do not do square
roots by hand anymore either. I can still do it; I’m even trying to teach my grandson, who is
very young. But this thing, square roots by hand, my father knowing how to use all sorts of
slide rules, it’s normal. We just have to accept that this technology also did not fall
from the sky. It was invented by us. We keep inventing technology; our children, our
neighbors, all our people are creators of new technology. And when new technology
comes, it will probably overwrite skills. I mean, that’s the way it has been throughout
history. And we don’t necessarily eat with our hands anymore. Whoa, you could
say, how come someone invented the fork and the knife? That’s not fair; we were
supposed to use our hands. Well, guess what, the moment the fork and the knife are there, done
with the hands. Right? So, think also about the fourteen hundreds, when Gutenberg
invented the printing press. Reading became a skill taught in the schools. There was not
widespread reading before Gutenberg. Gutenberg invented this kind of.

Books to enable more people to read.

Yeah. And then reading and writing became something that was actually a product of
Gutenberg’s technology, not just for the monks, not just for the scientists. It’s beautiful. And again, I
think, as we are living these days with computing, AI, all sorts of technology and data being so
pervasive, in our schools too it’s a new skill that people have to gain. What is
all this data about? What is all this computing about? We did not all become
Shakespeare. I mean, even if we all learn to read and write, some become Shakespeare and some
do not. Same thing: we have to teach these kids this tool, this ability to handle data and
computing and so forth. It doesn’t mean they will all become computer scientists and AI
researchers and AI developers. But still, it became part of people’s lives. So, I really think
that it’s a technological kind of revolution that is very beautiful, very
normal, within the path of computing. I mean, we cannot go back and
tell Alan Turing, that’s it. Or von Neumann, no, don’t make this machine. We just cannot
go back. Or tell Herb Simon, don’t think that machines will do things like human problem
solving. No. It’s there. It’s there. It’s like electricity. Imagine saying, no, everyone now uses
candles.

Yeah.

No more light. At best gas, or even oil lamps, but no electricity. So, it’s kind of like,
people should appreciate it, in a sense, and try to make the best out of it to improve our
humanity. We are just in that beautiful beginning-of-the-21st-century
moment, but, you know, a hundred years from now it will look the way horses and early
airplanes look to us now.

[...] So, I’m curious about what safeguards you think are important in terms of ensuring
against harm with autonomous systems?

Okay, so, very good question. I really think that we have created a technology that,
because computing is such a pervasive kind of device, with all the off-the-shelf things
that we are producing, can actually be used by anyone for any use.
So, how can I say, it’s dangerous. But it’s as dangerous as making guns available to people,
or something of that level. So, I tell you something, I really think that we have to regulate, one way or
another, what becomes public and available. Look at this: the pharmaceutical
industry does not let just anyone come up with their own drug.

Right.

There’s a process before, you know, CVS can sell it. There’s a process for food, there’s a
process for drugs, there’s a process for quality control of cars. There’s a process for everything
that serves society, from a technology point of view or from a resource point of view, that is
eventually checked. It doesn’t mean that you cannot have an accident in a car, but it’s still the
case that it’s checked. So, I wish that we developed ways of checking, one way or another, the AI
products that are being made available. And it’s very
nice to have these types of discussions, because, you know, in the research
community there are people who are very much dedicated to studying this problem of verification
and proof and checking and quality control, all sorts of measures to try
to have the least probability of harm coming from these tools that AI can
produce. That’s really important to push forward. I think it’s a hard problem, because while, for
example, in pharmaceutical research you have the mice on which you test, you have the people
and so on, it’s more constrained in terms of the target. This AI seems to be
overwhelming, in the sense that it can apply to anything. To elections. It can apply to
actually influencing you anywhere. It can apply to decision support. So that’s what’s most
fearful to me: this unboundedness of AI. The fact.

That brings me to the transparency of processes.

Yes, exactly.

And because then perhaps that’s a safeguard to you.

And I do think that the only conclusion I have from all this is that we had better become a better
humanity. Just better humans. Because harm doesn’t come from the technology itself; harm comes
from the bad use of the technology that humans invent. And so, if we would just be able to have
people live happily, and accept their different cultures, and accept that the Earth needs to
be preserved, and accept that there is this enormous inequality that needs to be bridged.
I mean, there are so many things that we could invest our efforts in rather than using the
technology poorly. So, the power of this technology is, in some sense, a call to humanity. We invented
it; now are we going to be able to live with it? Well, that’s a challenge for us, and I’m very
confident that we can actually address this challenge in a good way.

I think that’s a lovely place to stop, so thank you.