So, Professor Jennings, welcome.

Could you just start by stating your name and your position
and title?

I’m Nick Jennings and I’m a Professor of Artificial Intelligence here at Imperial College London.

Excellent. And, you have an administrative role here as well?

I’m also the Vice Provost for Research and Enterprise.

Thank you. First question is kind of biographical, how did your interest or work in AI begin?

I got interested in AI actually as an undergraduate. I did computer science as an undergraduate and we got exposed to some introductory basic courses on AI, and I thought this is by far the most interesting area of computer science, and sort of I wanted to find out more, and then I went to do a PhD, and then sort of the rest of my academic career followed that broad trajectory. So, you know, for thirty years, I've been an AI researcher, through the good times and the bad times I would say.

Was your interest in AI, sounds like it was influenced by, by education, by the classes that you
took as an undergraduate, to start with, was it influenced by popular culture, such as science
fiction or television?

Probably less so actually. So, I'm not a great film buff or particularly interested in sci-fi, but, sort of, I have subsequently gone into that to see a bit of how AI is depicted, which is of course generally badly in most of these things, in the sense that it's often the bad guy in various movies. But I came at it as a computer scientist and for me it was the most interesting bit of computer science, which I was quite interested in at that point.

So, let’s dive a little bit into the part that interests you the most. Can you describe a little bit
about your past and present work in AI?

So, I’m particularly interested in AI systems and where there’s more than one actor. So, more
than one AI system interacting with another one. So, that’s the area of multi-agent systems, so
you have multiple agents that interact, and they might compete, they might cooperate, they
might coordinate with one another to get things done. And so, for most of my career I’ve been
interested in systems where all of the actors were computer software, but latterly, in the sort of
last five to ten years, I’ve been interested in systems where some of those actors are humans
as well, so I’m interested in systems where there are mixed teams that form, coordinate, do
stuff, and some of them are people and some of them are intelligent agents.

And when you said intelligent agents, I’m curious whether they can be physical instantiations,
like robots, or they could also just be software, or both?

For me, most of my career they've been just software, or they have been software. Latterly, in some of the work we've done, they've been hardware, so they've been unmanned aerial vehicles or drones, or they've been robots, or they've been sort of IoT devices. So, for me, they've been all of those things.

And is the kind of key AI intellectual challenge and research question similar whether they’re IoT
or drones or software?

Yes, at a certain level, I've been interested in the sort of the science and the engineering of
those things. So, I’ve built multi-agent systems in a whole variety of applications, you know,
from disaster response to supply-chains, to smart-grids, so, I’ve always tried to do a mix of both
theory and the practical application of it, so the theory bit I’ve particularly focused on is on the
interaction piece, so I’ve been less intrinsically interested in how an individual agent can
behave. I’m more interested in how the group can behave, so I’ve been interested in algorithms
and methods and incentives for coordination, cooperation, negotiation, those sorts of things.
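
To make that interaction piece concrete, here is a minimal sketch of contract-net-style task allocation, one classic coordination mechanism from the multi-agent literature. It is illustrative only, not code from Jennings's own systems; the agents, the cost model, and the task name are all hypothetical.

```python
# Minimal sketch of contract-net-style coordination: a manager announces
# a task, each agent bids its estimated cost, and the task is awarded to
# the lowest bidder. Agents, costs, and the task name are hypothetical.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skill: float  # 0..1; higher means better suited to the task

    def bid(self, task: str) -> float:
        # Toy cost estimate: inversely proportional to skill.
        # A real agent would plan and price the task properly.
        return 1.0 / self.skill

def allocate(task: str, agents: list[Agent]) -> Agent:
    # The manager collects bids and awards the task to the cheapest agent.
    return min(agents, key=lambda a: a.bid(task))

team = [Agent("a1", 0.9), Agent("a2", 0.4), Agent("a3", 0.7)]
print(allocate("survey-area-7", team).name)  # -> a1
```

The same announce-bid-award loop extends naturally to the mixed human-agent teams he mentions: a person can simply be another bidder.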

So, it's interesting, because your research is about communication and coordination between these agents, and my next set of questions is actually about communication.

Well there you go.

To publics. So, they’re kind of related. And one of the questions we’ve been asking is can you
describe an example you can think of, of a good communication or a useful way that AI has
been communicated to publics?

Good ways are harder to find actually. I see lots of bad examples of how AI is communicated to the public, in the sense that I see quite a lot of hype, sort of, almost science fiction-like reasoning about things. I think, sort of, the communication that I've enjoyed, and I think is good, is the bit that sort of demystifies AI, talks about AI being a here-and-now thing, so we see small bits of AI in a whole variety of areas, and sort of, I like that being grounded pragmatically, and then, sort of, with some of the vision of what might be. So, I think that's what good communication around AI should look like.

And as a researcher what do you think your responsibility is in terms of communicating to the
public about AI?

I think it is very much communicating along those sorts of lines. So, I’ve done quite a lot of
outreach work, I’ve spoken at the Royal Institution, I’ve spoken at schools, I’ve spoken at events
in pubs, and things like that. And actually, I think it’s important to engage on a topic like this,
which does capture the public imagination, so, some science is quite difficult to get the public
interested in. I think, sort of lots of the publics now get sort of AI and smart machines, but they
just don't really understand what's involved in it. And they sort of imagine that building a general AI, much like they see in all the movies, is what's going to happen. And then they're in
the realms of science fiction as to, okay, if you could build that, you could imagine AI doing this,
and you could imagine AI doing this and so on. And I just don’t, I think the communication is
about saying what’s hard and what’s difficult and why that is some distance away if, you know, if
it’s ever going to happen.

Are we training our students to be able to engage in that kind of outreach that you’re describing
that you do?

This is so much more a part of science these days. So, here at Imperial, we run our own science festival, a sort of weekend event with 20,000 visitors a year, and it's a staple part of what early career researchers do, in all aspects. And it's also part of standard PhD
training. So, for all areas, we really do push sort of communication and outreach activities, and I
think that’s a good thing.

The next set of questions I have are about labor. And the first one is can you talk about how you
would say AI systems have changed the way that people work up until now?

So, I see AI as mainly something that's going to work alongside people, so to augment and help
them. So maybe I’ll talk about a specific example of an application I was involved with. So, I
worked with a disaster response charity called Rescue Global. They're people who go to the site of a disaster, trying to be the first boots on the ground. So particularly we
worked with them around the Nepal earthquake. And we’ve been working with them for a while
developing the tech, and the sort of system that we developed was really a hybrid system that
used their innate human expertise and put alongside it some useful AI augmentation, so some
of the things that we were able to do was that we used AI algorithms to make sense of lots of
what was going on in social media, what was being said, what images were being processed,
and sort of, you know, at that mass volume which is what happens these days after a disaster.
And so the AI was able to sort of draw up a picture, an assessment of what the ground looked
like, so where were the most damaged areas, where were the areas of population that we were
most worried about, and then the humans were able to interrogate that and add what they knew.
So if they went to a particular place they’d say actually this isn’t quite as bad as lots of people
have been reporting and therefore you could sort of ripple that sort of information back. And
then when they’re planning the logistics of what response teams with what equipment and what
supplies go to which places in what order, actually computers are very good at that and so that’s
something they used to do by hand, and we developed some planning and optimization
algorithms that did that for them. And so, it gave them a first cut at this is the way you should send the teams, and in what order. And then they modified it, so it sort of was a first draft if you like, and actually they would say, well, you know, we know, because we have some extra information that the machine doesn't, that actually this should be done before this, or this should go with this, and so the plan evolved through that interaction with them. So,
you’ve got a smart computer algorithm and a smart human working together to do that better
response.
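
To make the planning step concrete, here is a minimal sketch of the kind of first-cut assignment described above: teams allocated to sites in order of assessed severity, with the human then free to modify the draft. This is purely illustrative, under assumed data; the actual Rescue Global system used much richer planning and optimization algorithms.

```python
# Illustrative first-draft logistics plan: assign response teams to
# damage sites greedily, most severe site first. Sites, severities, and
# team names are hypothetical; the real system was far more sophisticated.

sites = {"district_a": 0.9, "district_b": 0.4, "district_c": 0.7}  # severity, 0..1
teams = ["medical", "search_rescue", "logistics"]

def first_draft_plan(site_severity: dict, team_list: list) -> list:
    # Greedy: rank sites by assessed severity, pair with teams in order.
    ranked = sorted(site_severity, key=site_severity.get, reverse=True)
    return list(zip(ranked, team_list))

plan = first_draft_plan(sites, teams)
print(plan)  # [('district_a', 'medical'), ('district_c', 'search_rescue'), ...]

# Human override: responders with extra ground truth can reorder the
# draft, e.g. after a site visit shows district_a was over-reported.
plan[0], plan[1] = plan[1], plan[0]
print(plan)
```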

Interesting, your example doesn’t take disaster recovery and replace it with kind of a giant AI
solution, rather it takes bits of the logistic challenge of dealing with data or dealing with
constraints and suggests AI as a way of helping humans resolve those constraints.

Yes, and that's very much how I think AI is going to work in the majority of cases, so I think sort of AI will do some things that we currently do, and, you know, it did some in that example of Rescue Global; that's something that someone in the team used to have to do, and the computer actually does it better, some of those things, when it's modified with human input.
But actually, it lets the people on the ground focus on the things that humans are good at which
is talking to people, assessing the situation, figuring out what’s going on, negotiating for access
to this area or to that area.

So, looking forward a little bit, just the next couple of decades.

Just the next couple of decades?

Yes.

Short distance in time.

Fair enough, fair point. Do you think, when you look at how AI will change the way people work
in the near future, is it along the same lines, do you see any kind of trends that we should keep
our eye on?

So I think that this augmentation, so humans and AI working alongside and in partnership with
one another is going to be the dominant mode in which AI will come into our workplace and into
our lives more broadly. I think there will be activities where whole jobs are replaced by AI, and I think sort of, that's always what happens with new technology. And then of course there will be new jobs that AI will create that don't exist now, but out of those three I think the main impact, certainly in the medium term, is that augmentation and helping us with tasks.

Any tensions that you might expect in regards to human dignity as we introduce AI into labor
markets more broadly?

I think it's important that when AI is introduced, it's actually introduced in a way that is appropriate. I don't like some of the speculation that sort of humans will just do what the machine says, so I think that's a dignity question for me. When I talk about AI, I talk about partnership, and actually, you know, partnership is a word I thought quite hard about to include, and I mean it in its full sense of really working together, and sometimes the humans will take the lead and sometimes the machines will take the lead with particular tasks, but they're working together. I think, sort of, there are absolutely issues with people becoming subservient to machines and I think that's a very inappropriate use of computing in general, irrespective of whether it's AI or not.

So, partnership suggests more of a peer relationship than...

Yes.

Subservience.

Yeah.

And, I guess I have a similar question which is how might AI tools, even in partnership with
people, affect power relationships between people and other people?

I think sort of AI will give us actually the ability to democratize some services that if you’re
wealthy enough at the moment you’re able to get and pay for, so some of the bespoke financial
services, some of the bespoke sort of health activities that are, you know, if you’re of a certain
income, you can afford to pay someone to do that for you, whereas many people cannot,
actually I think that sort of AI will even that playing field, and will give lots more people, sort of, access to personalized and tailored services that fit them and their circumstances.

That’s interesting. It’s the idea that I guess the bespoke economy is open to the wealthy right
now. You have to have capital to get customized, high-touch services. And I think you’re
suggesting that AI can play that role at a lower cost point.

Yeah.

For more of us.

Yeah, so I think AI and technology, as we have more sensors around and on us, it's not just AI on its own, but I think about sort of sensed environments with Internet of Things devices, and I think absolutely AI will help make some of those things better and easier.

So, this is a question that's along those lines, but we're looking for either an existing example or a
conceptual system you can imagine, can you give me a specific example of an AI system that
could or does empower an individual user or a group of humans?

The kind of system that I envisage in this sort of democratization is an intelligent agent that's acting for us and on our behalf, so, you know, it's been called various things throughout time, sort of the personal digital assistant, or digital butler, those sorts of things, where actually the AI system knows, has an understanding of, what our preferences are, what things we like and what we don't like about particular things, and can bring those together; you know, it learns over time in the way that with a good assistant, or someone who you work closely with, you understand that relationship over time, and then it's able to do things for you. So I look forward to being able to delegate more tasks to a computer and an agent that's acting on my behalf, and I think that will save us an awful lot of time in terms of having to organize things and get particular things done. I think computers and AI and agents will be really good for that.
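
As a sketch of what such a preference-learning assistant might look like at its very simplest, here is a toy agent that nudges a stored preference toward each piece of feedback and then uses those scores when delegated a choice. Everything here, the class, the update rule, the categories, is hypothetical, just to make the "learns over time" idea concrete.

```python
# Hypothetical sketch of an agent learning a user's preferences from
# feedback over time, via a simple exponential-moving-average update.

class PreferenceAgent:
    def __init__(self, learning_rate: float = 0.2):
        self.scores: dict[str, float] = {}  # category -> learned preference
        self.lr = learning_rate

    def observe(self, category: str, liked: bool) -> None:
        # Nudge the stored preference toward the latest feedback.
        target = 1.0 if liked else 0.0
        old = self.scores.get(category, 0.5)  # start neutral
        self.scores[category] = old + self.lr * (target - old)

    def recommend(self, categories: list[str]) -> str:
        # Delegated choice: pick the category the user seems to prefer.
        return max(categories, key=lambda c: self.scores.get(c, 0.5))

agent = PreferenceAgent()
agent.observe("jazz", liked=True)
agent.observe("metal", liked=False)
print(agent.recommend(["jazz", "metal"]))  # -> jazz
```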

So, I'm going to ask an odd question, because technology's been really good at making humans more productive, but sometimes it causes us to spend more time on things.

Yeah.

But this is an example where the hope is actually that AI saves us time. I'm wondering what the distinction is there, why that might be different?

I think it would be interesting to figure out what we do with the time that the automated AI is helping us with. So, sort of, if you look at sort of various jobs that are being created, of course computers and digitalization have made things very much easier, and faster, but we haven't reduced, I don't think many people have reduced, their working week as a consequence of any of those things. They've just taken on more activity, and I think it would be interesting to see if that's what happens with this as well. So I see sort of AI absolutely helping us do some tasks, but I don't necessarily think that will just free us, give us more free time; I think we'll just do other things. And, you know, if you think about the Rescue Global example, actually what it did is it displaced their activity, so, you know, someone would spend quite a lot of time working out quite a detailed plan, and that would take several hours in some cases, and a machine does it really quickly. But they didn't work two hours less; they go and do something else with their two hours. And I fear, or suspect, that that's what will happen with a lot of AI, so I think it will do some things, and I think in general human beings are quite good, or bad, depending on your perspective here, at filling their time with other stuff.

There’s books about, you know, the idea of the leisure age.

Yeah.

That say actually AI will do our work for us and we’ll end up reading poetry.

I’ve seen those books, I’m just a bit skeptical whether actually that will come to pass.

So, you described an example of an AI system empowering an individual through your work. I’m
curious, can you give us an example of how an AI system might undermine human decision-making power?

I think it's about how these things are brought into applications and particular jobs, so, if they're brought in in an inappropriate way where, as I say, the human is in some sense subservient, or the human is just doing the bits that the AI can't do for itself, then I think that's a bad thing. I
think we will undoubtedly see examples where AI and digitalization and automation is brought in
in an inappropriate way and I think it goes back to that dignity piece of wanting to make sure
that actually it works for people when we do these things.

So, it sounds like undermining is a danger if we have the wrong sort of power relationship
between the AI and the humans that use it.

Yeah, I think we need to get that right. I think we need to make sure that sort of AI is, as far as any technology can be, good for those that are having to use it, and sort of it doesn't downgrade what people do in interacting with it; it upgrades and uplifts and supports, rather than sort of leaving them just the menial tasks that require a human.

Sounds like almost a design challenge.

I think it's important to get right. I think it's easy to get wrong if you don't think hard enough about these things, which means that I'm sure in some cases people will get it wrong and there will be, you know, there will be examples of people who feel they just do what a computer tells them, and as I say, I think that's inappropriate and bad.

Thank you. So, the next category of questions is actually about the idea of autonomy. And, so
this gets a little philosophical. But the first question is what do you perceive as valuable in the
idea of machine autonomy?

I've always worked with the notion of autonomous agents, and so, sort of, in a multi-agent system that I've worked in, the idea is that the agents represent different individuals or different stakeholders. And so, by definition, it's not a single computer system with some overarching structural organization in charge of it. So, for me, sort of the autonomy is a natural bit of the problem, because it might be me interacting with you, or it might be Company X interacting with Company Y, so autonomy is what you end up with as your default position for these sorts of systems. And so I think that's a natural representation of most problems. I think most problems are examples of multi-agent systems. I think a system where there's just one actor is the special case rather than the general case. So, from a more philosophical point of view, I think it works in the way that I think people, that we, should interact with one another, so I want to be able to give an agent or a person sort of a particular task to do, if that's what we agreed, and I'm not overly interested in the exact details of how they're going to go about achieving that. I think that's up to the discretion of the individual people who I might ask to do something for me, or it might be the discretion of a software agent that I've asked to do something for me.

It's interesting, your perspective on autonomy, coming from your multi-agent system research, seems to be fundamentally about relationships, about the idea that autonomy is there so that you can have a real relationship between different agents working together.

It's a realistic model of the world that it is representing. So, as I say, if I have an agent that's
representing me and is going out to represent me then it acts autonomously with respect to the
other agents that it’s interacting with. I’d say it is innate in the structure of the modeling and
representation of the problem.

So, then on balance, when you're designing autonomy in machines, what are the potential
pitfalls? What do you have to watch out for?

I think the things you need to watch out for are that the means by which tasks are achieved are something that you're broadly happy with. I mean, you want to make sure that in general, if you ask someone to do something for you, you have a sort of broad envelope of expectations as to what they might do to make that so. And I think sort of what we need to be careful of when we build agents is that they don't just take extreme actions in order to achieve something that you'd look at and think, well, you know, it wasn't worth all of that pain and effort and destruction to get to that point; even though I did ask you, and you achieved the goal, the way you achieved it is just so implausible and beyond the realms that it wasn't the right thing to do. So, I think it's about having a degree of confidence in the types of activity that go on within it, and the types of things that will be done as a consequence of giving an agent a goal, without actually having to use a long screwdriver in terms of individual actions.

So, there’s an interesting, important tension there between giving the machine autonomy but
constraining the domain in which it acts autonomously.

Yeah, I think it's important that sort of we are able to give some broad parameters on how machines should go about achieving their particular tasks, in the way that we do for humans.

Stewart Russell gave me an example where he said, “If I tell an autonomous machine to buy me
coffee, then,” I think he’s used this with world leaders too, “It might cross the street and kill all
the people to get to the coffee machine faster.”

Yeah.

And so.

Which would be unfortunate. And actually, if there are no ways to get the coffee without doing a whole lot of bad things, then it should just report back and say, actually, I can't do that. And I think that understanding of the broad parameters of how agents will achieve their goals, and the limitations on how they should act, are important that we get right, and I think sort of being able to build that into our computer systems is a real challenge at the moment.
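
Here is a minimal sketch of that "report back" behaviour, under assumed plan representations (nothing here comes from a real system): the agent filters candidate plans against hard behavioural constraints and declines the goal if no acceptable plan remains, rather than taking an extreme action.

```python
# Illustrative sketch: an agent only executes plans that stay inside
# broad behavioural constraints, and reports back if none qualify.
# The plans, costs, and "harms_people" flag are all hypothetical.

candidate_plans = [
    {"name": "cross_road_through_crowd", "harms_people": True,  "cost": 1},
    {"name": "wait_for_crossing",        "harms_people": False, "cost": 3},
]

def acceptable(plan: dict) -> bool:
    # Hard constraint: never achieve the goal by harming people.
    return not plan["harms_people"]

def choose_plan(plans: list) -> dict | None:
    safe = [p for p in plans if acceptable(p)]
    if not safe:
        return None  # report back "I can't do that" instead of acting
    return min(safe, key=lambda p: p["cost"])

plan = choose_plan(candidate_plans)
print(plan["name"] if plan else "cannot achieve goal within constraints")
# -> wait_for_crossing (the cheaper plan is rejected by the constraint)
```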

And it seems like I guess we need that to be baked in before we release these things into the
wild.

And I think sort of before we let autonomous vehicles drive around our roads, it’s important we
have confidence that they will do certain things and they won’t do certain other things. I think if
we’re talking about sort of AI recommending some movies for you then, you know, that’s a fine
thing to do because the range of possible bad things it can do is not very high in that regard. So,
I think it’s proportional and I think it depends on what we’re after.

So, to talk about the driving example because that’s interesting because that’s in the news and
harm has been done by these machines. How should we be thinking about responsibility,
ascription of responsibility when these autonomous systems that sometimes even learn and
evolve over time cause harm to the world?

So, I think autonomous vehicles are interesting in that sense. So, my take is that they will
massively make our roads safer. So, if you look at the number of people who are killed on our roads all over the world, you know, it's an extraordinarily large number. I think autonomous vehicles have tremendous potential to reduce that. I don't think they will ever reduce it to zero. I think there will be deaths on roads caused by autonomous vehicles, and that's very unfortunate. So is the massive number that currently get killed on our roads. So, I think we shouldn't expect perfection on these things, because the world is a very imperfect place and autonomous vehicles will sometimes make mistakes and will sometimes get it wrong, and sometimes it will be because it's a circumstance they haven't encountered, and sometimes it will just be, you know, one of those things that nothing could be done about.

So, EU legislation has been considering the possibility of personhood for robots. Legal personhood. Because of this question of ascription of [missed @ 7:49 2nd video] of guilt or responsibility for harm. What do you think of this idea, that if we give the autonomous car some form of legal personhood then it's a target for liability, so we have a place to put our liability insurance?

Yeah, I'm a bit skeptical about that sort of line of work. I think if you take autonomous vehicles, I think that liability comes with the owner, or comes with the manufacturer of the vehicle, so, you know, if it's an Audi that kills someone, then you have a discussion and Audi's the one who is defending it. Assuming that it's all been kept up to date and patched, etc., etc. So, assuming that the user, that the owner, has not done anything inappropriate or, sort of, you know, hacked the code or whatever might have happened. You know, if it's been kept under the normal operating conditions, I think the car manufacturers in that example need to be the responsible ones. I don't know what it would mean to take an autonomous robot entity to court. I don't know what sanctions it would face, or how you would operate that. I think it has to be with the people who are bringing the service or the entity to the market.

We have one other question that we’ve been asking everybody which is about what’s often in
the news, which is the idea of artificial general intelligence. So, we’re curious what your
thoughts are about the development of that?

So, I’m skeptical that this will happen anytime soon. Or even ever. So, I think we see lots of
great examples of narrow artificial intelligence or being good at specific tasks. I don’t see any
evidence yet that we’re moving towards general intelligence, and I think going from the specific
of being good at a narrowly defined task that’s clear and has clearly observable parameters to
functioning in a general artificial intelligence way is decades away.

So, you see, you know, deep neural nets that do things that we thought were decades away,
like Go.

Go is still a narrowly defined task, so, it’s an amazingly complicated narrowly defined task, but
you know, that’s, it’s good at one thing, you know, Go is a game, it’s got a clear reward function,
you know when you’re doing well. You know the rules of Go, and you know how things are
arranged. It’s a very ordered environment. It’s very different to sort of driving autonomous
vehicles on roads which is a very much more unconstrained environment. So, we do see
amazing breakthroughs, but we see breakthroughs on narrow tasks.

So, I'll go back to the driving one one last time, because I'm curious. We talked about this idea that you want this car to do the right thing, to save lives, to save us on statistics, but you also want to be able to constrain it to some degree, so it acts reasonably under all conditions. But the conditions on a road are kind of social, right, you're dealing with pedestrians and children and dogs. How hard is that, to make a car be reasonable in a social sense? Because that's not, you know, a control feedback loop, that's something much more complex.

It is a major challenge, so getting any complex bit of software that’s interacting with an
unpredictable world to know what it will do and what it won’t do is a significant challenge. Then
you add learning and adaptation on top of that. Then, you know, that’s a whole extra layer of
complexity, and so we really do need basic research into methods to be able to develop and
specify and build these sorts of systems. I’m not convinced that we have all the requisite tools in
our toolbox to be able to do that at the moment.

Sounds like you believe we're at the beginning stages of figuring this out and kind of deploying the systems.

Yeah, absolutely, so, there are sort of relatively simple tasks that can be automated, that we can use AI for today. I think sort of that much more general, in-the-wild use for bigger tasks is something that's a decent way away, and we don't know how to do some of that software engineering, how to build computer systems that actually will behave as we would like them to in most circumstances.

Well thank you for all your thoughts. Is there anything you’d like to talk about that we haven’t
covered about AI and its impact on society?

No, the main thing I usually talk about when I talk about these things is that partnership of humans and
AI. So, I’m happy. Your questions were good enough to draw out what I wanted to say in that
regard.

Excellent. Well thank you very much.

That’s okay. It was a pleasure.
