
Managerial Ethics – Take Home Exam #1

New Frontiers in Artificial Intelligence


Index

Why Artificial Intelligence?
AOL1 – The Eleven System Laws
AOL2 – Two New Laws
AOL3 – Ten Archetypes
Synthesis of the System Laws
Synthesis of the Archetypes

1. Read, update, chronologize and synthesize all relevant materials from Google, Newspapers,
Business Magazines and Social Media on the market phenomenon chosen by your group; document
and reference your data-updates by source, date, page, and the like. Explain why you chose this
current market phenomenon for your analysis. Define also your “unit of analysis” of the market
phenomenon: for instance, with respect to the GST Bill, is it the whole GST Bill, or any specific
component (e.g., Central versus State, the legislative process versus its planned enforcement,
specific action or actor, and so on) that you choose to investigate. Why did you choose this unit of
analysis? Justify your choice. [10 marks].

These days there is hardly any newspaper or magazine, whether or not related to science and technology, that has not touched upon the topic of Artificial Intelligence. Be it self-driving smart cars (unmanned ground vehicles or autonomous vehicles), robots carrying out surgeries, homes that adjust temperature and lighting to the owner's mood, systems that judge whether a suspect is innocent, or software that predicts market trends, AI has become the talk of the town, and everything in this world is becoming automated. There is, in fact, far more such technology around us than the average person can take cognizance of, in the form of machines that either help humans make decisions or decide on their behalf. For example, online firms carry out targeted marketing with the help of software that finds trends in consumer preferences and interests, based on data collected from each customer's past activities.

AI has already made huge strides, for instance in mobile computing. Public and private sector investments in basic and applied AI R&D have already delivered major benefits to the public in fields such as health care, transportation, the environment, justice, and economics, to name a few. These advances reflect Man's long-held dream, since the advent of electronic computing systems, of building computers and machines with human-like intelligence. The term "Artificial Intelligence"[1] was coined only in 1956, but the idea had resonated with many before that. In his famous 1950 paper "Computing Machinery and Intelligence", Alan Turing posed the question "Can machines think?", proposed a method for answering it, and suggested that a machine might be programmed to learn from experience, much as a young child does. Before him, in 1942, Isaac Asimov, in his short story "Runaround", had written about autonomous robots and defined three laws for them to follow:[2]

 A robot may not injure a human being or, through inaction, allow a human being to come
to harm.
 A robot must obey the orders given it by human beings except where such orders would
conflict with the First Law.
 A robot must protect its own existence as long as such protection does not conflict with the
First or Second Laws.
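
These three laws form a strict priority ordering, which can be mirrored in code. Below is a minimal, purely illustrative Python sketch, assuming our own invented Action fields and selection rule (this is not any real robotics framework), of choosing among candidate actions under that ordering.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # First Law concern
    obeys_order: bool     # Second Law concern
    preserves_self: bool  # Third Law concern

def choose(actions: list[Action]) -> Action:
    # Lexicographic priority: avoiding human harm outranks obedience,
    # which outranks self-preservation (False sorts before True).
    return min(actions, key=lambda a: (a.harms_human,
                                       not a.obeys_order,
                                       not a.preserves_self))

options = [
    Action("follow the order into the fire", harms_human=False,
           obeys_order=True, preserves_self=False),
    Action("refuse the order and stay safe", harms_human=False,
           obeys_order=False, preserves_self=True),
]
print(choose(options).name)  # Second Law outranks Third: the robot obeys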

AI development has had its share of fluctuations, with tangible progress not seen until the 1990s, when AI began to be applied to real-world problems such as image recognition and medical diagnostics.

____________________________________________________________________________________
References:
[1] Warren S. McCulloch and Walter H. Pitts, "A Logical Calculus of the Ideas Immanent in Nervous
Activity," Bulletin of Mathematical Biophysics, 5:115-133, 1943.
[2] Asimov, Isaac (1950). "Runaround". I, Robot (hardcover) (The Isaac Asimov Collection ed.).

In 1997, IBM's chess-playing computer "Deep Blue" defeated Grandmaster Garry Kasparov. Thereafter there was no looking back: improved learning techniques, dedicated teams developing complex algorithms, and better hardware to support them led to systems such as Siri, to software helping and even replacing market pundits, and to technologies such as face and retina recognition becoming a reality.

"Machine learning is a core, transformative way by which we're rethinking how we're doing everything.
We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we're in
early days, but you will see us — in a systematic way — apply machine learning in all these areas." said,
Sundar Pichai, CEO, Google.3, reemphasizing the fact that future is nothing but AI driven.

The Laws by Asimov may be seen as a subtle attempt to draw an ethical baseline for the new technology, i.e. AI-enabled machines and robots, but they are exactly what we will need in the future to define the ethical boundaries of these machines. The scope of AI needs to be well defined and structured in order to develop a technology that does not lead to a world that was never meant to be. AI should essentially be grounded in humanity. The traits of humanity, though infinite, need to be grouped broadly into a few categories, such as Good and Bad, Logical and Illogical, Cultural and Wild, and so on. AI should not only be equivalent to the human mind but should also comply with social norms, behavioral and traditional values, professional codes, laws, and the like. These rules need to be woven into the internal structure of AI well before further enhancements are built on top. The entire AI community should come together to be clear on what we envisage for our future and what we plan to achieve through AI. The ideal AI should be an amalgamation and manifestation of the positive and desirable aspects of human nature: altruistic, philanthropic, commonsensical, and directed towards the greater good of society as a whole. Efforts should be directed towards finding cures for incurable diseases, building better medical devices, and eliminating poverty, corruption, unemployment, and armed conflict.

Another ethical challenge faced by the AI community lies in the data collection needed to build such a system. To define human nature, many simulations must be run in order to encode the best outcomes of humans reacting to external situations; this data is not only cumbersome to collect but also easy to tamper with, and the creator may knowingly or unknowingly create a destructor instead of a facilitator.
The future of the world will depend on decisions made today by people keen on developing a technology that may evolve faster than humans, making it difficult or even impossible for the human mind to comprehend the activities of its own creation, as they may lie beyond the realm of human capacity. Researchers at Facebook Artificial Intelligence Research built a chatbot earlier this year that was meant to learn how to negotiate by mimicking human trading and bartering.

But when the social network paired two of the programs, nicknamed Alice and Bob, to trade against each other, they started to develop their own bizarre form of communication. Eventually, Facebook had to shut down the system.[4]

_____________________________________________________________________________________
References:
[3] Steven Levy, "How Google is Remaking Itself as a Machine Learning First Company," Backchannel,
June 22, 2016, https://backchannel.com/how-google-is-remaking-itself-as-a-machine-learning-first-
company-ada63defcb70.
[4] Matthew Field. "Facebook shuts down robots after they invent their own language" The Telegraph, 1st
August 2017, http://www.telegraph.co.uk/technology/2017/08/01/facebook-shuts-robots-invent-language/

The system of AI may be broadly divided into three basic units:
1. The input mechanism: the chain of actions required to start the process. For example, a team of engineers and scientists works together to define the algorithms and code that go into the system. They define the baselines and scope of the project, choose the data sources (which may infringe on the privacy of the people involved), and are in a position to alter those sources (for good or bad) and work on them (improving or degrading them).
2. The learning stage: once built, the AI system will have its own learning process, grounded in its coding, enhanced through computation and iteration, and continuously fed by what the system picks up from its environment. While the basics are defined by humans, learning and application will depend on the new system itself, which may or may not learn the way a human does, or comprehend and react in the best possible manner. This stage will raise further ethical dilemmas that will be challenging and difficult to resolve.
3. The usage stage: here the implementor faces the dilemma of whether or not to replace a human worker with a machine, and the evaluating manager faces the dilemma of where the machines should be deployed. Given their varied and diverse configurations, machines will have the potential to replace humans almost everywhere. This stage covers the decision-making process and what man and machine decide about the customer.

The technology and industry, currently in their formative stages (1 and 2), require the most attention and have therefore been chosen as the units of analysis for further discussion. These stages will build the foundation of the systems of the future.

2. Apply the Eleven Laws of Systems Thinking (see Chapter 2) as Assurance of Learning (AOL1)
that studies any market phenomenon. Explain each Law and its potential for explaining and
predicting the behavior of the phenomena you have chosen for investigation. Illustrate the
application of each Law by past, current or projected examples. [30 marks]

Law 1: "Today's Problems come from Yesterday's Solutions."

AI may be the best example of the above Law, and may manifest as the ultimate "Problem". Everyone has speculated about the effects of computers becoming more powerful and intelligent than humans. There are forecasts that an AI could be tasked with, or autonomously take up, the development of higher intelligence; at some new level this may lead to an intelligence explosion, the so-called "Technological Singularity", where machine intelligence grows at an exponential rate and may bring adverse changes to human civilization.
Looking at the present scenario, the AI systems used at various stages of life are susceptible to attack, and many unethical actors have exploited them to hold bigger organizations hostage and extract huge ransoms.
There is also a continuous threat to the privacy of individuals who, without adequate knowledge, may become targets of unethical actions by those with the power to control AI.
But if everything is carefully monitored, a future of fathomless good awaits humankind.

Law 2: "The harder you push, the harder the system pushes back."


AI is being advertised as the solution to many human and human-resource problems, promising a better, healthier, longer life in which humans work less and earn more. But with a more relaxed lifestyle, humans may become more obese and lazy, and thus unhealthier. The population, already increasing, may reach alarming levels, and there may not be sufficient jobs for all once they are taken up by machines, leading humanity to buckle and grow ever more stressed. People will come back to question the basics and seek rationalizations for such actions.
The hope is that the creators of the future will envisage these scenarios well in time to prevent any such "push" from hitting humanity for the worse.
Also, when humans are replaced by machines, there will certainly be protests from the people being replaced, who will push back against these changes on a large scale and resist their implementation.

Law 3: "Behavior grows better before it grows worse."

In the future, AI will be directed at eliminating human intervention in tasks that suffer from it, such as driving cars, calculating complex equations, and carrying out boring, repetitive jobs. These will initially help humans achieve their targets, whether selling a few more products via advertising, predicting market trends, or finding solutions to diseases. But in the long run, the ever-growing knowledge of AI systems may make them the masters and humans the servants, in the sense that humans will depend on the machines for their needs and feel handicapped without them.
The symptoms can already be seen: humans have become addicted to the internet and mobile devices, which occupy their minds and, instead of saving time, consume most of it.

Law 4: "The easy way out usually leads back in"

In today's competitive environment, where technology becomes obsolete every day, it is imperative for technology to advance at a very high pace in order to meet the numbers for growth, profits, market capitalization, and the like. In this rush, it is very likely that the people developing AI systems, under pressure to deliver, take shortcuts and quick fixes, handling things without carefully and holistically evaluating the long-run impact of these developments on society.
For example, one of the biggest challenges to AI today is Big Data: since the system needs a large volume of data to learn from, the quality of that data becomes an even bigger challenge than its quantity. Biased and unfiltered inputs, such as racial profiling in the judiciary, may create a system that is prejudiced against a particular race.
Thus, a lack of due diligence at this stage of AI development may lead to catastrophic damage in the future. Although a few governments are taking steps to build laws around AI, not enough can be achieved until corporates and organizations recognize their roles and responsibilities and embrace them wholly before moving ahead.
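
To make the data-quality point concrete, here is a toy Python sketch, with all numbers and group labels invented for illustration, showing how a system that simply estimates risk from biased historical records reproduces that bias in its predictions.

from collections import defaultdict

# (group, reoffended) records; group B is over-policed in this invented
# history, so its recorded re-offence rate is inflated.
training_data = ([("A", 0)] * 80 + [("A", 1)] * 20 +
                 [("B", 0)] * 40 + [("B", 1)] * 60)

counts = defaultdict(lambda: [0, 0])  # group -> [total records, positives]
for group, label in training_data:
    counts[group][0] += 1
    counts[group][1] += label

def predicted_risk(group: str) -> float:
    total, positives = counts[group]
    return positives / total

for g in ("A", "B"):
    # Prints 0.2 for A and 0.6 for B: the model faithfully reproduces
    # whatever bias was baked into its inputs.
    print(g, predicted_risk(g))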

Law 5: The cure can be worse than the disease

Interest in Artificial Intelligence within the military, security agencies, and major corporations is growing. One implication of a potential intelligence explosion is a first-mover advantage, and the magnitude of that advantage makes it likely that any emergent AI would be pushed towards self-improvement. Powerful corporations like Google, Facebook, and Amazon have made AI central to their business models. Never before has a field of innovation been positioned so close to the competitive advantage between corporations or between nations. This might eventually lead to progress towards Artificial General Intelligence: a machine that matches or even exceeds human-level intelligence. Such a thinking machine would continue to enjoy all the advantages that computers currently have, including the ability to calculate and access information at speeds incomprehensible to humans. Eventually, we would share the planet with a superior intellectual entity.
It is generally accepted by AI researchers that such a system would eventually be driven to direct its intelligence inward. It would focus its efforts on improving its own design, rewriting its software or perhaps using evolutionary techniques to optimize enhancements to its design, leading to an iterative process of improvement. With each revision, the system would become smarter and more capable. If such an intelligence explosion were to occur, it would have huge implications for humanity. Even if we dismiss the existential risks associated with AI and assume that any future thinking machines will be friendly, there will still be a staggering impact on the job market and economy. In a world where affordable machines can match and likely exceed the capabilities of even the smartest humans, it becomes very difficult to imagine who exactly would be left with a job. Rather than primarily being a threat to relatively routine, repetitive, or predictable tasks, machines would now be able to do nearly everything. The majority would then be unable to derive income from work, and consumers would lack sufficient income to purchase the output created by smart machines, disrupting global economies.

Law 6: Faster is slower

Growth in the financial sector is also strongly correlated with advances in information technology and the exponential increase in computing power. With powerful computers, automated trading algorithms are now responsible for nearly two-thirds of stock market trades. Wall Street firms have built huge computing facilities close to the exchanges in order to gain trading advantages measured in tiny fractions of a second; the average time to execute a trade fell from about 10 seconds in 2005 to just 0.0008 seconds in 2012. The flash crash of May 2010, in which the Dow Jones Industrial Average plunged nearly a thousand points, brings out the dangers of racing ahead too fast. The carnage, which unfolded at a speed never before witnessed, did not last long; the market rapidly regained its composure and eventually closed about 3% lower. Waddell and Reed's mutual fund had used an automated algorithmic trading strategy to sell contracts known as e-minis. It was the largest change in the daily position of any investor that year, and it sparked selling by other traders, including high-frequency traders.
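
The self-reinforcing character of such a crash can be shown with a toy simulation. In the Python sketch below, the trigger prices and price impacts are invented and no real market microstructure is modelled; the point is only that one large automated sell order can set off a cascade of stop-loss-style algorithmic selling.

price = 100.0
triggers = [99.0, 98.0, 97.0, 96.0, 95.0]  # each algo dumps below its trigger
fired = [False] * len(triggers)

price -= 1.5  # one large automated sell order moves the price

for step in range(10):
    new_sellers = 0
    for i, trigger in enumerate(triggers):
        if not fired[i] and price < trigger:
            fired[i] = True
            price -= 1.2  # forced selling pushes the price down further
            new_sellers += 1
    print(f"step {step}: price = {price:.2f}, new sellers = {new_sellers}")
    if new_sellers == 0:  # the cascade has burnt itself out
        break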

Law 7: Cause and effect are not closely related in space and time

There is a lot of hype and hope around self-driving cars. They carry the promise of increased productivity, safety, and privacy, yet their consequences reach far beyond the automotive industry. Self-driving cars will save lives and the costs related to traffic accidents: 1.25 million people die in traffic accidents every year, and self-driving cars are estimated to reduce that number by 90%. If the average commuting time of US citizens were put to productive use, gains could reach $507 billion annually. Connected cars communicating with each other will reduce congestion and speed up travel times. Self-driving cars are also better drivers, so huge fuel savings are expected. This is especially important as sustainability is only going to become an even more important imperative moving forward. But
everything has a price, and one of the prices to be paid for self-driving vehicles is that everyone who
drives for a living could be out of a job. In the U.S. that could impact as many as 4 million drivers, and
about $148 billion in annual wages, with truck drivers representing half of that. The ripple effect
continues as you look at how adjacent industries will also be impacted by self-driving vehicles. The
automotive industry ecosystem which generates $2 trillion of annual revenue in the U.S will be impacted.
Fewer human truckers on the road mean fewer motel stays and rest stop visits, and cheaper trucking could
take business away from freight trains or even oil pipelines. Vehicles programmed to obey traffic laws
won’t need nearly as much policing, which also means fewer traffic tickets and less revenue for
municipalities.

Law 8: Small Changes can produce big Results – but the areas of highest Leverage are often the least obvious.

Free internet courses like those offered by edX are part of the trend toward massive open online courses, or MOOCs. As MOOCs continue to evolve and improve, the hope that they will drive a global revolution bringing high-quality education to millions of the world's poor may be realized. If potential employers come to see MOOCs as offering a valuable credential, they could eventually unleash a dramatic disruption of the entire higher-education sector. New educational technologies are also emerging and will increasingly be incorporated into MOOCs. Adaptive learning systems provide what amounts to a robotic tutor. These systems closely follow the progress of individual students and offer personalized instruction and assistance, adjusting the pace of learning to match each student's capabilities. If these eventually lead to university degrees, such degrees may become less expensive and more accessible to many students, while at the same time devastating an industry that is itself a major employer of highly educated workers.

Law 9: You can have your cake and eat it too – but not at once

It is commonly believed that an increase in automation is directly correlated with a loss of employment alongside a gain in productivity. Within the manufacturing sector in developed countries, the introduction of sophisticated labor-saving innovations is in fact having a mixed impact on employment. Increased productivity due to automation, say at a textile company, drives increased employment at suppliers, for instance among the truck drivers hauling raw materials. Alternative jobs are created where maintenance and programming of robots are required. This trend also makes manufacturing companies more competitive with low-wage countries. Indeed, there is now a significant reshoring trend underway in developed countries, driven by the availability of new technology and rising offshore labor costs in developing countries. In April 2012, the Boston Consulting Group surveyed American manufacturing executives and found that nearly half of the companies with sales exceeding $10 billion were either actively pursuing or considering bringing factories back to the United States. Factory reshoring dramatically decreases transportation costs, and locating factories in close proximity to consumer markets and product-design centers allows companies to cut production lead times and be far more responsive to their customers. As automation becomes more flexible and sophisticated, it is likely that manufacturers will trend towards offering more customizable products.

Law 10: Dividing an Elephant in half does not produce two Elephants

As AI continues to grow, we must all develop skills that differentiate us and keep us relevant. Based on one estimate, AI could potentially automate most jobs that are sequential, computational, logical, analytical, or fact- and rule-based. Yet there are certain tasks that AI cannot replicate, chiefly creativity, empathy, holistic thinking, and visualization. This is the area where humans can play an important role by complementing AI. In medical parlance, the left side of the human brain is responsible for sequential and rule-based tasks, while the right brain is responsible for creativity, holistic thinking, and visualization. In simple terms, while AI can automate tasks performed by the left brain, we should all make a conscious effort to increase the productivity of the right brain while reinforcing our left-brain capabilities. Adopting the design-thinking methodology, which brings empathy, holistic thinking, and visualization to problem-solving and adds a human dimension, helps find desirable solutions through a human-centered approach. Any organization that embraces AI should also imbibe design thinking into its DNA to stay relevant and competitive in an AI world. It is very important to step into the heart and shoes of the client when designing client-centric solutions. The effort freed up by automating routine tasks should be spent enhancing the customer experience.

Because of AI, we can now process huge amounts of customer data. We can make data work for us by applying analytics and continuous refinement to anticipate and predict what the customer wants and needs, and to personalize that interaction. Channels can be used in a way that is conducive to intent-driven engagement. Under the hood, intent-driven engagement uses a variety of data and applies artificial intelligence techniques to better understand the user, predict what the user wants to do, resolve the issue, and then learn from that contact to continuously optimize future interactions.

Law 11: There is no blame

As AI becomes increasingly interwoven into our lives—fueling our experiences at home, work, and even
on the road—it is imperative that we question how and why our machines do what they do. Although
most AI operates in a "black box" in which its decision-making process is hidden—transparency in AI is
essential to building trust in our systems. But that transparency is not all we want: We also need to ensure
that AI decision-making is unbiased, in order to fully trust its abilities. When an AI system fails at its
assigned task, who takes the blame? The programmers? The end-users? Modern bureaucrats often take
refuge in established procedures that distribute responsibility so widely that no one person can be
identified to blame for the catastrophes that result. The provably disinterested judgment of an expert
system could turn out to be an even better refuge. Even if an AI system is designed with a user override,
one must consider the career incentive of a bureaucrat who will be personally blamed if the override goes
wrong, and who would much prefer to blame the AI for any difficult decision with a negative outcome.
Responsibility, transparency, auditability, incorruptibility, predictability, and a tendency to not make
innocent victims scream with helpless frustration: all criteria that apply to humans performing social
functions; all criteria that must be considered in an algorithm intended to replace human judgment of
social functions; all criteria that may not appear in a journal of machine learning considering how an
algorithm scales up to more computers. This list of criteria is by no means exhaustive, but it serves as a
small sample of what an increasingly computerized society should be thinking about. On the regulatory
side, development of rigorous safety standards and establishing safety certification processes will be
absolutely essential. But designing and operating a suitable framework of institutions and processes will be tricky. AI-expert input will be needed in establishing any framework because of the complexity of the area and the general lack of understanding outside the AI R&D community. This also means that advisory committees to legislatures and governments should be established as soon as possible. Acknowledging that there are potentially massive benefits to AI, there will be an ongoing balancing act to create, update and enforce standards and processes that maximise public welfare and safety without stifling innovation or creating unnecessary compliance burdens. Any framework developed will also have to be flexible enough to take account of both local considerations (the extent of own production versus import of AI technology in each country) and global considerations (possible mutual recognition of safety standards and certification between countries, the need to comply with any future international treaties or conventions, etc.). So as we travel down the AI R&D path, we really need to start shaping the rules surrounding AI, perhaps before it's too late.

___________________________________________________________________________________
References:
1. Martin Ford, Rise of the Robots
2. https://hpmegatrends.com/the-butterfly-effect-of-self-driving-cars-b7e1af79b152
3. https://www.mycustomer.com/community/blogs/scott-horn/creating-holistic-experiences-with-ai
4. https://intelligence.org/files/EthicsofAI.pdf

3. As Assurance of Learning (AOL2) that studies any market phenomenon, create at least two Laws
of your own similar to the Eleven System Laws for explaining and predicting market behavior that
is not covered by the Eleven Laws. Explain each new Law and its potential for explaining and
predicting the behavior of the phenomena you have chosen for investigation. Illustrate the
application of each Law by past, current or projected examples. [10 marks]

Law 1: One often meets his destiny on the road he takes to avoid it

By being cautious in our actions and taking measures to place each step in life safely, we often tend to disregard the fact that our actions ultimately lead us to the destiny we are destined for. Inevitably, however hard one tries to tread a path that avoids any mishap, one finds the most important crossroads there.
Taking an example from the subject of Artificial Intelligence under discussion, let us consider (hypothetically) that the most cautious humans used technology within their best-known safe limits and, voilà, considered themselves completely untouched by any data breach. Alas, by the time they realized otherwise, their data had already been copied to servers in some part of the world. The obvious question that follows is: how? And the very obvious answer is: how not? From parking our vehicles in public spaces to our presence in office spaces and our visits to malls, we tend to disregard the fact that CCTV cameras surround us, capturing our faces a zillion times. As a result, the same face detection that is a safety feature for iPhones becomes a threat to security at large, as more and more databases offering "face detection" security become vulnerable to being hacked.
What we saw in films such as Star Trek, of people conversing with computers, will soon be a reality. What is essential is that we neither avoid nor be paralyzed by the threats AI poses to us, taking a path of avoidance; rather, we should find ways to sensibly pave our path through them. Can the risks related to AI be completely eliminated? The answer is NO. However, the best shot at a safeguard is to regulate AI itself, through the development of testing protocols for the design of AI algorithms, improved cybersecurity protection, and input-validation standards. For ultimately, machines were made for men, and definitely NOT vice versa.

Law 2: You have a hundred attempts to try, only one to succeed

In our attempts at a task, we meet with either success or failure. Successes are laurels, while failures can teach us many things and frustrate the best minds at the same time. Our shot at success is only one, and with our specially endowed cognitive abilities of thinking and reasoning, we ought to prepare ourselves to reason about the way we look at tasks. Aligning our abilities to these tasks will certainly help us achieve the desired results. However, as we clinch that one shot at fame, let us be mindful of WHAT it is that we intend to pursue. Is it in line with what we intend to leave behind? Would it be of any worth to the generations to come? Our successes ought not to come as threats to future successes.
In light of the ongoing topic of AI, this path-breaking technology is making massive inroads with time. From voice programs (Vocal IQ from Apple) to face recognition in smartphones, the pace at which AI has entered our lives is phenomenal. However, while we see these technological breakthroughs flooding our way, the question we need to pose is: are we really putting in thoughtful consideration before deploying them? Have we tested them enough for them NOT to pose threats to our livelihood? Are we making our lives too vulnerable by introducing them? The rise of "evolutionary intelligence agents", that is, self-evolving computers, is widely envisaged in the near future. However, the pace and number of breakthroughs in AI need to have a serious conversation with ethics, legality, and real-life disruption. Even the best of AIs have misjudged a pattern of alternating yellow and black panels as a school bus, as reported in a Wired article in 2015.
A systematic approach to the questions above is to first frame the question that requires an AI to be built, and then observe the AI's behavior in simplified settings. Posing appropriate questions to technologists and policy-framers can lead to a more robust and responsible release of AI solutions, sustainable and beneficial for the greater populace. We may attempt to revolutionize processes through AI, but ultimately it is what is justifiable and ethical that needs to be embraced. Hence, the ultimate shot at success with AI should be the best one!

4. Apply now the Ten Archetypes of Systems Thinking (see Chapter 2) as Assurance of Learning
(AOL3) that studies any market phenomenon. Explain each Archetype and its potential for
explaining and predicting, past or expected structures of market behavior of the same phenomenon
under investigation. Illustrate the application of each Archetype by past, current or projected
examples. [30 marks]

Archetype 1: Limits to Growth

The first seeds of Artificial Intelligence were sown in the 19th century, when Charles Babbage, a world-famous scientist, envisioned the building of a thinking machine capable of general computing. An extension of his earlier Difference Engine, a device for automating mathematical calculations, the Analytical Engine never saw the light of day owing to a refusal of funding by the British Association for the Advancement of Science. A century later, British mathematician and code-breaker Alan Turing set the pace for research on Artificial Intelligence through his seminal paper "Computing Machinery and Intelligence", in which he mused, "I propose to consider the question 'Can machines think?'" In 1955, John McCarthy coined the phrase "Artificial Intelligence", and by 1956 Artificial Intelligence was officially recognized as an independent field of research at Dartmouth.
Right from the outset, the field was hyped and victory, elusive. The research program at Dartmouth underwent three 10-year phases, each with its own agenda, and each fizzled out after a decade of work owing to failures in applying its approaches to more realistic and complex scenarios. Phase One, codenamed Cognitive Simulation, kicked off with Newell and Simon's computer program "Logic Theorist", aimed at proving mathematical theorems, which successfully solved 38 of 52 theorems but suffered from a lack of scalability. Likewise, in 1959, their follow-up program, the General Problem Solver, met the same fate. During this era the field, immensely hyped, was already showing signs of sheer over-confidence and ignorance of the complexity involved in solving more challenging problems. Early Machine Translation (MT) – the so-called Fully Automatic High-Quality Translation (FAHQT) – aimed at understanding human languages, met immediate success, but thereafter succumbed to a range of unanswered problems, which, despite increased effort, continued to grow instead of shrinking. Soon after, the National Research Council, having poured $20 million into FAHQT research, killed its funding after a report by the Automatic Language Processing Advisory Committee stated that the computing systems for MT were too slow, too expensive, and generally didn't work well. Further efforts highlighted a key problem of automated systems: their inability to appreciate word meanings. Phase Two research focused on feeding computers with knowledge until they understood enough about the world to begin disambiguating language. A symbolic approach to AI was adopted, with Minsky and Papert developing methods for handling knowledge in isolated domains known as "micro-worlds", which provided insights that helped general programs scale up to real-world thinking; the hope was that successes in micro-worlds could be pieced together into human-level general intelligence. While these systems worked well within their domains, they were woefully incapable of handling language outside them. This led Minsky to propose his version of "scripts" – sets of expected actions, players, and scenes to work with – which he called "frames". Frames were capable of capturing everyday knowledge and social events. Scripts and frames didn't work either, as the concept was based on learning from past actions in a social context, with no way to determine the relevance and applicability of an action from one situation to another. By the 1980s, the complexity of grasping what is relevant and ignoring what is not in real-time thinking had further clouded endeavors to give computers knowledge.
Today, Artificial Intelligence has come a long way, with major players such as Tesla, Amazon, Google, and Apple using Big Data to mitigate some of the limits to growth, such as model bias, data sparsity, and model saturation. However, when frequentist assumptions aren't valid, or data is sparse for a particular task, modern AI has little to say. And even to this day, the limits to growth in AI arise from the difficulty of language interpretation, which requires common-sense knowledge. AI researcher John Haugeland argued that merely understanding the words in a sentence does not amount to understanding the sentence itself. While humans are able to process and comprehend such information, computers still lack these capabilities; for instance, Google Translate frequently mistranslates sentences from one language to another. This situational holism is even trickier for modern AI, with its embrace of data and shallow learning mechanisms. The real limits to growth in modern AI become even more apparent when one realizes the difficulty machines face in inferring relevant information from data. For example, in a sentence such as "Tom saw a cat in the window. He wanted it", does "it" refer to the cat or the window? Modern AI fails on such grounds. Every year, millions of dollars are poured into research on AI, but the very core of the problem persists in spite of decades of research. Relevance is, after all, contextual, and what is relevant in one situation may not be relevant in another. Until AI systems are able to decipher such differences and understand human emotions, intents, and actions, the limits to growth will persist. Unless addressed, this problem, known as the "Frame Problem", will continue to stunt the growth of AI; without its resolution, a plethora of challenges will surface, ranging from ethical dilemmas to unpredictable situational behavior. The Frame Problem thus plagues modern efforts even more than older techniques; our success is, in a real sense, a beguiling yet inevitable illusion.
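
The "Tom saw a cat" example can be made concrete with a deliberately naive heuristic. The Python sketch below is a strawman of our own, not a real NLP algorithm: it resolves a pronoun to the nearest preceding noun and duly picks "window", illustrating why relevance cannot be recovered from surface text without world knowledge.

sentence = "Tom saw a cat in the window . He wanted it"
NOUNS = {"Tom", "cat", "window"}  # a toy lexicon standing in for a POS tagger

def nearest_preceding_noun(tokens: list[str], pronoun_index: int) -> str:
    # Walk backwards from the pronoun until something tagged as a noun appears.
    for tok in reversed(tokens[:pronoun_index]):
        if tok in NOUNS:
            return tok
    return "?"

tokens = sentence.split()
# Prints "window", though a human knows Tom wanted the cat; choosing
# correctly requires knowing what kinds of things people want.
print(nearest_preceding_noun(tokens, tokens.index("it")))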

Archetype 2: Shifting the Burden

Since its advent, Artificial Intelligence has posed three major challenges to the technology world: model bias, data sparsity, and model saturation. Over the years, symptoms of these problems have manifested themselves in various forms, leading to skewed or improper decisions. In the early stages, the absence of developments in Big Data and Machine Learning rendered an AI system only as good as the data it was fed. This sparsity left representative samples of data unaccounted for, producing erroneous decisions that ignored contextual relevance. Millions of dollars have been invested in trying to enhance the data-feeding mechanisms of AI systems, without appreciating the fact that the inherent problem of AI was its inability to feel, think, and infer like a human. While gargantuan leaps have since been made to mitigate some of these issues, back when machine learning was unheard of, most symptoms of this problem were attended to by feeding the system more data. It was evident that unless machines could teach themselves, no matter how much data was fed in, the problem of frequentist assumptions and skewed presumptions would always exist. Shifting the burden from this core problem to feeding in more data, without any advancement in machine learning, was a major handicap of the bygone era. Yet the problem persists even today, as exemplified by the classic "white guy problem". For instance, Google's photo app, which applies automatic labels to pictures in online photo albums, classified images of black people as gorillas. Another manifestation occurred during an AI-judged beauty contest, when the candidates awarded turned out to be mostly white, and again when advertising algorithms preferred to show high-paying job ads to male visitors. While luminaries such as Elon Musk and Nick Bostrom caution the world against the 'singularity' of AI – the point when machines become smarter than humans – and invest millions of dollars in trying to develop AI responsibly, they tend to overlook far more sinister symptoms of problems arising from sexism, racism, and other forms of discrimination being built into the algorithms. Another instance of this occurred when Nikon's camera software misinterpreted images of Asian people as blinking, and when Hewlett-Packard's web camera had difficulty recognizing people with dark complexions. Indeed, predictive programs are only as good as the data they are trained on, and that data has a long and complex history. Like most technologies, AI will reflect the values of its architects, and hence inclusivity matters – from who designs it to which ethical standpoints are included. Otherwise, there is a great risk of constructing AI with a myopic view of society, stemming from traditional biases and stereotypes. It is essential to address this problem to design fairer AI, but bringing about such change requires greater accountability from the minds behind it. The consequences already felt by less powerful strata of society need to be addressed: today the loudest voices deliberating the potential threats of superintelligence are affluent white men, but for those who already face marginalization, the threats are here.

Archetype 3: “Fixes that Backfire”

Artificial Intelligence has been continually evolving, and to ensure that AI systems adapt to everyday complexities, they need to be maintained and sense-checked. Firms spend massive chunks of funding on R&D, with a hawk-eyed focus on leveraging technology to gain strategic advantage over competitors. Yet every so often firms witness catastrophic failures when their plans go awry. In a field as unpredictable as Artificial Intelligence, such failures are commonplace. Take, for instance, Tesla's struggle to carve out a name for itself in the electric-automobile industry. According to Bloomberg, the company had almost caved in by 2008. After cutting 24% of Tesla's staff and investing his last $35 million in the venture, founder Elon Musk slowly saw the resurgence of his company. But not everyone is as fortunate or as foresighted as Elon Musk. Every year, hundreds of start-ups and companies fold on account of the high costs associated with maintaining AI systems. When the resources of a venture are strained, even investors start losing interest in funding it, and this kicks off a vicious cycle. Increased borrowing and debt accumulation become a tangible threat when the venture flounders because it cannot compete in the AI market, a market it had entered with high aspirations. Eventually, the borrowing comes back to bite.
There is, however, a more sinister connotation to the rapid growth of AI. Every day, hundreds of fixes are deployed across AI systems all over the world to increase their efficiency and productivity. The systems are being perfected and polished to levels of scarily human-like accuracy. And with all this, one cannot help but wonder what will happen the day a perfect AI system is developed: a system that, after repeated fixes, tweaks, and embellishments, could replace humans in computation, reasoning, and cognitive and emotional intelligence. Indeed, it is easy to see the ethical, economic, privacy, and legal ramifications should such a day come.

Archetype 4: “Tragedy of the Commons”

“Ruin is the destination toward which all men rush, each pursuing his own best interest in a society
that believes in the freedom of the commons” [Garrett Hardin, 1968].
Suppose several autonomous underwater vehicles (AUVs) are carrying out data-collection activities in a particular region. Each AUV needs to know the location of foreign objects in its proximity, as well as the position of the other AUVs, via its sonar; assume that the sonars all operate at around the same frequency. Sonar transmits acoustic pulses into the water column, which is the 'commons' in this scenario; the commons is "exhausted" by the addition of acoustic noise that interferes with sonar. To increase its efficiency of operation, an AUV is likely to increase its sonar pulse, since doing so yields more reliable information about its environment; but by doing so, it adversely affects the sonar data quality of every agent, itself included. To each AUV, the benefit of increased information collection outweighs its share of the drawback of an increased noise level borne by all AUVs, leading these rational agents to step up their sonar output, whether in pulse frequency or in the energy transmitted per pulse, to break through the noise. This can eventually lead to the "collapse" of the commons, when the noise is great enough to render the other AUVs' sonars useless. A minimal simulation of this dynamic is sketched below.
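
A toy Python version of that escalation follows; the power levels, noise constants, and saturation threshold are invented, and no real acoustics are modelled.

N_AUVS = 5
AMBIENT = 1.0      # baseline ocean noise
SATURATION = 20.0  # total acoustic energy beyond which receivers are swamped
powers = [1.0] * N_AUVS

def quality(i: int) -> float:
    # A vehicle's data quality rises with its own power, so boosting power
    # is individually rational -- until total noise swamps every receiver.
    total_noise = AMBIENT + sum(powers)
    if total_noise > SATURATION:
        return 0.0  # the commons has collapsed
    return powers[i] / total_noise

for step in range(6):
    print(f"step {step}: AUV 0 data quality = {quality(0):.3f}")
    powers = [p * 1.5 for p in powers]  # every agent escalates its output

Run it and data quality creeps up for a few steps, then drops to zero for everyone at once.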

It can be reasoned that distributed AI systems (DAIs) comprising rational agents will encounter the tragedy of the commons just as a group of humans will, regardless of whether the agents are cooperative, antagonistic, or neutral. Examples of shared resources at risk include shared processing capacity, physical resources used for construction, power obtained from a slowly recharging fuel point, information from an external sensor, memory, disk space, physical space and materials, and communication-channel bandwidth. AI researchers know that the concept of the "invisible hand" does not necessarily hold for Artificial Intelligence systems, as systems acting autonomously may not strive for the common good. Indeed, systems with their own motives will select actions that further their own goals, no matter how detrimental those actions may be to the mission. To avoid this, there needs to be a compromise between goal achievement and the health of the resource. Unless the designers of AI systems go the extra mile to preserve the commons, the rational agents will end up depleting the very commons that are critical to the success of their missions.

[Fig. 1: Examples of resources which may be shared by DAI agents.]

Archetype 5: “Accidental Adversaries”

The unpredictability of AI systems makes it difficult to fathom the range of conflicts that can arise when superintelligent systems function autonomously. A classic example was described under Archetype 4, where AUVs pursuing a common goal came into conflict with each other by jamming the water column with sonar pulses, leading to a collapse of the system. Further, imagine a situation in which two AI bots are tasked with maintaining a system. One bot is responsible for ensuring that there are no redundancies in the system, and primarily removes obsolete data and duplicate entries. The other bot is responsible for ensuring that critical system data is backed up and retained. If the two bots operate in silos, without clear parameters for what constitutes 'critical' data, they end up in accidental conflict, each trying to perform at its best: while one bot tries to delete unnecessary data, the other tries to back that same data up, deeming it critical. In such a situation, the two bots essentially start competing with each other, and since any system has limited resources such as memory, processing power, and threads, this also strains the infrastructure. Without clear definitions of what is and is not critical, the two bots end up as accidental adversaries.
With the rise of superintelligent machines, it can be expected that while each AI will try to do its best, without judgement and proper design it may simply work towards its own goal, paying no heed to the operations carried out by other AIs. The sketch below illustrates this tug-of-war.
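
A minimal Python sketch of the two bots undoing each other follows; the file names and the bots' rules are our own invention.

store = {"report_v1", "report_v2", "daily_log"}

def cleanup_bot(files: set[str]) -> set[str]:
    # Treats older versions as redundant and removes them.
    return {f for f in files if not f.endswith("_v1")}

def backup_bot(files: set[str]) -> set[str]:
    # Treats every report version as critical and restores missing ones.
    return files | {"report_v1", "report_v2"}

for cycle in range(3):
    store = cleanup_bot(store)
    store = backup_bot(store)
    # report_v1 is deleted and restored every cycle, burning resources
    # on both sides without either bot ever "winning".
    print(f"cycle {cycle}: {sorted(store)}")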

Archetype 6: “Success to the Successful”

Artificial Intelligence, while being the brainchild of humans, operates by learning from recurring patterns and trends in data. While the issue of model bias has been mitigated to a large extent by recent breakthroughs in technology and data-collection mechanisms, it remains a nagging problem for AI. AI systems decide according to the weighting of their data: stronger and more repetitive data affects an AI's learning mechanisms more than sparse and weak data does. As a result, AI systems often suggest outcomes and decisions that draw upon the more pronounced and recurrent trends in their data while neglecting the weaker ones. This success-to-the-successful dynamic results in the over-representation of one set of data and the neglect of weaker sets, even if both sets were equally sound at the beginning of the learning curve. In recent times there have been numerous incidents of artificial intelligence systems succumbing to model bias. For instance, the company Northpointe built an AI system to predict the chances of an alleged offender committing a crime again; the algorithm was criticised for being racially biased, as it predicted that black offenders were more likely to re-offend. Further, when an AI system was used to select the winners of a beauty contest, the failure to provide it with a diverse training set resulted in nearly all the selected winners being white. As more skewed data is fed into AI systems, the resulting outcomes suffer from a lack of balanced representation. A toy reinforcement loop of this kind is sketched below.
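
In this purely illustrative Python sketch, the counts and the greedy selection rule are assumptions of ours: two equally good options start almost level, but the one with a tiny early lead in data collects all future data.

counts = {"A": 2, "B": 1}  # option A starts with a tiny head start

for _ in range(1000):
    # Greedy policy: always pick the option with the most accumulated data;
    # every selection then generates yet more data for that option.
    chosen = max(counts, key=counts.get)
    counts[chosen] += 1

print(counts)  # {'A': 1002, 'B': 1}: the early lead compounds into a monopoly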

Archetype 7: “Balancing Process with Delay”

Today's AI systems are nowhere near perfect or ideal, and an aggressive implementation of a partly incomplete system can affect society very adversely. For example, an AI-based computer program built to identify criminals who would go on to commit more crimes produced more false positives for African-American inmates [3]. A subsequent study noted that the chances of an African-American inmate being pegged as a future risk were 77% higher than for Caucasian counterparts. This sort of bias has been informally tagged as Machine Bias, and it is evident that much deliberation and forethought are needed.
AI algorithms and machines have to be perfected before they can be integrated into all social and business processes. By postponing the perfection of these systems to a later date, we risk many such false positives (as with the future-crime-prediction AI) that can adversely impact society. As the archetype suggests, making the system more responsive is the need of the hour. This is not to suggest that AI investment and implementation should take a back seat until the systems have been perfected; rather, the archetype suggests we should invest in improving AI system capabilities quickly.

___________________________________________________________________________________

References:
1. https://thebestschools.org/magazine/limits-of-modern-ai/
2. https://dlc.dlib.indiana.edu/dlc/bitstream/handle/10535/1601/TR.pdf?sequence=1
3. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

As per a 2016 survey by The Economist, 44% of executives believe a delay in AI implementation will make their businesses vulnerable to startups and other challengers.

Archetype 8: “Growth and Underinvestment”

As per research by Accenture Research and Frontier Economics, overall productivity across the board is estimated to increase by over 40% by 2035. This clearly shows the positive benefits of investing in AI development and implementation. This archetype connects Archetypes 1 and 2, 'Limits to Growth' and 'Shifting the Burden': on one hand there are obvious limits to the growth of immediate AI development, yet we can overcome these limits by committing more resources in the form of research effort and research investment.

The same 2016 survey by The Economist gauged the presence of AI across a few industries; even the most well-integrated industries are currently in an exploratory phase, and most industries and countries are still waiting for the major game-changer in AI implementation.

With greater investment and implementation, the demand for better and more well-integrated AI systems increases. This, in turn, will improve adoption and create scope for greater AI integration into society.

A study comparing the error rates of AI with those of humans on certain tasks found the AI rates coming down steadily over recent years, ultimately outperforming humans in the study.

Archetype 9: “Escalation”

Most current AI research is being led by startups and research labs; many big companies have also invested in the AI research space through acquisitions or separate subsidiary research offshoots. This puts the startups and the big companies on either side of the AI spectrum.
The big companies' research efforts take place in isolated silos with very little shared knowledge. This leads to frequent escalations and confrontations between industry titans on the subject of AI. Recently, Elon Musk and Mark Zuckerberg engaged in a public debate on the impact of AI on the future of humanity: Musk calls for greater proactive regulation of AI and suggests that uncontrolled development in the field could be detrimental to mankind, while Zuckerberg, on the other hand, believes that proper AI development can only benefit human industry and society. These escalations divert attention from the core task of developing AI technology. Given that change is inevitable, the industry should focus its efforts on making AI safer, better, and more useful.

If the companies and brilliant minds working on AI shared their research in a common pool of knowledge, the pace of AI development would grow exponentially.

Archetype 10: “Eroding Goals”

In the race to find a competitive advantage, many companies are making hasty implementations of AI solutions. Because of underinvestment in time, funding, and research, the AI solutions deployed are neither robust nor complete.

As per a study by the Machine Intelligence Research Institute, it could take at least 25-30 years before AI solutions and agents can realistically outperform humans. Companies must therefore be willing to invest and continue research until those high standards of AI development are attained. If organizations dilute their standards by focusing on shorter-term solutions and making compromises, there may be many adverse effects on society, as discussed under Archetype 7 (the many false positives in identifying future criminals).

The bottom line of this archetype is that organizations should stay invested in developing AI for the future and not 'erode their goals' by compromising on standards.

5. Synthesize all the Laws under questions 2 and 3, compare and contrast them, and judge which
Law has the best explanatory and predicting power of the phenomenon under investigation, and
why? Illustrate by market examples. [10 marks].
The eleven laws discussed in this document broadly cover the behavioral and technological aspects of AI. As we humans take a sneak peek into the future, we can picture ourselves preparing for times that may or may not be under our complete control. What we perceive as something making our lives simpler today was part of a "eureka!" moment for us yesterday. Getting used to it and conforming ourselves to its standards slowly and gradually makes us realize how grossly dependent we become on the boon at hand. Relying ever more on the data at hand, and replacing activities of which we humans are an intrinsic part with AI bots, systems, and processes, could leave humans close to depression, with no sense of achievement anywhere. Such behavioral aspects are discussed under Laws 1 through 4.

The remaining laws speak to a more technological interface with Artificial Intelligence. We are being pushed into a world of technology explosion, so much so that the AI and systems intelligence we design could end in a technology explosion that slips inevitably out of the hands of the designers and framers themselves. The speed with which we see this technology exploding is phenomenal. The human interface, however, needs to remain sacrosanct and most relevant to all these machine interfaces. Quite often we embrace aspects that are convenient and advantageous on one hand while they eat away at the very fundamentals of human well-being on the other. A more responsive approach to AI, and a corresponding sense of responsibility for it, are important and imperative for us. To stay relevant as the very creators of the technology, humans ought to differentiate themselves strongly, in more ways than many. What matters is an inquisitive mind, one that does not merely rely on the black box housing AI's models and hardware but is in a position to progressively question and redesign any fallacies. Among all the laws, the one with the best explanatory and predictive power for the phenomenon is: "Today's problems come from yesterday's solutions".

Although calling AI a "problem" would be considered a misnomer in today's age, the pace at which this technology is making inroads into our lives is alarming. Though the benefits are easy to reckon and are tangible, what actually escapes our eyes are the bigger details. Let us take a recent example to dwell on this fact. The launch of the iPhone X was one of the most awaited events of September 2017, and rightfully so; the makers did not disappoint. With face-detection features further enhanced, the designers billed this technological breakthrough as something that would strengthen the security of the device. Animojis were the newest of the lot, able to react to the user's facial gestures and converse over chats, all at the expense of the 30,000+ facial-detection points that the device reads, with artificial intelligence at the crux of it all to interpret the data. As we make ourselves so very vulnerable and release our private credentials into the hands of such databases, where do we draw the line of reliability? Equifax, a credit-reporting agency, announced last week that malicious hackers had obtained the personal information of 143 million people from its systems. If a hacker truly intends to prey upon our personal data, we could be toast in probably less than an hour. Scientists have harnessed the power of artificial intelligence (AI) to come up with a program that, combined with existing tools, figured out more than a quarter of the passwords from a set of more than 43 million LinkedIn profiles. All this and more should be alarming, considering how dependent we humans are on technology and its enhancements. Our use should be judicious and must pave the rightful way for these truly revolutionary arms of technology to remain a boon, and not become tomorrow's problems.

6. Synthesize all the Archetypes under questions 4, compare and contrast them, and judge which
Archetype has the best explanatory and predicting power of the phenomenon under investigation,
and why? Illustrate by market examples. [10 marks].

Corporate Strategies to ride the AI wave

The summary below gives, for each archetype: the AI-research innovations and game-changers that can transform AI; the organizational and cultural changes needed to make AI sustainable and competitively strong; and a market example illustrating the archetype.

1. Limits to Growth
   AI research: Invest not in growth, but in expanding the scope of growth.
   Organizational change: Focus on removing the limits to growth.
   Market example: The ascent of AI research from humble beginnings in computation and code-breaking.

2. Shifting the Burden
   AI research: Focus on long-term research goals.
   Organizational change: Avoid short-term solutions and quick fixes.
   Market example: Mistakes illustrated by the racial insensitivities shown by Nikon's camera and Google Photos AI.

3. Fixes that Backfire
   AI research: Address long-term solutions.
   Organizational change: Avoid symptomatic fixes that focus on the short term.
   Market example: Elon Musk's and Tesla's initial struggle in the electric-automobile space.

4. Tragedy of the Commons
   AI research: Innovate technology to replenish the common resource; innovations to use new alternative resources.
   Organizational change: Avoid overusing resources unnecessarily; work on replenishing common resources.
   Market example: Use of sonar by autonomous underwater vehicles.

5. Accidental Adversaries
   AI research: Innovations to help AIs understand the value of what they do and how not to compete accidentally.
   Organizational change: Be cognizant of your partner's needs.
   Market example: Two bots working in isolation, one reducing redundancy and the other making backups of the same data simultaneously.

6. Success to the Successful
   AI research: More investment in successful AI projects.
   Organizational change: Fund crucial projects for greater results and impact.
   Market example: False positives made by a system designed to identify future criminals.

7. Balancing Process with Delay
   AI research: Innovations to reduce mistakes and false positives.
   Organizational change: Invest in bettering AI algorithms; implement with caution and be cognizant of possible system errors.
   Market example: False positives made by a system designed to identify future criminals.

8. Growth and Underinvestment
   AI research: Invest in long-term growth.
   Organizational change: Be aware that any practical AI solution will take time to develop and perfect.
   Market example: A survey by The Economist shows implementations currently exploratory in nature.

9. Escalation
   AI research: Share and maintain a common knowledge pool; integrate research efforts to gain pace exponentially.
   Organizational change: Avoid public competition to make undue progress in AI; realize that it is better to pool efforts; take aggressively 'nice' or 'peaceful' steps to reduce tensions.
   Market example: Public debates between Elon Musk and Mark Zuckerberg on the impact of AI.

10. Eroding Goals
    AI research: Invest in long-term solutions.
    Organizational change: Avoid compromising on research goals and standards for quick results.
    Market example: Machine Intelligence Research Institute report showing AI will take at least 25 years to replace humans.

