
Kassicieh 1

Mark Kassicieh
Government, Period 1
Mr. Rogers
26 October 2016
Mock Congress Research Paper
Over the last half-century there have been many allusions to autonomous vehicles in our culture, yet only recently has this bizarre concept become a tangible reality. While most people think that self-driving vehicles will exponentially increase the safety of driving, many are unaware of the numerous drawbacks to this futuristic form of transportation. Until all automobile manufacturers selling vehicles in the United States perform extensive safety testing and pass certain benchmarks, no self-driving technology should be released to the public. The Self-Driving Safety Act of 2017 should be passed into law due to the poor reliability of self-driving technology, the increased rate of accidents involving self-driving vehicles near human-operated vehicles, and the lack of a code of conduct for self-driving vehicles to act on.
Self-driving technology is new and unproven. In order for the vehicles to operate on their
own, they have a multitude of sensors taking constant readings and interpreting them through a
powerful computer in the car. Few people realize that these sensors are extremely sensitive and can go offline due to the smallest changes in weather. Just as it can for a human, an overabundance of white flurries can inhibit a driverless car's visibility. Autonomous cars use various types of sensors to read roads, including radar, cameras, and lidar, which uses light to map its surroundings. According to Fortune, snow can cover up radar sensors and cameras and render lidar technology useless (Addady). Imagine all the regions of the United States that face inclement weather. The amount of destruction that could be
caused by faulty driving systems because of weather would be astounding. The systems that run this self-driving technology can also become overloaded very quickly. When that happens, "the computer hands over control of the vehicle to a human being. In that instant, the human must quickly rouse herself from whatever else she might have been doing while the computer handled the car and focus her attention on the road. As scientists now studying this moment have come to realize, the hand-off is laden with risks" (Bosker). Even the slightest confusion will cause the car to default back to human operation. Since it takes the human driver a few seconds to regain focus and control, this hand-off creates a very unsafe situation for the occupants of the self-driving vehicle and for everything around it. One road test illustrates the danger: "Now the robot was solely dependent on its onboard sensors. It soldiered on. But as it approached the sharp off-ramp toward Treasure Island, Levandowski's autonomous car slowly smooshed into a concrete barrier. The car ground to a halt, unable to move" (Tillemann 267). In this San Francisco experiment, a completely autonomous vehicle left to function on its sensor readings alone failed miserably and nearly plummeted into the San Francisco Bay. Until tried-and-true technology is implemented in these cars, our safety is at stake, especially since an uncontrollable moving vehicle is a large hazard to everything around it.
Many technology and media outlets paint self-driving vehicles as safe modes of transportation that will allow our roads to have virtually zero accidents. What people fail to understand is that every single vehicle on the road would have to be autonomous for this claim to be true. Until that point, the mix of cars on the road will be heavily populated with
human-operated vehicles. Google has pioneered a great deal of technology for the autonomous vehicle, but the current technology is far from perfect. According to one analysis, "there are two main problems with this technology. The first is from the recognition of the data from the camera, where the conversion of the visual data input to numbers output has its own issues. For instance, the camera can recognize objects from a far off distance, but it has trouble differentiating between two similar objects" (Pinto). Since self-driving vehicles cannot predict what moves the human-operated vehicles around them will make, they are forced to rely on their sensors to make decisions. These sensors are a very unreliable source of information and clearly the weak link of self-driving vehicles.
Human drivers also face a safety issue when driving next to self-driving vehicles due to those vehicles' unpredictable movements. "When he pays attention, a human driver has tremendous capacity for reacting responsibly to circumstances for which he has not been explicitly trained. With the human out of the loop, the autonomous car is far less capable of handling unforeseen circumstances. By definition, an unstructured environment such as a real-world road network includes plenty of unforeseen conditions. This lack of predictive capability demands new verification techniques to allow us to justify trusting self-driving cars in our everyday lives" (Mladenovic and McPherson). Human drivers are used to driving in real-world situations that could have unpredictable outcomes. The robotic systems in autonomous cars have very little experience dealing with unforeseen changes in road conditions, making them very unsafe for real-world use. Current mobile robots do relatively little that is recognizable as intelligent thinking because "[the robot's] perception does not meet the necessary standards and much of the intelligence is tied up in task specific behavior and has more to do with particular devices and missions than with the mobile robots in general. Much of the challenge of the mobile robots requires intelligence at subconscious level. The motion of mobile robots in an unknown environment where there are stationary unknown obstacles requires the existence of algorithms that are able to solve the path and motion planning problem of these robots so that collisions are avoided" (Hachour). If autonomous vehicles had programming to deal with foreign environments with changing conditions, they could possibly be trusted. As this source suggests, though, no such algorithm yet exists, given the near-infinite number of situations that would have to be accounted for. This unpredictability is furthered by the lack of structured rules for these cars to follow.
Our roads have many rules and laws to abide by for safety reasons. If there were no defined rules, there would be absolute havoc on our roads. "Automotive executives and lawmakers sniped at each other over whether universal standards were necessary for self-driving cars, with the private sector saying that standards would slow progress and legislators replying that they'd heard the same objections over updated seatbelt standards in 1998" (Thielman). As of now, self-driving vehicles have no defined rules to operate by in real-life situations, since legislators cannot decide whether it is necessary to implement a code of conduct. "Federal regulators announced their first safety checklist ever for semiautonomous and driverless cars this week. We broke down the 15 points: Data Sharing, Privacy, System Safety, Digital Security, Human-Machine Interface, Crashworthiness, Consumer Education, Certification, Post-Crash Behavior, Laws and Practices, Ethical Considerations, Operational Design, Detection and Response, Fallback, and Validation" (Kang). The government is currently putting together a safety benchmark assessment that covers what to do in different situations, but until it is released and enforced, all self-driving cars currently on the road are making decisions on the fly with little guidance. What if a child runs into the street suddenly? Does the car plow into the child or drive off an overpass to avoid him? With safety being such a large concern in transportation, there are many disputes over this topic. Blame is also a large and controversial issue with autonomous vehicles. Our current legal system assumes that the person in the driver's seat is in control of the vehicle, which is not necessarily the case with autonomous vehicles. "If drivers' roles are reduced with the creation of a limited-driver or no-driver input system, the criminal liability regime will have to significantly change in order to accommodate the new technology" (Douma and Palodichuk). Someone has to be at fault when an accident occurs, and as of now there is no structured way to determine blame in a case involving an autonomous vehicle. As accidents with these vehicles start piling up, someone needs to be held accountable for the vehicles' actions.
Many would say that self-driving cars are already safe enough and will reduce traffic accidents to almost none. What these people fail to realize is that there is enormous room for improvement and a definite need for enforced safety measures. "[Driverless cars have trouble with] making sense out of confusing and ambiguous situations. Think of parking lots, frontage roads, and toll plazas. Here, the lines disappear and judgment takes over. Think of a police officer signaling traffic. Only humans can figure out what to do when the rules are unclear, provisional, or absent" (Brauer). There are still many situations where human intelligence trumps computer systems. This is especially important since impromptu changes to road structures happen all the time, particularly in large cities. As previously stated, self-driving vehicles could drastically improve road safety if every car on the road were autonomous and communicating on a single network. For many reasons, that day is very far off, especially due to economics: not everyone can afford to buy a new car with this technology, and the trickle-down of these cars would take a very long time. Until then, self-driving cars sharing the roads with human drivers is a recipe for disaster. "They obey the law all the time, as in, without exception. This may sound like the right way to program a robot to drive a car, but good luck trying to merge onto a chaotic, jam-packed highway with traffic flying along well above the speed limit. It tends not to work out well" (Naughton). There is a multitude of drivers
that routinely break traffic laws every day. In most places it is commonplace to drive slightly over the speed limit. Since driverless cars will be forced to obey every law to the letter, there will be many confrontations and possible accidents on the roads. "In dynamic environments, safety needs to be guaranteed without restricting robot system autonomy and flexibility. Therefore, not many principles from industrial robots safety standards are applicable. Robots deployed in dynamic environments rely heavily on their sensory system, which provides them with perception of the environment around them" (Vasic and Billard). Our roads are always changing in condition, whether due to construction, weather, or accidents. Because there is never a definite scenario for an autonomous vehicle to anticipate, it must rely purely on its unreliable sensors to navigate dynamic areas. In a world where other human drivers are unpredictable and reckless, this could end in catastrophe.
Considering the poor reliability of self-driving technology, the increased rate of accidents involving self-driving vehicles near human-operated vehicles, and the lack of a code of conduct for self-driving vehicles to act on, the Self-Driving Safety Act of 2017 should be passed into law. This bill isn't being pushed to make anyone money or to make others' lives harder; it serves the bigger picture. Every day, humans perform one of the statistically most dangerous tasks they can: getting behind the wheel of a heavy hunk of metal and maneuvering their way to a destination. If we entrust this job to computers that have neither the hardware nor the safety track record to perform it reliably, we will have mass destruction on our hands. For this reason, a set safety benchmark must be reached by any auto manufacturer attempting to sell a car that can operate autonomously in the United States.


Works Cited
Addady, Michael. "Self-Driving Cars Hit a Roadblock in the Snow." Fortune. Fortune, 09 Feb.
2016. Web. 06 Oct. 2016.
Bosker, Bianca. "No One Understands The Scariest, Most Dangerous Part Of A Self-Driving
Car: Us." The Huffington Post. TheHuffingtonPost.com, 16 Sept. 2013. Web. 08 Sept.
2016.
Brauer, Karl. "Top 10 Autonomous Car Facts: When Will Self-Driving Cars Arrive, What's
Holding Them Up?" Forbes. Forbes Magazine, 02 May 2016. Web. 06 Oct. 2016.
Douma, Frank, and Sarah Aue Palodichuk. "Criminal Liability Issues Created by Autonomous
Vehicles." Santa Clara Law Review 52 (2012): 1157.
Hachour, O. "Path Planning of Autonomous Mobile Robot." International Journal of Systems
Applications, Engineering & Development 2.4 (2008): 178-190.
Kang, Cecilia. "The 15-Point Federal Checklist for Self-Driving Cars." The New York Times.
The New York Times, 20 Sept. 2016. Web. 24 Oct. 2016.
Mladenovic, Milos N., and Tristram McPherson. "Engineering Social Justice into Traffic Control
for Self-Driving Vehicles?" Science and Engineering Ethics (2015): 1-19.
Naughton, Keith. "Humans Are Slamming Into Driverless Cars and Exposing a Key Flaw."
Bloomberg.com. Bloomberg, 17 Dec. 2015. Web. 06 Oct. 2016.
Pinto, Cyrus. "How Autonomous Vehicle Policy in California and Nevada Addresses
Technological and Non-Technological Liabilities." Intersect: The Stanford Journal of
Science, Technology and Society 5 (2012).
Sheller, Mimi, and John Urry. "The City and the Car." International Journal of Urban and
Regional Research 24.4 (2000): 737-757.
Thielman, Sam. "'Someone Is Going to Die': Experts Warn Lawmakers over Self-Driving Cars."
The Guardian. Guardian News and Media, 15 Mar. 2016. Web. 7 Sept. 2016.
Tillemann, Levi. The Great Race: The Global Quest for the Car of the Future. N.p.: n.p., n.d.
Print.
Vasic, Milos, and Aude Billard. "Safety Issues in Human-Robot Interactions." Robotics and
Automation (ICRA), 2013 IEEE International Conference on. IEEE, 2013.
