Robots behaving badly: legal responsibility in the age of artificial intelligence
Can a bot break the law? Who is liable if a self-driving car crashes? Can a robot infringe copyright? Can a software developer go to jail for the actions of a machine learning algorithm? This talk will try to answer these and other questions regarding artificial intelligence and legal responsibility.
32 minutes
Description
6 years ago
Andres Guadamuz

In 2014, the conceptual art collective !Mediengruppe Bitnik programmed an autonomous online agent (a bot) to purchase random items from the dark web with a weekly budget of $100 in bitcoin. During its ongoing run, the Random Darknet Shopper has purchased jeans, generic Viagra, cigarettes, collector’s coins, and instructions on how to hack a Coke vending machine. But perhaps the most interesting item arrived in 2015, when the artists received 10 yellow ecstasy pills inside a DVD case. The police in the Swiss town of St Gallen confiscated the machine, but later “released it” after prosecutors determined that no crime had been committed, as the possession was for the purpose of an art project.

With the rise of more sophisticated and independent artificial intelligence, situations like the one above will take place more often. Self-driving cars, smart contracts, IoT devices, data-mining bots, machine learning algorithms: technology will be given autonomy to make decisions in various circumstances, and sometimes these decisions may prove to be illegal or illicit. What happens when these autonomous agents break the law? Who is liable? Is there even anyone liable? At the moment, the law has not given much thought to infringement committed by AI, mostly because, until now, most autonomous agents have not been very sophisticated. But with the growing presence of intelligent bots in all areas of life, we will need to explore new solutions, or perhaps revisit older regimes. This presentation will explore potential legal pitfalls regarding AI liability and will look at various legal approaches we could use to allocate liability.