WMC_e7 第七輯

You Can Say “Talk to Me”
你可以說「告訴我」

2021-09-07 21:00
Lauren Lee McCarthy (Moderator: Winnie Soon)

You can say turn off the lights. You can say wake me up at 7am. You can say talk to me. I am captivated by the ways we are taught to interact with algorithms, and how this shapes the way we interact with each other. Central to my work is a critique of the simultaneous technological and social systems we’re building around ourselves. What are the rules, and what happens when we introduce glitches? I invite participants: to remote control my dates, to be followed, to welcome me in as their human smart home, to attend a party hosted by artificial intelligence. In these interactions, there is a reciprocal risk-taking and vulnerability, as performer and audience are both challenged to relinquish control, both implicated. We must formulate our own opinions about the systems that govern our lives. We begin to notice their effects play out on our identity, relationships, and society. Each work feels like an attempt to hack my way out of myself and into closeness with others. I am embodying machines, trying to understand that distance between the algorithm and myself, the distance between others and me. There’s humor in the breakdown, and also moments of clarity. Who builds these artificial systems, and what values do they embody? Who is prioritized and who is targeted as race, gender, disability, and class are programmatically encoded? Where are the boundaries around our intimate spaces? In the midst of always-on networked interfaces, what does it mean to be truly present?

你可以說關燈。你可以說早上 7 點叫醒我。你可以說和我談談。我著迷於我們被教導與演算法互動的方式,以及這些方式如何塑造我們彼此之間的相處。我工作的核心,是對我們正圍繞自身同時構建的技術與社會系統作出批判。有哪些規則在運作?當(系統)故障被引入時會發生甚麼?我邀請參與者去遙距控制我的約會、被追蹤、把我迎進家中成為他們的人類智能家居、參加由人工智能主辦的派對。在這些互動中,存在著相互的冒險和脆弱,表演者和觀眾同樣面臨放棄控制的挑戰,同樣牽涉其中。我們必須對支配我們生活的系統建構自己的看法,並開始注意到它們對我們的身份、關係和社會的影響。每件作品都像是一次嘗試,試圖從自身突圍而出、走近他人。我化身為機器,試圖理解演算法與我之間的距離,以及他人與我之間的距離。在故障崩壞之中有幽默,也有事物漸形清晰的片刻。誰建造了這些人工系統?它們體現了甚麼價值?當種族、性別、殘疾和階級被寫入程式編碼時,誰被優先考慮,誰又成為目標?我們私密空間的界限在哪裡?在永遠在線的網絡介面之中,甚麼才算真正在場?

|About the Speaker  關於講者

* Image Credit: Cam McLeod

Lauren Lee McCarthy is an artist examining social relationships in the midst of surveillance, automation, and algorithmic living. She is a 2020 Sundance New Frontier Story Lab Fellow, 2020 Eyebeam Rapid Response Fellow, 2019 Creative Capital Grantee, and has been a resident at Eyebeam, ZERO1, CMU STUDIO for Creative Inquiry, Autodesk, NYU ITP, and Ars Electronica. Her work SOMEONE was awarded the Ars Electronica Golden Nica and the Japan Media Arts Social Impact Award, and her work LAUREN was awarded the IDFA DocLab Award for Immersive Non-Fiction. Lauren’s work has been exhibited internationally, at places such as the Barbican Centre, Fotomuseum Winterthur, Haus der elektronischen Künste, SIGGRAPH, Onassis Cultural Center, IDFA DocLab, Seoul Museum of Art, and the Japan Media Arts Festival. She is the creator of p5.js, an open source programming language for learning creative expression through code online. She helps direct the Processing Foundation, a non-profit whose mission is to promote software literacy within the visual arts, and visual literacy within technology-related fields—and to make these fields accessible to diverse communities. Lauren is an Associate Professor at UCLA Design Media Arts.

Lauren Lee McCarthy 是一位藝術家,關注在監控、自動化和演算法生活中的社會關係。她是 2020 年辛丹斯新前沿故事實驗室研究員、2020 年 Eyebeam 快速回應研究員、2019 年創意資本受助人,並曾在 Eyebeam、ZERO1、CMU 創意探究工作室(STUDIO for Creative Inquiry)、Autodesk、紐約大學 ITP 和 Ars Electronica 駐留。她的作品《SOMEONE》獲得 Ars Electronica 數碼藝術金獎及日本媒體藝術社會影響獎,作品《LAUREN》則獲得 IDFA DocLab 沉浸式非虛構作品獎。McCarthy 的作品在世界各地廣泛展出,包括倫敦巴比肯中心、溫特圖爾攝影博物館、巴塞爾電子藝術之家(HeK)、SIGGRAPH、歐納西斯文化中心、IDFA DocLab、首爾市立美術館和日本媒體藝術節。她是 p5.js 的創建者;p5.js 是一種開源編程語言,讓用家在網上透過程式碼學習創意表達。她協助領導非牟利組織 Processing Foundation,其使命是促進視覺藝術領域的軟件素養和科技相關領域的視覺素養,並使這些領域為多元社群所用。McCarthy 是加州大學洛杉磯分校設計媒體藝術系的副教授。


Governing through Immunity: Techniques and Politics of COVID-19 Management
透過免疫進行管治:COVID-19 管理的技術和政治

2021-09-13 20:30
Btihaj Ajana (Moderator: Daisy DS Tam)

Since the outbreak of Covid-19 was officially declared a global pandemic in 2020, many countries around the world have rushed to implement myriad initiatives and technological solutions to manage and contain the spread of the coronavirus. From tracking apps and facial recognition systems to electronic bracelets and vaccine passports, Covid-19 has certainly intensified the uptake and development of various digital technologies and surveillance techniques.

In this talk, Ajana examines the political rationalities underpinning the governance of Covid-19 as well as some of the technological dispositifs that have been put to work to manage the pandemic. Building on the work of Roberto Esposito as well as literature from the philosophy of science, Ajana argues that much of the technologically mediated governance of Covid-19 is taking place through the prism of “immunity”. Here, the concept of immunity is considered not only as a medical concept or a biological reality but also as a biopolitical rationality that structures the boundaries between self and other, between the inside and the outside. Following this, Ajana considers some of the ethical issues pertaining to the “immunitarian politics” of Covid-19.

自 2020 年新型冠狀病毒(Covid-19)被正式宣佈為全球大流行之後,世界各地許多國家都急於推行各種措施和科技方案,來管理和遏制病毒的傳播。從追蹤應用程式、臉部識別系統、電子手環到疫苗護照,Covid-19 無疑加強了各種數碼科技和監控技術的採用和發展。

是次的主題演講,Btihaj Ajana 將闡述她對防疫工作背後的政治理據以及控制疫情的技術配置的研究。在 Roberto Esposito 的著作以及有關科學哲學文獻的基礎上,Ajana 指出大部分以科技為媒介的新型冠狀病毒管治,皆透過「免疫」這面棱鏡去理解和進行。在這裡,免疫的概念不僅被視為醫學概念或生物學的現實,亦被視為一種生命政治理性,建構出自我與他者、內部與外部的界限。隨後,講者將探討 Covid-19 的「免疫政治」所涉及的一些倫理議題。

|About the Speaker  關於講者

Btihaj Ajana is Reader in Media and Digital Culture at the Department of Digital Humanities at King’s College London. Her interdisciplinary research focuses on the ethical, political and ontological aspects of digital developments and their intersection with everyday cultures. She is the author of Governing through Biometrics: The Biopolitics of Identity (2013) and editor of Self-Tracking: Empirical and Philosophical Investigations (2018) and Metric Culture: Ontologies of Self-Tracking Practices (2018). Ajana is also a filmmaker and uses film as a way of exploring social issues while bringing scholarly ideas to wider audiences. Her most recent films include Quantified Life (2017); Surveillance Culture (2017); and Fem’s Way (2020).

Btihaj Ajana 是倫敦國王學院數碼人文系媒體和數碼文化學科的高級講師(Reader)。她的跨學科研究側重於數碼發展的倫理、政治和本體論面向,及其與日常文化的交集。她是 Governing through Biometrics: The Biopolitics of Identity(中文暫譯:生物辨識的管治:身份的生物政治)(2013)的作者,以及 Self-Tracking: Empirical and Philosophical Investigations(中文暫譯:自我追踪:實證和哲學研究)(2018)和 Metric Culture: Ontologies of Self-Tracking Practices(中文暫譯:量度文化:自我跟踪實踐下的本體)(2018)的編輯。Ajana 亦是一名電影製作人,她以電影作為探索社會議題的方式,同時將學術思想帶給更廣泛的觀眾。她最近的電影作品包括《量化生活》*(2017)、《監控文化》*(2017)和《女慾的方式》*(2020)。

*Chinese translation of film titles is temporary. 影片之中文名稱為暫譯。


The Incommensurability Between Human and Algorithmic Thought
人類思考與演算邏輯之間的不相稱

2021-09-15 20:30
Beatrice Fazi (Moderator: Damien Charrieras)

In this talk, Beatrice Fazi will present her recent research in the philosophy of artificial intelligence and consider contemporary AI research in deep learning. In the past decade, deep learning has been remarkably successful. Alongside this success, however, it has been commented that deep-learning methods operate in ways that are often opaque and illegible. The talk will address the black-box aspect of deep learning in order to reconceptualize the abstractive operations of these technologies. And it will propose such a reconceptualization by focusing on the issue of explainability in artificial intelligence and investigating the philosophical implications of computer programs being no longer constrained by the limits of human knowledge. This freedom will be understood as a form of autonomy from human modes of abstraction and will be related to questions about representation. The notion of ‘incommensurability’ will be introduced to discuss the discrepancy between the abstractive choices of humans and those of computing machines.

在是次演講中,Beatrice Fazi 將闡述她近年在人工智能哲學方面的研究,並思考當代人工智能在「深度學習」範疇的研究。深度學習在過去十年取得了顯著的成功,但與此同時,其運作方式往往不透明、難以解讀,亦因此為人詬病。講座將聚焦深度學習的黑盒面向,以重新概念化這類技術的抽象化操作;這種概念上的重塑,重點在於人工智能的可解釋性問題,並探討當電腦程式不再受人類知識的局限時所帶來的哲學含義。這種自由可被理解為一種擺脫人類抽象化模式的自主,並與表徵的問題相關。講者會引入「不相稱」(incommensurability)這一概念,討論人類的抽象化選擇與計算機器的抽象化選擇之間的落差。
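To make the “black box” point concrete, here is a minimal sketch (not drawn from Fazi’s talk, and assuming only numpy and scikit-learn are available): a small multilayer network is trained on a synthetic rule, and inspecting its learned parameters afterwards yields only arrays of weights, not a human-legible account of that rule.

```python
# Minimal illustrative sketch: a tiny "deep" model learns a hidden rule,
# but its learned parameters explain nothing in human terms.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # 500 samples, 10 synthetic features
y = (X[:, 0] * X[:, 3] > 0).astype(int)   # the hidden rule the model must recover

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X, y)

print("training accuracy:", clf.score(X, y))
# The only "explanation" the model offers is its weight matrices, layer by layer:
for i, W in enumerate(clf.coefs_):
    print("layer", i, "weight matrix shape:", W.shape)
# Nothing in these matrices states the rule "x0 * x3 > 0" in narrative form;
# the mapping is effective yet opaque, which is the sense of "black box" at issue.
```

The point of the sketch is not the particular library: any sufficiently deep model would show the same gap between the abstractive choices encoded in its weights and the kind of explanation a human reader can follow.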

|About the Speaker  關於講者



Beatrice Fazi is a philosopher working on issues and questions generated by contemporary technoscience. She is Lecturer in Digital Humanities in the School of Media, Arts and Humanities at the University of Sussex (United Kingdom) and the author of the book Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics, published by Rowman & Littlefield International in 2018. Her research focuses on the role and scope of computation, thought and abstraction in the twenty-first century. She has written extensively on the ontological and epistemological implications of algorithmic decision-making, on digital aesthetics, on artificial intelligence and the automation of thought.

哲學家 Beatrice Fazi 致力於研究當代技術科學所衍生的議題和問題。她是英國蘇塞克斯大學媒體、藝術和人文學院數碼人文學科的講師,也是 Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics(中文暫譯:偶然的計算:計算美學中的抽象、經驗和不確定性)(萊曼和賴特菲爾德出版社;2018年出版)一書的作者。她的研究重點是二十一世紀中計算、思維和抽象化的角色和範圍;她就演算法決策的本體論和認識論含義、數碼美學、人工智能以及思想的自動化等議題撰寫了大量文章。


Trustworthy Machine Learning
值得信賴的機器學習

* Please enter the password to view the video. (Hint: the speaker’s first name, six lowercase letters.)

2021-09-20 19:30
Adrian Weller (Moderator: Héctor Rodriguez)

Machine Learning systems are being deployed in ways that offer great potential to benefit society, but this also raises important concerns. Dr Weller will discuss these concerns and suggest ways we can address them by working to make systems, and our use of them, more deserving of well-earned trust.

機器學習系統的部署方式為社會帶來巨大的潛在裨益,但同時也引起了重要的關注。Adrian Weller 將討論這些問題,並建議我們如何努力令這些系統以至我們對它們的使用,更值得獲得信任,從而回應這些挑戰。

|About the Speaker  關於講者

Adrian Weller is Programme Director for AI at The Alan Turing Institute, the UK national institute for Data Science and AI, and is also a Turing AI Fellow leading work on trustworthy Machine Learning (ML). He is a Principal Research Fellow in ML at the University of Cambridge, and at the Leverhulme Centre for the Future of Intelligence where he is Programme Director for Trust and Society. His interests span AI, its commercial applications and helping to ensure beneficial outcomes for society. He serves on several boards including the Centre for Data Ethics and Innovation. He is co-director of the European Laboratory for Learning and Intelligent Systems (ELLIS) programme on Human-centric Machine Learning, and a member of the World Economic Forum (WEF) Global AI Council. Previously, Weller held senior roles in finance.

Adrian Weller 是艾倫圖靈研究所(英國國家數據科學和人工智能研究所)的人工智能項目總監,也是圖靈人工智能研究員(Turing AI Fellow),領導可信賴機器學習的研究工作。他是劍橋大學機器學習的首席研究員,也是勒沃休姆未來智能中心的信任與社會項目總監。他的興趣涵蓋人工智能及其商業應用,並致力確保科技為社會帶來裨益。他出任多個理事會成員,包括數據道德與創新中心。他是歐洲學習和智能系統實驗室(ELLIS)以人為本機器學習計劃的聯合總監,也是世界經濟論壇全球人工智能委員會的成員。此前,Weller 在金融界身居要職。


Playing with (or Being Played by) the Black Box: The Subjective Experience of Self-Presentation for Algorithmic Audiences
玩弄黑盒子(或被黑盒子玩弄):向演算法觀眾自我呈現的主觀體驗

2021-09-23 21:00
Frank Pasquale (Moderator: Héctor Rodriguez)

Scholars in law, computer science, and social science have critiqued the rise of “black box” AI, particularly that which produces results and judgments that are too complex to be explained in standard narrative forms (or even scientific notation). When transparency is impossible, how can the system be legitimate? One emerging answer is to permit the public at large to “play” with the system: to test what happens when different inputs are entered.

For example, a firm may use a black box hiring algorithm to analyze applicants’ faces on videochat, and their cover letters, to decide whom to interview. It may be impossible to explain how the machine vision behind the system matches the selected applicants to prototypes of ideal hires. However, the system might be released online, so that persons can practice what types of expression, speed of speech, or writing tends to give them a higher matching score, and what leads to lower scores. In other words, they can play with the AI, much as a person might play chess or Go against a software program.

However, the question arises: who is playing whom? The human confronts the machine trying to obtain a higher score; but the machine’s “objective” might be framed as an effort to get the human to make the expressions or write in the style it most highly values. While this question of characterization might seem like a mere trick of words, it actually suggests some deep normative principles to guide the development of accountable AI. Avoiding alienation—including a sense of meaninglessness and powerlessness—will depend first on an empathetic, moral, and even aesthetic understanding of how persons experience their encounters with AI.

法律、計算機科學和社會科學領域的學者一直批評「黑盒子」人工智能的興起,尤其是那些產生的結果和判斷過於複雜、無法以標準敘述形式(甚至科學符號)解釋的人工智能。當透明度無法實現時,這樣的系統何以具有正當性?一個新興的答案是讓大眾「把玩」這個系統:輸入不同的內容,測試會發生甚麼。

例如,一家公司可能會使用黑盒子招聘演算法,分析申請人在視像對話中的面孔及其求職信,以決定邀請誰來面試。系統背後的機器視覺如何將入選的申請人與理想員工的原型相匹配,也許根本無法解釋。然而,系統或會在網上開放,讓人們可以試驗哪類表情、語速或寫作方式往往能換來較高的匹配分數,哪些則令分數降低。換句話說,大眾可以與人工智能「對弈」,就像人與軟件程式下國際象棋或圍棋一樣。

然而,問題來了:誰在玩弄誰?人類面對機器,試圖獲得更高分數;但機器的「目標」也可以被理解為設法令人類做出它最看重的表情,或以它最推崇的風格寫作。雖然這個描述上的問題看似只是文字把戲,但它實際上指向一些深層的規範原則,可用以指導可問責的人工智能的發展。要避免異化(包括無意義感和無力感),首先取決於我們能否以同理心、道德以至審美的角度,理解人們與人工智能相遇時的體驗。
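As a concrete illustration of “playing with” a black box, the sketch below is hypothetical and not drawn from Pasquale’s talk: the scoring function and feature names are invented. It probes an opaque score with a grid of inputs and ranks the results, which is all an applicant facing such a system could do: observe which inputs raise or lower the score, without ever seeing why.

```python
# Hypothetical sketch of probing a black-box score; all names are illustrative.
import itertools

def opaque_score(speech_rate: float, smile_ratio: float, letter_length: int) -> float:
    """Stand-in for a sealed model: callers see only inputs in, a score out."""
    return (0.4 * smile_ratio
            - 0.2 * abs(speech_rate - 2.5)
            + 0.001 * min(letter_length, 400))

speech_rates   = [1.5, 2.5, 3.5]    # words per second in the video interview
smile_ratios   = [0.2, 0.5, 0.8]    # fraction of frames classified as smiling
letter_lengths = [200, 400, 800]    # word count of the cover letter

# "Play" with the system: try every combination of inputs and record the score.
probes = [((r, s, n), opaque_score(r, s, n))
          for r, s, n in itertools.product(speech_rates, smile_ratios, letter_lengths)]

# Rank the probes: the applicant learns what the system rewards, never why.
for inputs, score in sorted(probes, key=lambda p: p[1], reverse=True)[:5]:
    print(inputs, round(score, 3))
```

Read against the abstract, the loop is the human “move” and the scoring function is the machine’s: whichever expressions or writing styles the probes reveal as high-scoring are exactly the behaviours the system ends up eliciting from the person.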

|About the Speaker  關於講者

Frank Pasquale is an expert on the law of AI, algorithms, and machine learning. He is a Professor of Law at Brooklyn Law School, a Visiting Scholar at the AI Now Institute, an Affiliate Fellow at Yale University’s Information Society Project, and a member of the American Law Institute. He is co-editor-in-chief of the Journal of Cross-Disciplinary Research in Computational Law (CRCL), based in the Netherlands, and a member of an Australian Research Council (ARC) Centre of Excellence on Automated Decision-Making & Society (ADM+S). His book The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015) has been recognized as a landmark study on the law and political economy of information. His New Laws of Robotics: Defending Human Expertise in the Age of AI (Harvard University Press 2020) rethinks the political economy of automation, to promote human capacities as the irreplaceable center of an inclusive economy.

Frank Pasquale 是人工智能、演算法和機器學習相關法律方面的專家。他是布魯克林法學院的法學教授、AI Now 研究所的訪問學人、耶魯大學資訊社會項目的附屬研究員,以及美國法律學會(American Law Institute)的成員。他是設於荷蘭的《計算法跨學科研究期刊》(CRCL)的聯合主編,也是澳大利亞研究委員會(ARC)自動化決策與社會卓越中心(ADM+S)的成員。他的著作《黑箱社會:掌控金錢和信息的數據法則》(哈佛大學出版社;2015年出版)被公認為資訊的法律與政治經濟學研究的里程碑式著作。他的 New Laws of Robotics: Defending Human Expertise in the Age of AI(中文暫譯:新機器人法則:在人工智能時代捍衛人類專業技能)(哈佛大學出版社;2020 年出版)重新思考自動化的政治經濟學,以促使人類能力成為包容性經濟中不可替代的核心。