2022 Human-Centered Care Technology (人本照護科技), Week 5


!"#$%&'()*+!,-./01 !"# $%&'()*+,-.+/01


What about humans should never be replaced by artificial intelligence?


Human (HUMAN): from which angle do we approach it?
1. Evolution or history
2. Molecular and biochemical composition
3. Psychological factors or processes
4. Social relations, environment, or structure
5. Supernatural or spiritual forces


What about humans should never be replaced by artificial intelligence? Human 1 (evolutionary history), Human 2 (molecular biology), Human 3 (psychological traits), Human 4 (social relations)


How do you imagine future artificial intelligence?



Business management, productivity tools, customer management, human resources, marketing, finance, data

https://venturebeat.com/2017/04/23/113-enterprise-ai-companies-you-should-know/



Artificial intelligence (AI): “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience”

https://www.britannica.com/technology/artificial-intelligence https://www.teahub.io/viewwp/xiTTbJ_red-circuit-board/


What about humans should never be replaced by artificial intelligence? Human 1 (evolutionary history)

(Theoretical possibility) all-powerful / super / general artificial intelligence

Human 2 (molecular biology), Human 3 (psychological traits), Human 4 (social relations)

N different forms of artificial intelligence


The theoretical possibility of a universal computer. How do we begin to think about this question? With what theory? Computability theory: about the question whether a function can be computed. Turing machine (a mathematical model of computation) (1936). https://plato.stanford.edu/entries/turing-machine/

Alan Turing https://en.wikipedia.org/wiki/Turing_machine
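As a concrete illustration of what a Turing machine is, here is a minimal simulator sketch in Python (my own example, not from the slides; the machine, its transition table, and the function names are invented purely for illustration). It shows the core idea: a finite control plus a transition table acting on an unbounded tape.

# A minimal Turing machine simulator: a finite set of states, a tape alphabet,
# and a transition table (state, read symbol) -> (write symbol, head move, next state).
def run_turing_machine(transitions, tape, state="q0", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit of the input and halt on the first blank.
flip_bits = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip_bits, "10110"))   # prints 01001_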






What about humans should never be replaced by artificial intelligence? Human 1 (evolutionary history)

(Theoretical possibility) all-powerful / super / general artificial intelligence

Human 2 (molecular biology), Human 3 (psychological traits), Human 4 (social relations)

N different forms of artificial intelligence





Mind as Computer
• Computational theories of mind: the mind as a computational system; to ‘compute’
• Machine state functionalism (a stronger position): the Turing-machine-related computational descriptions/properties constitute the mind
• Multiple realizability: computational descriptions/properties are independent of other types of descriptions/properties (physical, neural, psychological, social, and so on)
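A small sketch of multiple realizability (my own illustration, not the lecturer's example; both function names are invented for this purpose): the same computational description, "compute the parity of a bit string," is realized by two structurally different systems, a table-driven finite automaton and a chain of XOR operations.

def parity_automaton(bits):
    # Realization 1: a two-state finite automaton driven by a lookup table.
    table = {("even", 0): "even", ("even", 1): "odd",
             ("odd", 0): "odd", ("odd", 1): "even"}
    state = "even"
    for b in bits:
        state = table[(state, b)]
    return state

def parity_xor_chain(bits):
    # Realization 2: a chain of XOR gates, i.e. a different "substrate".
    acc = 0
    for b in bits:
        acc ^= b
    return "odd" if acc else "even"

bits = [1, 0, 1, 1, 0]
# Same computational property, two different realizations.
assert parity_automaton(bits) == parity_xor_chain(bits) == "odd"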




What about humans should never be replaced by artificial intelligence? Human 1 (evolutionary history)

(Theoretical possibility) all-powerful / super / general artificial intelligence

Human 2 (molecular biology), Human 3 (psychological traits), Human 4 (social relations); mind = Turing-machine-related computational properties; multiple realizability of the computational mind

N different forms of artificial intelligence


What about humans should never be replaced by artificial intelligence? Human 3.1 (machine state functionalism about the mind): computational properties

(Theoretical possibility) all-powerful / super / general artificial intelligence

Multiple realizability of the computational mind

Human 1 (evolutionary history), Human 3 (psychological traits), Human 2 (molecular biology), Human 3.2 (emotion), Human 4 (social relations), Human 3.N (...)

The physical basis that realizes the computation


The hardware and computational basis of artificial intelligence. According to multiple realizability, computational properties, concepts, or kinds are independent of the physical or hardware basis. Quantum computation and quantum computers, however, challenge this claim: what counts as an “efficient algorithm” (a technical term with a specific definition) depends on the model of computation or the material basis.

https://www.ibm.com/quantum-computing/what-is-quantum-computing/
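One way to see why "efficient" depends on the model of computation is to compare query counts for unstructured search. The sketch below is my own illustration, not from the slides: any classical algorithm needs on the order of N oracle queries in the worst case, while the standard analysis of Grover's algorithm gives roughly (pi/4)·sqrt(N) queries on a quantum computer. The code only counts classical queries and prints the Grover estimate for comparison; it does not implement a quantum algorithm.

import math

def classical_search_queries(marked_index, n_items):
    # Worst-case classical unstructured search: query the oracle item by item.
    queries = 0
    for i in range(n_items):
        queries += 1
        if i == marked_index:
            break
    return queries

N = 1_000_000
classical = classical_search_queries(N - 1, N)          # ~N queries (1,000,000)
grover = math.ceil(math.pi / 4 * math.sqrt(N))          # ~(pi/4)*sqrt(N) queries (786)
print(classical, grover)

The same point holds even more sharply for factoring: the best known classical algorithms are super-polynomial, whereas Shor's algorithm runs in polynomial time on a quantum computer.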


https://www.ibm.com/quantum-computing/systems






Pay special attention to 1.1, 5.5 & 5.7.


What about humans should never be replaced by artificial intelligence? Human 3.1 (machine state functionalism about the mind): computational properties

Universal computer based on classical computation

Universal computer based on quantum computation

Multiple realizability of the computational mind

Human 1 (evolutionary history), Human 3 (psychological traits), the physical basis that realizes the computation, Human 2 (molecular biology), Human 3.2 (emotion), Human 4 (social relations), Human 3.N (...)

The material basis from which qubits are made


Monism and pluralism
Much of the discussion of AI and the mind that emerged in the mid-to-late 20th century presupposes a monist position, for example: mind is a kind of computational property or kind = the (fixed, unchanging) metaphysical essence of mind = the (fixed, unchanging) metaphysical essence of humans.
Pluralism: humans are complex beings, and at least methodologically we need to understand them from different angles. Their metaphysical essence may itself be plural, perhaps even changeable, with a processual / temporal / historical character.


What about humans should never be replaced by artificial intelligence? Never? If human individuals and groups are themselves some form of dynamical system, that is, if humans have an essentially processual or temporal aspect and are essentially subject to change, then on what logical basis can we make such a claim about “never”?


The ethical and moral dimension: should or should not replacement happen? Humans / Artificial intelligence

https://medium.com/tomorrow-plus-plus/a-dozen-things-about-ai-ethics-4f9a5f3215a3


The ethical and moral dimension: the question of whether replacement should happen cannot be asked in the abstract. The relationship between humans and artificial intelligence is already complex, so first clarify which combination of the two your question starts from. Different human social contexts impose different constraints, and they cannot all be treated the same way. https://medium.com/tomorrow-plus-plus/a-dozen-things-about-ai-ethics-4f9a5f3215a3




!"#$%&'




Protecting human autonomy Use of AI can lead to situations in which decision-making power could be transferred to machines. The principle of autonomy requires that the use of AI or other computational systems does not undermine human autonomy. In the context of health care, this means that humans should remain in control of health-care systems and medical decisions. Respect for human autonomy also entails related duties to ensure that providers have the information necessary to make safe, effective use of AI systems and that people understand the role that such systems play in their care. It also requires protection of privacy and confidentiality and obtaining valid informed consent through appropriate legal frameworks for data protection.

Should we replace humans as decision-makers with machines in some clinical context?


Ensuring transparency, explainability and intelligibility. AI technologies should be intelligible or understandable to developers, medical professionals, patients, users and regulators. Two broad approaches to intelligibility are to improve the transparency of AI technology and to make AI technology explainable. Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology and that such information facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used. AI technologies should be explainable according to the capacity of those to whom they are explained.

Should we replace humans as cognitive agents with machines in some clinical context?


Fostering responsibility and accountability. Humans require clear, transparent specification of the tasks that systems can perform and the conditions under which they can achieve the desired performance. Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they can perform those tasks and that AI is used under appropriate conditions and by appropriately trained people. Responsibility can be assured by application of “human warranty”, which implies evaluation by patients and clinicians in the development and deployment of AI technologies. Human warranty requires application of regulatory principles upstream and downstream of the algorithm by establishing points of human supervision. If something goes wrong with an AI technology, there should be accountability. Appropriate mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.

Should we replace humans as blameworthy agents with machines in some clinical context?




!"#$%&'()*+!,-./01 !"#$%&'()*+,-.'/01 23.'45&678&9:;&1 <-=>'?@

