The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations and potential range of applications, then explores questions around AI value alignment, well-being, safety and malicious uses, and considers the deployment of advanced assistants at a societal scale.
In current practice, AI is eroding the authors' theoretical understanding of cognition rather than advancing and enhancing it, and this situation could be remedied by loosening the grip of the currently dominant view of AI and returning to the idea of AI as a theoretical tool for cognitive science.
It is concluded that not everything techno-scientifically possible is ethically acceptable, and that an intelligent machine programmed by algorithms cannot be equated with human beings, who are capable of self-awareness, self-determination, reflection on their own existence, and awareness of their uniqueness, among other vital differences.
This work proposes a post-training framework called Reinforcement Learning via Self-Play (RLSP) and offers a theory of why the RLSP search strategy is better suited to LLMs, inspired by a remarkable result showing that chain-of-thought (CoT) provably increases the computational power of LLMs.
This Essay proposes a new hierarchical model linking genes to vital rates, enabling us to critically reevaluate the DST and DTA in terms of their relationship to evolutionary genetic theories of aging (mutation accumulation and antagonistic pleiotropy).
It is concluded that biology’s advantage over engineering arises from better integration of subsystems, and four fundamental obstacles that roboticists must overcome are identified.
It is posited that integrating causal considerations is vital to facilitating meaningful physical interactions with the world; misconceptions about causality in this context are demystified, and an outlook for future research is presented.
The socio-technical nature of RAI limitations and the resulting necessity of producing socio-technical solutions are considered, bridging the gap between the theoretical considerations of RAI and on-the-ground processes that currently shape how AI systems are built.
A typology is proposed that distinguishes the different stages of the AI life-cycle, the high-level ethical principles that should govern their implementation, and the tools with the potential to foster compliance with these principles, encompassing both technical and conceptual resources.
This paper delves into the ethical implications of AI in the Metaverse through the analysis of real-world case studies, including Horizon Worlds, Decentraland, Roblox, Sansar, and Rec Room, revealing recurring concerns related to content moderation.
It is argued that whether AI is conscious is less of a concern than the fact that AI can be considered conscious by users during human-AI interaction, because this ascription of consciousness can lead to carry-over effects on human-human interaction.
The stability of universal nomenclatural systems is threatened by recent discussions asking for a fairer nomenclature, raising the possibility of bulk revision processes for “inappropriate” names.
It is shown that it is difficult to find one theory of disease that captures all paradigm cases of diseases, while convincingly excluding pregnancy, and that applying theories of disease to the case of pregnancy can illuminate inconsistencies and problems within these theories.
In response to the call for concerted efforts to safeguard the authenticity of information in the age of AI, the necessity of implementing effective detection, verification, and explainability mechanisms to counteract the potential harms arising from the proliferation of AI-generated inauthentic content and science is shared.
An abductive-deductive framework named Logic-Explainer is presented, which integrates LLMs with an external backward-chaining solver to refine step-wise natural language explanations and jointly verify their correctness, reducing incompleteness and minimising redundancy.
It is argued that the third wave of AI ethics rests on a turn towards a structural approach for uncovering ethical issues on a broader scale, often paired with an analysis of power structures that prevent the uncovering of these issues.
The key to unlocking the next frontier in holistic nursing care lies in nurses navigating the delicate balance between artificial intelligence and the core values of empathy and compassion.
A collaborative and interdisciplinary approach is needed to address planetary health challenges, including working across sectors and investing in research to better understand the complex interactions between human health and the environment.
The goal of the fourth installment is to question how XAI assumptions fare in the era of LLMs and examine how human-centered perspectives can be operationalized at the conceptual, methodological, and technical levels.
This work draws on a case study to warn against misuse through “CV inflation,” and argues for prudence by editors and restraint by scholars, inviting them to focus on the quality, rather than the quantity, of LTEs published.
The concept of suffering is broadened to include suffering that cannot be observed contemporaneously and the suffering of loved ones, in order to avoid prolonged dying with severe suffering.
A bibliometric analysis of 286 research articles, identified from the Scopus and Web of Science databases, on AI ethics in services revealed that most of the research on AI ethics in services is from the United Kingdom, the U.S., and China.
This article proposes adequacy conditions for a representation in an LLM to count as belief-like and establishes four criteria that balance theoretical considerations with practical constraints, which together help lay the groundwork for a comprehensive understanding of belief representation in LLMs.
The case for moral entanglement in this context is made through exploration of participants' vulnerability, uncompensated risks and burdens, depth of relationship with the research team, and dependence on researchers in implanted neurotechnology trials.
The crucial role of medical educators in adapting to changes in ethical challenges introduced by AI is explored, ensuring that ethical education remains a central and adaptable component of medical curricula.
The merits of advocating for a collaborative, informed, and flexible regulatory approach that balances innovation with individual rights and public welfare, fostering a trustworthy AI-driven healthcare ecosystem, are discussed.
It is shown that observed 'irrationalities', preferences, and emotions stem from the necessity for the authors' early sensory systems to adapt and process information while accounting for the metabolic costs and internal states of the organism.