The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations, and their potential range of applications; it then explores questions around AI value alignment, well-being, safety, and malicious uses, and considers the deployment of advanced assistants at a societal scale.
In current practice, AI is eroding our theoretical understanding of cognition rather than advancing and enhancing it; this situation could be remedied by releasing the grip of the currently dominant view of AI and returning to the idea of AI as a theoretical tool for cognitive science.
It is argued that traditional technocratic standards are failing to integrate normative considerations into biomedical translation, and that a translational domain is needed that moves beyond safety and efficacy toward anticipating how proposed technologies will be effective in society as it exists.
This work proposes a post-training framework called Reinforcement Learning via Self-Play (RLSP) and offers a theory of why the RLSP search strategy is well suited to LLMs, inspired by a remarkable result showing that chain-of-thought (CoT) provably increases the computational power of LLMs.
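The summary above compresses a full method into one sentence. Purely as a generic illustration of verifier-rewarded self-play post-training (not the paper's actual RLSP algorithm; the solver, verifier, and update rule below are invented toy stand-ins), such a loop might look like:

```python
import random

# Toy self-play post-training loop (illustrative only, NOT the RLSP paper's
# algorithm): a "policy" chooses a chain-of-thought step budget, an automatic
# verifier rewards correct answers, and the policy weights are nudged toward
# budgets that earn reward.

def solve(problem, steps):
    """Hypothetical solver: with enough CoT steps it computes, else it guesses."""
    a, b = problem
    return a + b if steps >= 2 else random.choice([a + b, a * b])

def verify(problem, answer):
    a, b = problem
    return answer == a + b  # reward signal from an automatic verifier

def train(episodes=2000, seed=0):
    random.seed(seed)
    weights = {1: 1.0, 2: 1.0, 3: 1.0}  # preference over step budgets
    for _ in range(episodes):
        problem = (random.randint(0, 9), random.randint(0, 9))
        steps = random.choices(list(weights), weights=weights.values())[0]
        reward = 1.0 if verify(problem, solve(problem, steps)) else 0.0
        weights[steps] = max(weights[steps] + 0.1 * (reward - 0.5), 0.01)
    return weights

w = train()
```

After training, the weights concentrate on budgets large enough for full chain-of-thought, mirroring (in toy form) the claim that extra CoT computation is what the search strategy exploits.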
It is argued that whether AI is conscious matters less than the fact that users can ascribe consciousness to AI during human-AI interaction, because this ascription can lead to carry-over effects on human-human interaction.
The socio-technical nature of the limitations of responsible AI (RAI), and the resulting necessity of producing socio-technical solutions, are considered, bridging the gap between theoretical considerations of RAI and the on-the-ground processes that currently shape how AI systems are built.
Recommendations are described from the Racial Equity, Diversity, and Inclusion (REDI) Task Force, which was commissioned by the Association of Bioethics Program Directors to prioritize and strengthen anti-racist practices in bioethics programmatic endeavors and to evaluate and develop specific goals for advancing REDI.
It is concluded that biology’s advantage over engineering arises from better integration of subsystems, and four fundamental obstacles that roboticists must overcome are identified.
This paper considers a series of cases in which "suffering" is invoked and analyzes them in light of prominent theories of suffering to outline ethical hazards that arise as a result of imprecise usage of the concept and offer practical recommendations for avoiding them.
This paper delves into the ethical implications of AI in the Metaverse through the analysis of real-world case studies, including Horizon Worlds, Decentraland, Roblox, Sansar, and Rec Room, revealing recurring concerns related to content moderation.
It is posited that integrating causal considerations is vital to facilitating meaningful physical interactions with the world; misconceptions about causality in this context are demystified, and an outlook for future research is presented.
This Essay proposes a new hierarchical model linking genes to vital rates, enabling us to critically reevaluate the DST and DTA in terms of their relationship to evolutionary genetic theories of aging (mutation accumulation and antagonistic pleiotropy).
A typology is proposed that distinguishes the different stages of the AI life-cycle, the high-level ethical principles that should govern their implementation, and the tools with the potential to foster compliance with these principles, encompassing both technical and conceptual resources.
This article identifies and reconciles the ethical issues imposed on CME by climate change, including the dispersion of related causes and effects, the transdisciplinary and transhuman nature of climate change, and the historic divorce of CME from the environment.
This perspective article focuses on a recent application of Artificial Intelligence: generative AI models, such as ChatGPT, which are based on machine learning and applied to Natural Language Processing tasks.
It is argued that in order to enable emergent behaviors, digital twins should be designed to reconstruct the behavior of a physical twin by “dynamically assembling” multiple digital “components”.
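As a hedged illustration of the "dynamically assembling" idea (all names and dynamics below are invented for illustration, not taken from the article), a digital twin might compose independently authored components at runtime, with system behavior emerging from their interaction rather than from one monolithic model:

```python
# Illustrative sketch: a digital twin assembled at runtime from independent
# digital "components", each modeling one aspect of the physical twin.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

State = Dict[str, float]
Component = Callable[[State, float], State]  # (state, dt) -> partial update

@dataclass
class DigitalTwin:
    state: State
    components: List[Component] = field(default_factory=list)

    def assemble(self, component: Component) -> None:
        """Attach a component at runtime ("dynamic assembly")."""
        self.components.append(component)

    def step(self, dt: float) -> State:
        for component in self.components:
            self.state.update(component(self.state, dt))
        return dict(self.state)

# Two toy components for a heated tank: thermal loss and a heater controller.
def thermal(state, dt):
    return {"temp": state["temp"] - 0.1 * (state["temp"] - 20.0) * dt}

def heater(state, dt):
    power = 5.0 if state["temp"] < 50.0 else 0.0
    return {"temp": state["temp"] + power * dt}

twin = DigitalTwin(state={"temp": 20.0})
twin.assemble(thermal)
twin.assemble(heater)
for _ in range(100):
    twin.step(0.1)
```

Neither component encodes the closed-loop behavior; the temperature settling near the setpoint emerges only once both are assembled, which is the point of composing components rather than hand-coding the whole.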
Responding to the call for concerted efforts to safeguard the authenticity of information in the age of AI, this piece underscores the necessity of implementing effective detection, verification, and explainability mechanisms to counteract the potential harms arising from the proliferation of AI-generated inauthentic content and science.
The key to unlocking the next frontier in holistic nursing care lies in nurses navigating the delicate balance between artificial intelligence and the core values of empathy and compassion.
The stability of universal nomenclatural systems is threatened by recent discussions asking for a fairer nomenclature, raising the possibility of bulk revision processes for “inappropriate” names.
This article proposes adequacy conditions for a representation in an LLM to count as belief-like and establishes four criteria that balance theoretical considerations with practical constraints, which together help lay the groundwork for a comprehensive understanding of belief representation in LLMs.
The paper explores how bias, lack of transparency, and challenges in maintaining patient trust can undermine the effectiveness and fairness of AI applications in healthcare, and outlines pathways toward more responsible and inclusive AI implementation.
It is argued that the third wave of AI ethics rests on a turn towards a structural approach for uncovering ethical issues on a broader scale, often paired with an analysis of power structures that prevent the uncovering of these issues.
The goal of the fourth installment is to question how XAI assumptions fare in the era of LLMs and examine how human-centered perspectives can be operationalized at the conceptual, methodological, and technical levels.
An abductive-deductive framework named Logic-Explainer is presented, which integrates LLMs with an external backward-chaining solver to refine step-wise natural language explanations and to jointly verify their correctness, reduce incompleteness, and minimise redundancy.
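Logic-Explainer's actual solver and interface are specified in the paper; purely as an illustration of what a backward-chaining solver does (the rules, facts, and toy single-variable unification below are invented for this sketch), consider:

```python
# Minimal backward chaining over Horn clauses: a goal is proved either as a
# known fact or by recursively proving the body of a rule whose head matches.
Rules = {
    "mammal(X)": [["has_fur(X)"]],
    "animal(X)": [["mammal(X)"]],
}
Facts = {"has_fur(rex)"}

def prove(goal, rules, facts, depth=0):
    """Return True if `goal` follows from `facts` under `rules`."""
    if depth > 10:          # crude cycle/depth guard
        return False
    if goal in facts:
        return True
    # Toy unification: substitute the variable X with the goal's constant.
    pred, _, arg = goal.partition("(")
    arg = arg.rstrip(")")
    for head, bodies in rules.items():
        if head.startswith(pred + "("):
            for body in bodies:
                subgoals = [g.replace("X", arg) for g in body]
                if all(prove(g, rules, facts, depth + 1) for g in subgoals):
                    return True
    return False
```

Here `prove("animal(rex)", Rules, Facts)` succeeds by chaining backward through `mammal(rex)` to the fact `has_fur(rex)`, while an unsupported goal fails; a framework in this style can use such success/failure signals to check each step of an LLM-generated explanation.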
This essay in honor of ASQ's 70th volume surveys how technology-driven changes in scholarly publishing have introduced algorithmic management to organizational research and proposes a set of reforms to preserve the sacredness of craft and community at the core of the authors' scholarly work.
The paper discusses the merits of advocating for a collaborative, informed, and flexible regulatory approach that balances innovation with individual rights and public welfare, fostering a trustworthy AI-driven healthcare ecosystem.
A comprehensive overview of AI in medicine is provided, exploring its technical capabilities, practical applications, and ethical implications, and identifies several advantages of AI in medicine, including its ability to improve diagnostic accuracy, enhance surgical outcomes, and optimize healthcare delivery.
The transformation of educational environments from hierarchical instructionism to constructionist models that emphasize learner autonomy and interactive, creative engagement is discussed, providing insights for educators and policymakers seeking to harness digital innovations to foster adaptive, student-centered learning experiences.