
Theoretical research

The theoretical research of the Digital Ethics Centre is divided into three main themes: Design for Values Methods, Moral Values, and Epistemic Values, complemented by applied projects.

1. Design for Values Methods

Digital technologies need to be designed and used responsibly, but how do we go about doing so? We believe that it is crucial to actively design for a range of values. But what are values, how do we identify and specify them, and how do we verify that a piece of technology embodies the relevant values? How do we deal with changing or conflicting values as part of the design process? Research on Conceptual Engineering, Meta-ethics and the Design for Values methodology helps to answer these questions underlying every applied project.

Key publications

Designing for Human Rights in AI (Big Data & Society, 2020)
Aizenberg, E., & Van den Hoven, J. (2020). Designing for human rights in AI. Big Data & Society 7(2), 1-14.

In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence (AI) systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people's fundamental human rights. Despite increasing attention to these urgent challenges in recent years, technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context and the critical input of societal stakeholders who are impacted by the technology. On the other hand, calls for more ethically and socially aware AI often fail to provide answers for how to proceed beyond stressing the importance of transparency, explainability, and fairness. 
Bridging these socio-technical gaps and the deep divide between abstract value language and design requirements is essential to facilitate nuanced, context-dependent design choices that will support moral and social values. In this paper, we bridge this divide through the framework of Design for Values, drawing on methodologies of Value Sensitive Design and Participatory Design to present a roadmap for proactively engaging societal stakeholders to translate fundamental human rights into context-dependent design requirements through a structured, inclusive, and transparent process. Download paper

How Do Technological Artefacts Embody Moral Values? (Philosophy & Technology, 2021)
Klenk, M. (2021). How Do Technological Artefacts Embody Moral Values? Philosophy & Technology 34, 525-544.

According to some philosophers of technology, technology embodies moral values in virtue of its functional properties and the intentions of its designers. But this paper shows that such an account makes the values supposedly embedded in technology epistemically opaque and does not allow for values to change. To overcome these shortcomings, the paper introduces the novel Affordance Account of Value Embedding as a superior alternative. On this account, artefacts bear affordances; that is, artefacts make certain actions likelier given the circumstances. Based on an interdisciplinary perspective that invokes recent moral anthropology, I conceptualize affordances as response-dependent properties: they depend on intrinsic as well as extrinsic properties of the artefact. We have reason to value these properties. Therefore, artefacts embody values and are not value-neutral, which has practical implications for the design of new technologies. Download paper

AI Design and Governance (The State of AI Ethics Report 6, 2022)
Klenk, M. (2022). AI Design and Governance. The State of AI Ethics Report 6, Montreal AI Ethics Institute, 150-152. 
"Another new addition to this report which builds on our push towards moving from principles to practice is the chapter on AI Design and Governance which has the goal of dissecting the entire ecosystem around AI and the AI lifecycle itself to gain a very deep understanding of the choices and decisions that lead to some of the ethical issues that arise in AI. It constitutes about one-sixth of the report and is definitely something that I would encourage you to read in its entirety to gain some new perspectives on how we can actualize Responsible AI." Download report Handbook of Ethics, Values, and Technological Design (Springer, 2015) Van den Hoven, J., Vermaas, P., & van de Poel, I. (2015). Handbook of Ethics, Values, and Technological Design. Springer, Netherlands. This handbook enumerates every aspect of incorporating moral and societal values into technology design, reflects the fact that the latter has moved on from strict functionality to become sensitive to moral and social values such as sustainability and accountability. Aimed at a broad readership that includes ethicists, policy makers and designers themselves, it proffers a detailed survey of how technological, and institutional, design must now reflect awareness of ethical factors such as sustainability, human well-being, privacy, democracy and justice, inclusivity, trust, accountability, and responsibility (both social and environmental). Edited by a trio of highly experienced academic philosophers with a specialized interest in the ethical dimensions of technology and human creativity, this syncretic handbook collates an array of published material and offers a studied, practical introduction to the field. The volume addresses myriad aspects at the intersection of technology design and ethics, enabling designers to adopt a constructive approach in anticipating, preventing, and resolving societal and ethical issues affecting their work. 
It covers underlying theory; discrete values such as democracy, human well-being, sustainability and justice; and application domains themselves, which include architecture, bio- and nanotechnology, and military hardware. As the first exhaustive survey of a field whose importance is characterized by almost exponential growth, it represents a compelling addition to a formerly atomized literature. More info

Theme coordinators: Dr. Michael Klenk, Dr. Herman Veluwenkamp

2. Moral Values

It is crucial to design digital technologies in line with moral values, to understand their societal implications, and to research how technologies change our understanding of moral values. How should we construe and realize values such as accountability, autonomy, democracy, fairness and privacy? We carry out philosophical research on different conceptions of these core values: when exactly is an instance of a digital technology fair? How should accountability be distributed when digital technologies are a central part of the decision-making process? How does technology change our notion of autonomy? Research on moral values helps to answer questions that are central to designing responsible digital technologies.

Key publications

(Online) Manipulation: Sometimes Hidden, Always Careless (Review of Social Economy, 2022)
Klenk, M. (2022). (Online) manipulation: sometimes hidden, always careless. Review of Social Economy 80(2), 85-105.

Ever-increasing numbers of human interactions with intelligent software agents, online and offline, and their increasing ability to influence humans have prompted a surge in attention toward the concept of (online) manipulation. Several scholars have argued that manipulative influence is always hidden. But manipulation is sometimes overt, and when this is acknowledged the distinction between manipulation and other forms of social influence becomes problematic. 
Therefore, we need a better conceptualisation of manipulation that allows it to be overt and yet clearly distinct from related concepts of social influence. I argue that manipulation is careless influence, show how this account helps to alleviate the shortcomings of the hidden-influence view of manipulation, and derive implications for digital ethics. Download paper

Machine Learning and Power Relations (AI & Society, 2022)
Maas, J. (2022). Machine learning and power relations. AI & Society, 1-8.

There has been an increased focus within the AI ethics literature on questions of power, reflected in the ideal of accountability supported by many Responsible AI guidelines. While this recent debate points towards the power asymmetry between those who shape AI systems and those affected by them, the literature lacks normative grounding and misses conceptual clarity on how these power dynamics take shape. In this paper, I develop a workable conceptualization of these power dynamics according to Cristiano Castelfranchi's conceptual framework of power and argue that end-users depend on a system's developers and users, because end-users rely on these systems to satisfy their goals, constituting a power asymmetry between developers, users and end-users. I ground my analysis in the neo-republican moral wrong of domination, drawing attention to legitimacy concerns about the power-dependence relation that follow from the current lack of accountability mechanisms. I illustrate my claims on the basis of a risk-prediction machine learning system, and propose institutional (external auditing) and project-specific solutions (increasing contestability through design-for-values approaches) to mitigate domination. Download paper

Enactive Principles for the Ethics of User Interactions on Social Media: How to Overcome Systematic Misunderstandings Through Shared Meaning-Making (Topoi, 2022)
Marin, L. (2022). 
Enactive Principles for the Ethics of User Interactions on Social Media: How to Overcome Systematic Misunderstandings Through Shared Meaning-Making. Topoi, 1-13.

This paper proposes three principles for the ethical design of online social environments, aiming to minimise the unintended harms caused by users while interacting online, specifically by enhancing the users' awareness of the moral load of their interactions. Such principles would need to account for the strong mediation of the digital environment and the particular nature of user interactions: disembodied, asynchronous, and with ambiguous intent about the target audience. I argue that, by contrast to face-to-face interactions, additional factors make it more difficult for users to exercise moral sensitivity in an online environment. An ethics for social media user interactions is ultimately an ethics of human relations mediated by a particular environment; hence I look towards an enactive-inspired ethics in formulating principles for human interactions online that enhance, or at least do not hinder, a user's moral sensitivity. This enactive take on social media ethics supplements classical moral frameworks by asking us to focus on the relations established through the interactions and the environment created by those interactions. Download paper

Meaningful Human Control over Autonomous Systems: A Philosophical Account (Frontiers in Robotics and AI, 2018)
Santoni de Sio, F. & Van den Hoven, J. (2018). Meaningful Human Control over Autonomous Systems: A Philosophical Account. Frontiers in Robotics and AI, 5:15.

Debates on lethal autonomous weapon systems have proliferated in the past five years. Ethical concerns have been voiced about a possible rise in the number of wrongs and crimes in military operations and about the creation of a "responsibility gap" for harms caused by these systems. 
To address these concerns, the principle of "meaningful human control" has been introduced in the legal-political debate; according to this principle, humans, not computers and their algorithms, should ultimately remain in control of, and thus morally responsible for, relevant decisions about (lethal) military operations. However, policy-makers and technical designers lack a detailed theory of what "meaningful human control" exactly means. In this paper, we lay the foundation of a philosophical account of meaningful human control, based on the concept of "guidance control" as elaborated in the philosophical debate on free will and moral responsibility. Following the ideals of "Responsible Innovation" and "Value-sensitive Design," our account of meaningful human control is cast in the form of design requirements. We identify two general necessary conditions to be satisfied for an autonomous system to remain under meaningful human control: first, a "tracking" condition, according to which the system should be able to respond to both the relevant moral reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates; second, a "tracing" condition, according to which the system should be designed in such a way as to grant the possibility to always trace back the outcome of its operations to at least one human along the chain of design and operation. As we think that meaningful human control can be one of the central notions in the ethics of robotics and AI, in the last part of the paper we start exploring the implications of our account for the design and use of non-military autonomous systems, for instance self-driving cars. Download paper

Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them (Philosophy & Technology, 2021)
Santoni de Sio, F., & Mecacci, G. (2021). Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them. 
Philosophy & Technology 34, 1057-1084.

The notion of a "responsibility gap" with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that "learning automata" may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems – gaps in culpability, moral and public accountability, and active responsibility – caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also happen with non-learning systems. The paper clarifies which aspect of AI may cause which gap in which form of responsibility, and why each of these gaps matters. It proposes a critical review of partial and unsatisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem ("fatalism"), those which dismiss it as a false problem ("deflationism"), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved by simply introducing new technical and/or legal tools ("solutionism"). The paper also outlines a more comprehensive approach to addressing the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for "meaningful human control", that is, systems aligned with the relevant human reasons and capacities. Download paper

Moving out of the Human Vivarium: Live-in Laboratories and the Right to Withdraw (under review)
Mollen, J. (under review). Moving out of the Human Vivarium: Live-in Laboratories and the Right to Withdraw.

Evil Online (Wiley-Blackwell, 2018)
Cocking, Dean & Jeroen van den Hoven (2018). Evil Online. 
Hoboken, New Jersey: Wiley-Blackwell.

We now live in an era defined by the ubiquity of the internet. From our everyday engagement with social media to trolls on forums and the emergence of the dark web, the internet is a space characterized by unreality, isolation, anonymity, objectification, and rampant self-obsession – the perfect breeding ground for new, unprecedented manifestations of evil. Evil Online is the first comprehensive analysis of evil and moral character in relation to our increasingly online lives. Chapters consider traditional ideas around the phenomenon of evil in moral philosophy and explore how the dawn of the internet has presented unprecedented challenges to older theoretical approaches. Cocking and Van den Hoven propose that a growing sense of moral confusion – moral fog – pushes otherwise ordinary, normal people toward evildoing, and that values basic to moral life such as autonomy, intimacy, trust, and privacy are put at risk by online platforms and new technologies. This new theory of evildoing offers fresh insight into the moral character of the individual, and opens the way for a burgeoning new area of social thought. A comprehensive analysis of an emerging and disturbing social phenomenon, Evil Online examines the morally troubling aspects of the internet in our society. Written not only for academics in the fields of philosophy, psychology, information science, and social science, Evil Online is accessible and compelling reading for anyone interested in understanding the emergence of evil in our digitally-dominated world. More info + excerpt

Theme coordinators: Dr. Filippo Santoni de Sio, Dr. Lavinia Marin

3. Epistemic Values

Digital technologies provide us with large amounts of new information. How should we interact with this wealth of information? When can we rely on these technologies, and under which conditions do we acquire knowledge while using them? How can we make them more transparent and explainable? 
What information do users need to contest decisions based on automated systems? Research on epistemic values looks at the knowledge-related questions that digital technologies give rise to. Our research helps to set standards for the information that digital technologies provide to human users, but also tells us what information is needed to responsibly use, evaluate or overrule digital technologies.

Key publications

Defining Explanation and Explanatory Depth in XAI (under review)
Buijsman, S. (under review). Defining explanation and explanatory depth in XAI.

Spotting When Algorithms Are Wrong (Minds & Machines, 2022)
Buijsman, S., & Veluwenkamp, H. (2022). Spotting When Algorithms Are Wrong. Minds & Machines, 1-22.

Users of sociotechnical systems often have no way to independently verify whether the system output which they use to make decisions is correct; they are epistemically dependent on the system. We argue that this leads to problems when the system is wrong, namely to bad decisions and violations of the norm of practical reasoning. To prevent this from occurring, we suggest the implementation of defeaters: information that a system is unreliable in a specific case (undercutting defeat) or independent information that the output is wrong (rebutting defeat). Practically, we suggest designing defeaters based on the different ways in which a system might produce erroneous outputs, and we analyse this suggestion with a case study of the risk classification algorithm used by the Dutch tax agency. Download paper

Dissecting Scientific Explanation in AI (sXAI): A Case for Medicine and Healthcare (Artificial Intelligence, 2021)
Durán, J. (2021). Dissecting Scientific Explanation in AI (sXAI): A Case for Medicine and Healthcare. Artificial Intelligence 297, 103498.

Explanatory AI (XAI) is on the rise, gaining enormous traction with the computational community, policymakers, and philosophers alike. 
This article contributes to this debate by first distinguishing scientific XAI (sXAI) from other forms of XAI. It further advances the structure for bona fide sXAI, while remaining neutral regarding preferences for theories of explanation. Three core components are under study, namely: i) the structure for bona fide sXAI, consisting in elucidating the explanans, the explanandum, and the explanatory relation for sXAI; ii) the pragmatics of explanation, which includes a discussion of the role of multiple agents receiving an explanation and the context within which the explanation is given; and iii) a discussion of Meaningful Human Explanation, an umbrella concept for the different metrics required for measuring the explanatory power of explanations and the involvement of human agents in sXAI. The kind of AI systems of interest in this article are those utilized in medicine and the healthcare system. The article also critically addresses current philosophical and computational approaches to XAI. Amongst the main objections, it argues that there has been a long-standing interpretation of classifications as explanations, when these should be kept separate. More info

Who Is Afraid of Black-Box Algorithms? On the Epistemological and Ethical Basis of Trust in Medical AI (Journal of Medical Ethics, 2021)
Durán, J. & Jongsma, K. (2021). Who Is Afraid of Black-Box Algorithms? On the Epistemological and Ethical Basis of Trust in Medical AI. Journal of Medical Ethics 47, 329-335.

The use of black-box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black-box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black-box algorithms are less problematic for epistemic reasons than many scholars seem to believe. 
By outlining that more transparency in algorithms is not always necessary, and by explaining that computational processes are indeed methodologically opaque to humans, we argue that the reliability of algorithms provides reasons for trusting the outcomes of medical artificial intelligence (AI). To this end, we explain how computational reliabilism, which does not require transparency and supports the reliability of algorithms, justifies the belief that results of medical AI are to be trusted. We also argue that several ethical concerns remain with black-box algorithms, even when the results are trustworthy. Having justified knowledge from reliable indicators is, therefore, necessary but not sufficient for normatively justifying physicians to act. This means that deliberation about the results of reliable algorithms is required to find out what is a desirable action. Thus understood, we argue that such challenges should not dismiss the use of black-box algorithms altogether but should inform the way in which these algorithms are designed and implemented. When physicians are trained to acquire the necessary skills and expertise, and collaborate with medical informatics and data scientists, black-box algorithms can contribute to improving medical care. Download paper

Sharing (Mis)information on Social Networking Sites: An Exploration of the Norms for Distributing Content Authored by Others (Ethics and Information Technology, 2021)
Marin, L. (2021). Sharing (Mis)information on Social Networking Sites: An Exploration of the Norms for Distributing Content Authored by Others. Ethics and Information Technology, 23(3), 363-372.

This article explores the norms that govern regular users' acts of sharing content on social networking sites. 
Many debates on how to counteract misinformation on social networking sites focus on the epistemic norms of testimony, implicitly assuming that the users' acts of sharing should fall under the same norms as those for posting original content. I challenge this assumption by proposing a non-epistemic interpretation of (mis)information sharing on social networking sites, which I construe as infrastructures for forms of life found online. Misinformation sharing belongs more in the realm of rumour spreading and gossiping than in information-giving language games. However, the norms for sharing cannot be fixed in advance, as these emerge from the interaction between the platforms' explicit rules, local norms established by user practices, and a meta-norm of sociality. This unpredictability does not leave us with a normative void, as an important user responsibility still remains, namely that of making the context of the sharing gesture explicit. If users clarify how their gestures of sharing are meant to be interpreted by others, they will implicitly assume responsibility for possible misunderstandings based on omissions, and the harms of shared misinformation can be diminished. Download paper

Informativeness and Epistemic Injustice in Explanatory Medical Machine Learning (under review)
Pozzi, G. and Durán, J. M. (under review). Informativeness and Epistemic Injustice in Explanatory Medical Machine Learning.

Automated Opioid Risk Scores: A Case for Machine Learning-Induced Epistemic Injustice in Healthcare (under review)
Pozzi, G. (under review). Automated Opioid Risk Scores: A Case for Machine Learning-Induced Epistemic Injustice in Healthcare.

Theme coordinators: Dr. Stefan Buijsman, Dr. Juan Durán
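The undercutting/rebutting distinction from "Spotting When Algorithms Are Wrong" above can be made concrete in code. The sketch below is illustrative only: the scoring rule, field names, and validated-group set are invented for the example and do not come from the paper; the point is merely that a defeater is information attached alongside an output, not a change to the output itself.

```python
# Illustrative sketch of "defeaters" accompanying a classifier's output.
# All field names and rules here are hypothetical.
from dataclasses import dataclass, field

KNOWN_VALIDATED_GROUPS = {"A"}  # hypothetical: groups the model was validated on

@dataclass
class Decision:
    risk_score: float
    defeaters: list = field(default_factory=list)

def classify(record: dict) -> Decision:
    # Stand-in scoring rule for an opaque risk classifier.
    decision = Decision(risk_score=0.9 if record.get("flagged") else 0.1)

    # Undercutting defeater: evidence that the system is unreliable for
    # this specific case, without claiming the output is wrong.
    if record.get("group") not in KNOWN_VALIDATED_GROUPS:
        decision.defeaters.append(
            ("undercutting", "model not validated for this group"))

    # Rebutting defeater: independent information that the output is wrong.
    if record.get("manually_cleared") and decision.risk_score > 0.5:
        decision.defeaters.append(
            ("rebutting", "case cleared in an earlier manual review"))
    return decision

d = classify({"flagged": True, "group": "B", "manually_cleared": True})
# The risk score stays high, but both kinds of defeater are attached,
# signalling the epistemically dependent user not to take it at face value.
```

The design choice mirrors the paper's suggestion: rather than making the system transparent, it surfaces case-specific reliability information so that users can tell when not to trust an output.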

PowerWeb Annual Conference

Annual TU Delft PowerWeb Institute Conference | June 20, 2022 | TU Delft X

This year's annual conference was successfully organized by the faculty of Industrial Design Engineering. This year's edition focused on the role research and industry can take in bridging the societal and technical aspects of the energy transition: in other words, how can we ensure that the people and organizations affected by the energy transition and/or who use the technology are more involved in the decision-making process? And how can we ensure the technology reflects the needs of the people who use it?

We had two powerful keynotes by Dr. David Shipworth and Dr. Caroline Nevejan. Dr. Shipworth discussed the importance of ensuring decisions around the energy transition are inclusive, the need to engage policy makers, and the case for choosing the energy transition path that will deliver and meet the needs of people over the most "optimal" and "efficient" one. Dr. Nevejan discussed how open information is a critical aspect in catalysing the energy transition and the need for synergy between research and policy making.

Session Summaries

The session 'Exploring the Social and Technical Challenges of the Energy Grid in Amsterdam South-East' gave the audience insight into the project objectives and the technical and social challenges we encounter while designing the LIFE platform. During an interactive panel discussion, people asked about the panel's approach to various topics such as the role of energy poverty in our research, targets for participation, and the estimated value of the platform for the local community.

During the session 'Integrating Sustainable Energy in Households', a diverse group of people with both organisational and technical backgrounds came together to develop ideas on how to integrate sustainable energy in households. 
Novel and insightful ideas focused on involving residents and end-users more closely in renovation and design processes, and on adapting feedback, education and financial incentives to foster participation by all.

In the 'Energising with data' workshop, the technical audience took up the challenge of discussing energy and sustainability in the school context. Everyone actively engaged in putting themselves in the shoes of 15-year-old pupils, who are the new citizens, (future) ambassadors, makers, and decision-makers. We reflected on the challenge of engaging teenagers in the energy challenges through data: a new source of information to rely on, and a new source of information to be critical of (not blindly trusting often noisy and incomplete sensor data). We asked: how does this view of the building contrast with the inhabitants' feelings and perceptions, and with dissension among the inhabitants (e.g. children in the classroom)? Energy literacy also comes with data literacy. Data can reveal how small an impact an individual action has in contrast with the (high) effort it requires, which can discourage and backfire (people jumping to the conclusion that what they do can't change anything).

During the workshop 'Applying co-creation in the Dutch Energy Transition', two groups of around 10 people experienced that creativity feeds on interaction! The interactive part consisted of a pressure-cooker brainstorm session on 'designing for social contagion'. In smaller groups, thoughts were exchanged, ideas were built upon, new ones were co-created, and people drew, wrote and spoke enthusiastically about them. Best idea of the session? ;-) Energy-related tattoos as an introverted (non-pushy) way to spread the message!

The flash talks covered several areas of the energy transition and involved both researchers and practitioners, covering research outcomes and idea pitches. Discussions followed after the flash talks in the foyer during the extended break. 
Video recordings of the sessions can be found here. Visual notes can be found here.

Go to the TU Delft PowerWeb Institute home page


Students Amos Yusuf, Mick Dam & Bas Brouwer winners of Mekel Prize 2024

Master students Amos Yusuf (ME faculty) and Mick Dam (EEMCS faculty) and graduate Bas Brouwer have won the Mekel Prize 2024 for the best extra-scientific activity at TU Delft: the development of an initiative that brings master students into the classroom to teach the sciences to younger generations. The prize was ceremonially awarded by Prof. Tim van den Hagen on 13 November, after the Van Hasselt Lecture at the Prinsenhof in Delft. They received a statue of Professor Jan Mekel and €1,500 to spend on their project.

Insights into climate change are being openly doubted. Funding for important educational efforts and research is being withdrawn. Short clips – so-called "reels" – on YouTube and TikTok threaten to oversimplify complex political and social problems. AI fakes befuddle what is true and what is not. The voices of science that contribute to those discussions with modesty, careful argument and scepticism are drowned in noise. This poses a threat for universities like TU Delft, which strive to increase student numbers, benefit from diverse student populations, and aim to pass on their knowledge and scientific virtues to the next generation. It is, therefore, alarming that student enrolments in Bachelor and Master programmes at TU Delft have declined in the past year.

Students in front of the class

The project aims to make the sciences more appealing to the next generation. The founders identified the problem that pupils tend to miss out on the opportunity of entering a higher-education trajectory in the beta sciences because they have a wrong picture of such an education: in their minds, they depict it as boring and dry. In his pilot lecture at the Stanislas VMBO in Delft, Amos Yusuf successfully challenged this image. He shared his enthusiasm for the field of robotics and presented himself as a positive role model to the pupils. In return, the excitement of the high school students is palpable in the videos and pictures from the day. 
The spark of science fills their eyes.

Bas Brouwer and Mick Dam are the founders of NUVO, the platform that facilitates the engagement of master students in high school education in Delft. Their efforts offer TU Delft master students a valuable learning moment: by sharing insights from their fields with high school pupils in an educational setting, our students can identify their own misunderstandings of their subject, learn to speak in front of non-scientific audiences, and peek into education as a field of work they themselves might not have considered.

An extraordinary commitment

According to the Mekel jury, the project scored well on all the criteria (risk mitigation, inclusiveness, transparency and societal relevance). However, it was the extraordinary commitment of Amos, who was fully immersed during his master project, and the efforts of Brouwer and Dam in bringing together teaching and research, which is integral to academic culture, that made the project stand out.

About the Mekel Prize

The Mekel Prize is awarded to the most socially responsible research project or extra-scientific activity (e.g. the founding of an NGO or organization, an initiative, or the realization of an event or other impactful project) by an employee or group of employees of TU Delft – projects that showcase in an outstanding fashion that they have been committed from the beginning to relevant moral and societal values, and have been aware of and tried to mitigate as much as possible, in innovative ways, the risks involved in their research. The award recognizes such efforts and wants to encourage the responsible development of science and technology at TU Delft in the future.

For more information

About the project: https://www.de-nuvo.nl/video-robotica-pilot/
About the Mekel Prize: https://www.tudelft.nl/en/tpm/our-faculty/departments/values-technology-and-innovation/sections/ethics-philosophy-of-technology/mekel-prize

New catheter technology promises safer and more efficient treatment of blood vessels

Each year, more than 200 million catheters are used worldwide to treat vascular diseases, including heart disease and artery stenosis. When navigating into blood vessels, friction between the catheter and the vessel wall can cause major complications. With an innovative new catheter technology, Mostafa Atalla and colleagues can switch the friction from full grip to completely slippery with the flick of a switch. Their design improves the safety and efficiency of endovascular procedures. The findings have been published in IEEE Transactions on Medical Robotics and Bionics.

Catheter with variable friction

The prototype of the new catheter features advanced friction-control modules that precisely regulate the friction between the catheter and the vessel wall. The friction is modulated via ultrasonic vibrations, which pressurise the thin fluid layer between the surfaces. This innovative variable-friction technology makes it possible to switch between low friction, for smooth navigation through the vessel, and high friction, for optimal stability during the procedure. In a proof of concept, Atalla and his team show that the prototype significantly reduces friction: on average by 60% on rigid surfaces and by 11% on soft surfaces. Experiments on animal aortic tissue confirm the promising results of this technology and its potential for medical applications.

Fully assembled catheters

The researchers tested the prototype in friction experiments on different tissue types. They are also investigating how the technology can be applied to other procedures, such as bowel interventions.

More information

Publication DOI: 10.1109/TMRB.2024.3464672
Toward Variable-Friction Catheters Using Ultrasonic Lubrication | IEEE Journals & Magazine | IEEE Xplore
Mostafa Atalla: m.a.a.atalla@tudelft.nl
Aimee Sakes: a.sakes@tudelft.nl
Michaël Wiertlewski: m.wiertlewski@tudelft.nl

Would you like to know more and/or attend a demonstration of the prototype? Please contact Fien Bosman, press officer Health TU Delft: f.j.bosman@tudelft.nl / 0624953733
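The switchable behaviour can be illustrated with a small back-of-the-envelope sketch (a hypothetical Python illustration using only the averages quoted above; the function name and the baseline friction value are assumptions for illustration, not taken from the paper):

```python
def effective_friction(mu_baseline: float, ultrasound_on: bool, reduction: float) -> float:
    """Effective friction coefficient with the ultrasonic module on or off.

    `reduction` is the fractional friction drop when the module is active,
    e.g. 0.60 on rigid surfaces or 0.11 on soft tissue (the averages reported
    above); `mu_baseline` is an illustrative baseline coefficient.
    """
    return mu_baseline * (1.0 - reduction) if ultrasound_on else mu_baseline

# Illustrative baseline coefficient of 0.5 on a rigid surface:
slippery = effective_friction(0.5, True, 0.60)   # module on: smooth navigation
grip = effective_friction(0.5, False, 0.60)      # module off: stability
print(slippery, grip)
```

The point of the sketch is only the on/off contrast: the same contact surface yields two very different effective friction levels depending on whether the ultrasonic module is active.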

A key solution to grid congestion

On behalf of the TU Delft PowerWeb Institute, researchers Kenneth Brunninx and Simon Tindemans are handing over a position paper to the Dutch Parliament on 14 November 2024, with a possible solution to the major grid-capacity problems that are increasingly cropping up in the Netherlands.

The Netherlands is unlikely to meet the 2030 climate targets, and one of the reasons for this is that large industry cannot switch to electricity fast enough, partly because of increasingly frequent problems with grid capacity and grid congestion. In all likelihood, those problems will actually increase this decade before they can decrease, the researchers argue.

The solution offered by the TU Delft PowerWeb Institute researchers is the 'flexible backstop'. With a flexible backstop, the current capacity of the power grid can be used more efficiently without sacrificing safety or reliability. A flexible backstop is a safety mechanism that automatically and quickly reduces the amount of electricity that an electric unit can draw from the grid (such as an electric charging station or a heat pump) or deliver to it (such as a PV installation). It is a small device, connected to or built into an electrical unit, that 'communicates' with the distribution network operator. In case of extreme stress on the network, the network operator sends a signal to the device to limit the amount of power. Germany recently introduced a similar system for electric charging stations. The backstop would be activated only in periods of acute congestion and could help prevent the last-resort measure, which is cutting off electricity to users.

'Upgrading the electricity network remains essential, but in practice it will take years. So there is a need for short-term solutions that can be integrated into long-term planning.
We, the members of the TU Delft PowerWeb Institute, call on the government, network operators and the regulator to explore the flexible backstop as an additional grid-security measure,' they said. The entire position paper can be read here.

Kenneth Brunninx is Associate Professor at the Faculty of Technology, Policy and Management, where he uses quantitative models to evaluate energy policy and market design with the aim of reducing CO2 emissions. Simon Tindemans is Associate Professor in the Intelligent Electrical Power Grids group at the Faculty of Electrical Engineering, Mathematics and Computer Science. His research interests include uncertainty and risk management for power grids.

The TU Delft PowerWeb Institute is a community of researchers who are investigating how to make renewable energy systems reliable, future-proof and accessible to everyone.
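The operator-to-device signalling described above can be sketched in a few lines of code (a minimal hypothetical Python model of the concept; the class name, method names and power values are assumptions for illustration, not part of the position paper):

```python
from typing import Optional

class BackstopDevice:
    """Minimal model of a flexible-backstop controller: during acute
    congestion the distribution network operator signals a temporary
    power cap; otherwise the unit runs at its normal rating."""

    def __init__(self, rated_kw: float):
        self.rated_kw = rated_kw              # normal maximum power of the unit
        self.cap_kw: Optional[float] = None   # active operator-imposed limit

    def receive_signal(self, cap_kw: float) -> None:
        """Operator requests a temporary limit (never above the rating)."""
        self.cap_kw = min(cap_kw, self.rated_kw)

    def clear_signal(self) -> None:
        """Congestion has passed: remove the limit."""
        self.cap_kw = None

    def allowed_power(self, requested_kw: float) -> float:
        """Power the unit may actually draw (or deliver) right now."""
        limit = self.rated_kw if self.cap_kw is None else self.cap_kw
        return min(requested_kw, limit)

charger = BackstopDevice(rated_kw=11.0)  # e.g. a home EV charging station
charger.receive_signal(3.7)              # acute congestion: cap to 3.7 kW
print(charger.allowed_power(11.0))       # capped during congestion
charger.clear_signal()
print(charger.allowed_power(11.0))       # back to the full rating
```

The sketch captures why the mechanism is a "backstop" rather than a disconnection: the unit keeps operating at reduced power instead of being cut off entirely.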

25-year celebration of formal collaboration between Delft University of Technology and the University of Campinas

On 25 October 2024 we celebrated 25 years of formal collaboration between Delft University of Technology and the University of Campinas. What began as a project to exchange a few chemical-engineering students has grown into a broad, multifaceted academic collaboration, which has accumulated 24 joint research projects (worth more than 20 million euros), 16 advanced courses and 15 dual-degree PhD graduates.

Patricia Osseweijer, TU Delft Ambassador to Brazil, explained: "We are proud to show and reflect, on this special day, on the added value created by our joint activities. The lessons we learned demonstrate that, in particular, continuity of funds and availability for exchanges has contributed to joint motivation and building trust, which created strong relations. This is the foundation for academic creativity and high-level achievements."

The programme presented showcases of dual-degree projects, research activities and education. Participants discussed future objectives and new fields of attention, and agreed on the next steps to maintain and strengthen the foundation of strong relations. Telma Franco, Professor at UNICAMP, shared that "joint education and research have substantially benefitted the students; we see that in the jobs they landed," while UNICAMP's Professor Gustavo Paim Valenca confirmed that "we are keen to extend our collaboration to more engineering disciplines to contribute jointly to global challenges." Luuk van der Wielen highlighted that "UNICAMP and TU Delft provide valuable complementary expertise as well as infrastructures to accelerate research and innovation. Especially our joint efforts in public-private partnerships bring great assets."

To ensure future activities, both university boards have launched a unique joint programme for international academic leadership. This 7-month programme will accommodate 12 young professors, six from each university. The programme began on 4 November 2024 in Delft, the Netherlands.